Particle Physics Planet


September 19, 2018

The n-Category Cafe

A Categorical Look at Random Variables

guest post by Mark Meckes

For the past several years I’ve been thinking on and off about whether there’s a fruitful category-theoretic perspective on probability theory, or at least a perspective with a category-theoretic flavor.

(You can see this MathOverflow question by Pete Clark for some background, though I started thinking about this question somewhat earlier. The fact that I’m writing this post should tell you something about my attitude toward my own answer there. On the other hand, that answer indicates something of the perspective I’m coming from.)

I’m a long way from finding such a perspective I’m happy with, but I have some observations I’d like to share with other n-Category Café patrons on the subject, in hopes of stirring up some interesting discussion. The main idea here was pointed out to me by Tom, who I pester about this subject on an approximately annual basis.

Let’s first dispense with one rather banal observation. Let \mathbf{Prob} be the category whose objects are probability spaces (measure spaces with total measure 1), and whose morphisms are almost-everywhere-equality equivalence classes of measure-preserving maps. Then:

Probability theory is not about the category \mathbf{Prob}.

To put it a little less (ahem) categorically, probability theory is not about the category \mathbf{Prob}, in the sense that group theory or topology might be said (however incompletely) to be about the categories \mathbf{Grp} or \mathbf{Top}. The most basic justification of this assertion is that isomorphic objects in \mathbf{Prob} are not "the same" from the point of view of probability theory. Indeed, the distributions of

  1. a uniform random variable in an interval,
  2. an infinite sequence of independent coin flips, and
  3. Brownian motion \{B_t : t \ge 0\}

are radically different things in probability theory, but they’re all isomorphic to each other in \mathbf{Prob}!

Anyway, as any probabilist will tell you, probability theory isn’t about probability spaces. The fundamental "objects" in probability theory are actually the morphisms of \mathbf{Prob}: random variables.

Typically, a random variable is defined to be a measurable map X:\Omega \to E, where (\Omega, \mathbb{P}) is a probability space and E is, a priori, just a measurable space. (I’m suppressing \sigma-algebras here, which indicates how modest the scope of this post is: serious probability theory works with multiple \sigma-algebras on a single space.) But every random variable canonically induces a probability measure on its codomain, its distribution \mu = X_\# \mathbb{P} defined by

\mu(A) = \mathbb{P}(X^{-1}(A))

for every measurable A \subseteq E. This formula is precisely what it means to say that X:(\Omega, \mathbb{P}) \to (E, \mu) is measure-preserving.
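On a finite probability space the pushforward is just a sum over the fibres of X. Here is a tiny computational sketch (the four-point space \Omega, the map X and the measure are invented purely for illustration):

```python
from fractions import Fraction

# A four-point probability space Omega, a map X: Omega -> E, and the
# pushforward mu(A) = P(X^{-1}(A)), computed by summing P over each fibre of X.
P = {'w1': Fraction(1, 4), 'w2': Fraction(1, 4),
     'w3': Fraction(1, 4), 'w4': Fraction(1, 4)}   # a measure on Omega (made up)
X = {'w1': 'a', 'w2': 'a', 'w3': 'b', 'w4': 'c'}   # a random variable X: Omega -> E (made up)

def pushforward(P, X):
    """Return the distribution X_# P as a dict on the codomain of X."""
    mu = {}
    for omega, p in P.items():
        mu[X[omega]] = mu.get(X[omega], Fraction(0)) + p
    return mu

print(pushforward(P, X))   # {'a': Fraction(1, 2), 'b': Fraction(1, 4), 'c': Fraction(1, 4)}
```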

In probability theory, the only questions we’re allowed to ask about X are about its distribution. On the other hand, two random variables which have the same distribution are not thought of as "the same random variable" in the same way that isomorphic groups are "the same group". In fact, a probabilist’s favorite trick is to replace a random variable X:\Omega \to E with another random variable X':\Omega' \to E which has the same distribution, but which is in some way easier to analyze. For example, X' may factor in a useful way as the composition of two morphisms in \mathbf{Prob} (although probabilists don’t normally write about things in those terms).

Now let’s fix a codomain E. Then there is a category \mathbf{R}(E) whose objects are E-valued random variables; if X and X' are two random variables with domains (\Omega, \mathbb{P}) and (\Omega', \mathbb{P}') respectively, then a morphism from X to X' is a measure-preserving map f:\Omega \to \Omega' such that X' \circ f = X. (Figuring out how to typeset the commutative triangle here is more trouble than I feel like going to.)

In this case

X_{\#} \mathbb{P} = (X' \circ f)_{\#} \mathbb{P} = X'_{\#} f_{\#} \mathbb{P} = X'_{\#} \mathbb{P}',

so if a morphism X \to X' exists, then X and X' have the same distribution. Moreover, if \mu is a probability measure on E, there is a canonical random variable with distribution \mu, namely, the identity map \mathrm{Id}_E on (E,\mu), and any random variable X with distribution \mu itself defines a morphism from the object X to that object \mathrm{Id}_E.

It follows that the family \mathbf{R}(E, \mu) of random variables with distribution \mu is a connected component of \mathbf{R}(E). (I don’t know whether the construction of \mathbf{R}(E) from \mathbf{Prob} has a standard name, but I have learned that its connected components \mathbf{R}(E, \mu) are slice categories of \mathbf{Prob}.)

Now a typical theorem in probability theory starts by taking a family of random variables X_i : \Omega \to E_i all defined on the same domain \Omega. That’s no problem in this picture: this is the same as a single random variable X : \Omega \to \prod_i E_i. (There’s also always some kind of assumption about the relationships among the X_i — independence, for example, though that’s only the simplest such relationship that people think about — I don’t (yet!) have any thoughts to share about expressing those relationships in terms of the picture here.)

The next thing is to cook up a new random variable defined on \Omega by applying some measurable function F:\prod_i E_i \to E. A prototype is the function (well, family of functions)

F: \mathbb{R}^n \to \mathbb{R}, \qquad (x_1, \ldots, x_n) \mapsto \sum_{i=1}^n x_i,

which has a starring role in all the classics: the Weak and Strong Laws of Large Numbers, Central Limit Theorem, Law of the Iterated Logarithm, Cramér’s Theorem, etc. This fits nicely into this picture, too: any measurable map F:E \to E' induces a functor F_!:\mathbf{R}(E) \to \mathbf{R}(E') in an obvious way (a morphism in \mathbf{R}(E) given by a measure-preserving f:\Omega \to \Omega' is mapped to a morphism in \mathbf{R}(E') given by the same f — that point is probably obvious to most of the people here, but I needed to think about it a bit to convince myself that F_! really is a functor).
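At the level of distributions, F_! simply pushes a measure on E forward along F. A small sketch of the prototype above with n = 2, using two fair dice as a stand-in for the product space (my example, not the post’s):

```python
from fractions import Fraction
from itertools import product

# E = {1,...,6} x {1,...,6} with the uniform product measure (two independent
# fair dice), F(x1, x2) = x1 + x2, and the pushforward F_# mu on the totals.
mu = {e: Fraction(1, 36) for e in product(range(1, 7), repeat=2)}

def F(e):
    x1, x2 = e
    return x1 + x2                 # the prototype map: sum of the coordinates

F_mu = {}                          # F_# mu = the distribution of X1 + X2
for e, p in mu.items():
    F_mu[F(e)] = F_mu.get(F(e), Fraction(0)) + p

print(F_mu[7])                     # 1/6: the most likely total
print(sum(F_mu.values()))          # 1: F_# mu is again a probability measure
```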

Finally, as I said, a probabilist may go about understanding the distribution of the random variable F(X) — that is, the object F_!(X) of \mathbf{R}(E) — by instead working with another object Y in the same connected component of \mathbf{R}(E). Both the assumptions on X and the structure of F may be used to help cook up Y.

This is quite different from any category-theoretic perspective I’ve ever encountered in, say, algebra or topology, but my ignorance of those fields is broad and deep. If anyone finds this kind of category-theoretic picture familiar, I’d love to hear about it!

One last observation here is that I believe (I haven’t tried writing out all the details) that the mappings

E \mapsto \mathbf{R}(E), \qquad F \mapsto F_!

define a functor \mathbf{Meas} \to \mathbf{Cat}. I have no idea what, if anything, this observation may do for probability theory.

by leinster (Tom.Leinster@gmx.com) at September 19, 2018 01:45 AM

September 18, 2018

Christian P. Robert - xi'an's og

Le Monde puzzle [#1066]

The second Le Monde mathematical puzzle in the new competition is sheer trigonometry:

When in the above figures both triangles ABC are isosceles and the brown segments are all of length 25cm, find the angle in A and the value of DC², respectively.

This could have been solved by R coding the various possible angles of the three segments beyond BC until the isosceles property is met, but it went much faster by pen and paper, leading to an angle of π/9 in the first case and a square of 1250 in the second case.

by xi'an at September 18, 2018 10:18 PM

The n-Category Cafe

What is Applied Category Theory?

Tai-Danae Bradley has a new free “booklet” on applied category theory. It grew out of the workshop Applied Category Theory 2018, and I think it makes a great complement to Fong and Spivak’s book Seven Sketches and my online course based on that book:

Abstract. This is a collection of introductory, expository notes on applied category theory, inspired by the 2018 Applied Category Theory Workshop, and in these notes we take a leisurely stroll through two themes (functorial semantics and compositionality), two constructions (monoidal categories and decorated cospans) and two examples (chemical reaction networks and natural language processing) within the field.

Check it out!

by john (baez@math.ucr.edu) at September 18, 2018 07:47 PM

John Baez - Azimuth

What is Applied Category Theory?

Tai-Danae Bradley has a new free “booklet” on applied category theory. It was inspired by the workshop Applied Category Theory 2018, which she attended, and I think it makes a great complement to Fong and Spivak’s book Seven Sketches and my online course based on that book:

• Tai-Danae Bradley, What is Applied Category Theory?

Abstract. This is a collection of introductory, expository notes on applied category theory, inspired by the 2018 Applied Category Theory Workshop, and in these notes we take a leisurely stroll through two themes (functorial semantics and compositionality), two constructions (monoidal categories and decorated cospans) and two examples (chemical reaction networks and natural language processing) within the field.

Check it out!

by John Baez at September 18, 2018 07:02 PM

Emily Lakdawalla - The Planetary Society Blog

The September Equinox 2018 Issue of The Planetary Report Is Out!
With my first issue of The Planetary Report as editor, I am taking the magazine open-access. Return to Mercury features articles by Elsa Montagnon on BepiColombo and by Long Xiao on the Chang'e-4 and -5 landers.

September 18, 2018 03:39 PM

Peter Coles - In the Dark

Autumn in Maynooth

I was struck by the contrasting mixture of colours in St Joseph’s Square as I walked through just now so thought I’d share a quick snap taken on my phone. The trees are still in full leaf, but the Virginia Creeper that covers the facades of many of the buildings in the quadrangle is turning blood red, a sure sign that autumn is arriving. It is still rather warm, if a little breezy, however, because we’re in the airflow behind tropical storm Helene. This part of Ireland missed the worst of it, despite a rather worrying weather forecast for yesterday:

As it happens, the worst of the rainfall missed this part of Ireland but it has dumped a lot of rain to the West, from Galway to Donegal.

Among the other signs of autumn are the large number of groups of new students being guided around the campus during this orientation week. Lectures don’t begin until Monday 24th September but the first-year students are already here and trying their best to settle in before teaching starts.

by telescoper at September 18, 2018 03:10 PM

Emily Lakdawalla - The Planetary Society Blog

Voyage to Mercury
Elsa Montagnon details the challenges of delivering BepiColombo’s two spacecraft from Earth to Mercury.

September 18, 2018 03:05 PM

Emily Lakdawalla - The Planetary Society Blog

Farside Landing and Nearside Sample Return
Long Xiao previews two ambitious Chinese lunar missions, one of which will make the first-ever landing on the far side of the Moon.

September 18, 2018 03:05 PM

Emily Lakdawalla - The Planetary Society Blog

Chandrayaan-2
Sriram Bhiravarasu anticipates India’s 2019 lunar venture with an orbiter, lander, and rover.

September 18, 2018 03:05 PM

Emily Lakdawalla - The Planetary Society Blog

Why Start A Space Program?
Casey Dreier observes the genesis of a new space agency in Australia, and how The Planetary Society helped make it happen.

September 18, 2018 03:04 PM

CERN Bulletin

Cine club

Wednesday 26 September 2018 at 18:15
CERN Main Auditorium

The CERN CineClub invites everyone to a screening of the film

Almost Nothing – CERN Experimental City
directed by Anna De Manincor

Prix Interreligieux - Visions du Réel, Nyon, 2018

Once upon a time, there was a citadel, established on the Franco-Swiss border, called CERN. According to Alvaro de Rujula, one of the eminences of the scientific fellowship that “runs the shop”, dare we say it, physicists are more “modest” than architects, building their cathedrals underground. A little later on, another one claims, without laughing, that “the most momentous discoveries were probably made in the cafeteria.” We get it, Anna de Manincor and the ZimmerFrei collective (La Beauté c'est ta tête, VdR, 2014) run counter to the films habitually devoted to this institution. They leave the “Higgs boson” or “gravitational waves” in the background to spotlight the humans who are working away in this maze of corridors, cables and high-precision metal parts, seeking “almost nothing.” A sharp, and sometimes humorous, look at the life of this rather particular community, a real human and social experiment whose participants seem to invent the operating rules every day.

(by Emmanuel Chicon https://www.visionsdureel.ch/en/film/almost-nothing)

The screening will be followed by a discussion with the director Anna De Manincor and the producer Serena Gramizzi.

https://almostnothing.eu/#

September 18, 2018 12:09 PM

Peter Coles - In the Dark

The Open Journal of Astrophysics and NASA/ADS

As I’m working on the Open Journal of Astrophysics project quite a lot these days I’m probably going to be boring a lot of people with updates, but there you go.

First, astro.theog.org has now been transferred to the new platform here. It doesn’t look like much now but there is a lot sitting behind the front page, and we will get the new site up and running when we’ve got various administrative things approved.

Another thing I forgot to mention in yesterday’s post concerns the SAO/NASA Astrophysics Data System which (for the uninitiated) is a Digital Library portal for researchers in astronomy and physics, operated by the Smithsonian Astrophysical Observatory (SAO) under a NASA grant. The ADS maintains three bibliographic databases containing more than 14.0 million records covering publications in Astronomy and Astrophysics, Physics, and (of course) the arXiv e-prints. In addition to maintaining its bibliographic corpus, the ADS tracks citations and other information, which means that it is an important tool for evaluating publication impact.

One of the issues that we’ve had with the handful of papers published so far by the Open Journal of Astrophysics is that, because it is an overlay journal, the primary location of the papers published is on the arXiv, alongside other content that has not been refereed. Up until recently searching ADS for `All bibliographic sources’ would return OJA papers, but `All refereed articles’ would not. I’m glad to say that with the help of the ADS team, this issue has now been resolved and OJA papers now show up as refereed articles, as demonstrated by this example:

I know that this was a particular worry for early career researchers who might have been deterred from submitting to the Open Journal of Astrophysics by the fear that their publications would not look like refereed publications. They need worry no longer!

Incidentally, that image also shows that citations are tracked through the CROSSREF system, in which OJA papers are registered when published and issued with a DOI. All this happens behind the scenes from the point of view of an author, but it involves a lot of interesting machinery! A discussion on facebook the other day led to an academic publisher stating that one of the greatest costs of running a journal was registering publications for citation tracking. In fact it costs a maximum of $1 per article (see here). The industry is relying on academics not understanding how cheap things actually are.

 

by telescoper at September 18, 2018 10:47 AM

CERN Bulletin

GAC-EPA

The GAC organises monthly sessions with individual interviews, held on the last Tuesday of each month, except in July and December.

The next session will take place on:

Tuesday 25 September from 1.30 pm to 4.00 pm
Staff Association meeting room

The following sessions will take place on Tuesday 30 October and Tuesday 27 November 2018.

The sessions of the Pensioners’ Group are open to beneficiaries of the Pension Fund (including surviving spouses) and to all those approaching retirement. We warmly invite the latter to join our group by obtaining the necessary documents from the Staff Association.

Information: http://gac-epa.org/
Contact form: http://gac-epa.org/Organization/ContactForm/ContactForm-fr.php

September 18, 2018 10:09 AM

CERN Bulletin

Interfon

Cooperative open to international civil servants. We welcome you to discover the advantages and discounts negotiated with our suppliers either on our website www.interfon.fr or at our information office located at CERN, on the ground floor of bldg. 504, open Monday through Friday from 12.30 to 15.30.

September 18, 2018 10:09 AM

CERN Bulletin

Offer for our members

Our partner FNAC is offering all our members a 10% discount on all iMacs and MacBooks.

This offer is valid from September 12 to September 30, 2018 upon presentation of your Staff Association membership card.

 

September 18, 2018 10:09 AM

CERN Bulletin

Exhibition

Le Chronoscope
Images of Time, Fragments of Time

Thomas Desbrières

From 24 September to 5 October
CERN Meyrin, Main Building

Passionate about both science and art, the artist Thomas Desbrières creates fractal art works (digital art: images generated from mathematical formulas computed by computer).

The Chronoscope is an imaginary scientific instrument whose purpose is to capture images of Time. It asks the questions: What might Time look like? And how could we observe it?

The fractal works are like the images resulting from this experiment. They are original visions of Time obtained with the Chronoscope. They show complex mechanisms and cycles multiplied to infinity. Their appearance evokes strange clocks, whose golden hues also recall the brass of precision instruments.

http://www.senarius.fr

For more information and access requests: staff.association@cern.ch  |  +41 22 767 28 19

September 18, 2018 09:09 AM

September 17, 2018

Christian P. Robert - xi'an's og

Nature tidbits

In the Nature issue of July 19 that I read in the plane to Singapore, there was a whole lot of interesting entries, from various calls expressing deep concern about the anti-scientific stance of the Trump administration, like cutting funds for environmental regulation and restricting freedom of communication (EPA) or naming a non-scientist at the head of NASA and other agencies, or again restricting the protection of species, to a testimony of an Argentinian biologist in front of a congressional committee about the legalisation of abortion (which failed at the level of the Argentinian senate later this month), to a DNA-like version of a neural network, to Louis Chen from NUS being mentioned in a career article about the importance of planning well in advance one’s retirement to preserve academia links and manage a new position or even career. Which is what happened to Louis as he stayed on at NUS after the mandatory retirement age and is now emeritus and still engaged in research. (The article made me wonder however how the cases therein had been selected.) It is actually most revealing to see how different countries approach the question of retirement for academics: in France, for instance, one is essentially forced to retire and, while there exist emeritus positions, it is extremely difficult to find funding.

“Louis Chen was technically meant to retire in 2005. The mathematician at the National University of Singapore was turning 65, the university’s official retirement age. But he was only five years into his tenure as director of the university’s new Institute for Mathematical Sciences, and the university wanted him to stay on. So he remained for seven more years, stepping down in 2012. Over the next 18 months, he travelled and had knee surgery, before returning in summer 2014 to teach graduate courses for a year.”

And [yet] another piece on the biases of AIs. Reproducing earlier papers discussed here, with one obvious reason being that the learning corpus is not representative of the whole population, maybe survey sampling should become compulsory in machine learning training degrees. And yet another piece on why protectionism is (also) bad for the environment.

by xi'an at September 17, 2018 10:18 PM

Peter Coles - In the Dark

The Open Journal of Astrophysics – Call for Editors

It’s nice to see that my recent post on the Open Journal of Astrophysics has been attracting some interest. The project is developing rather swiftly right now and it seems the main problems we have to deal with are administrative rather than technical. Fingers crossed anyway.

I thought I’d do a follow-up re-iterating a request in that recent post. As you will be aware, the Open Journal of Astrophysics is an arXiv overlay journal. We apply a simple criterion to decide whether a paper is on a suitable topic for publication, namely that if it is suitable for the astro-ph section of the arXiv then it is suitable for the Open Journal of Astrophysics. This section of the arXiv, which is rather broad, is divided thuswise:

  1. astro-ph.GA – Astrophysics of Galaxies.
    Phenomena pertaining to galaxies or the Milky Way. Star clusters, HII regions and planetary nebulae, the interstellar medium, atomic and molecular clouds, dust. Stellar populations. Galactic structure, formation, dynamics. Galactic nuclei, bulges, disks, halo. Active Galactic Nuclei, supermassive black holes, quasars. Gravitational lens systems. The Milky Way and its contents
  2. astro-ph.CO – Cosmology and Nongalactic Astrophysics.
    Phenomenology of early universe, cosmic microwave background, cosmological parameters, primordial element abundances, extragalactic distance scale, large-scale structure of the universe. Groups, superclusters, voids, intergalactic medium. Particle astrophysics: dark energy, dark matter, baryogenesis, leptogenesis, inflationary models, reheating, monopoles, WIMPs, cosmic strings, primordial black holes, cosmological gravitational radiation
  3. astro-ph.EP – Earth and Planetary Astrophysics.
    Interplanetary medium, planetary physics, planetary astrobiology, extrasolar planets, comets, asteroids, meteorites. Structure and formation of the solar system
  4. astro-ph.HE – High Energy Astrophysical Phenomena.
    Cosmic ray production, acceleration, propagation, detection. Gamma ray astronomy and bursts, X-rays, charged particles, supernovae and other explosive phenomena, stellar remnants and accretion systems, jets, microquasars, neutron stars, pulsars, black holes
  5. astro-ph.IM – Instrumentation and Methods for Astrophysics.
    Detector and telescope design, experiment proposals. Laboratory Astrophysics. Methods for data analysis, statistical methods. Software, database design
  6. astro-ph.SR – Solar and Stellar Astrophysics.
    White dwarfs, brown dwarfs, cataclysmic variables. Star formation and protostellar systems, stellar astrobiology, binary and multiple systems of stars, stellar evolution and structure, coronas. Central stars of planetary nebulae. Helioseismology, solar neutrinos, production and detection of gravitational radiation from stellar systems.

The expertise of the current Editorial Board is concentrated in the area of (2), and a bit of (5), but we would really like to add some editors from different areas (i.e. 1, 3, 4 and 6).  We  would therefore really appreciate volunteers from other areas of astrophysics (especially stars/exoplanets, etc). If you’re interested please let me know. Please also circulate this call as widely as possible among your colleagues so we can recruit the necessary expertise. The journal is entirely free (both to publish in and to read) and we can’t afford to pay a fee, but there is of course the prestige of being in at the start of a publishing revolution of cosmic proportions!

If you join the Editorial Board we will invite you to an online training session to show you how the platform works.

Thank you in advance for your interest in this project, and I look forward to hearing from you.

 

 

 

by telescoper at September 17, 2018 04:42 PM

September 16, 2018

Christian P. Robert - xi'an's og

QuanTA

My Warwick colleagues Nick Tawn [who also is my most frequent accomplice to running, climbing and currying in Warwick!] and Gareth Roberts have just arXived a paper on QuanTA, a new parallel tempering algorithm that Nick designed during his thesis at Warwick, which he defended last semester. Parallel tempering targets in parallel several powered (or power-tempered) versions of the target distribution, with proposed switches between adjacent targets. An improved version transforms the local values before operating the switches. Ideally, the transform should be the composition of the cdf and inverse cdf, but this is impossible. Linearising the transform is feasible, but does not agree with multimodality, which calls for local transforms. Which themselves call for the identification of the different modes. In QuanTA, these are identified by N parallel runs of the standard algorithm, or rather N/2 to avoid dependence issues, and K-means estimates. The paper covers the construction of an optimal scaling of temperatures, in that the difference between the temperatures is scaled [with order 1/√d] so that the acceptance rate for swaps is 0.234. Which in turn induces a practical if costly calibration of the temperatures, especially when the size of the jump depends on the current temperature. However, this cost issue is addressed in the paper, resorting to the acceptance rate as a proxy for effective sample size and the acceptance rate over run time to run the comparison with regular parallel tempering, leading to strong improvements in the mixture examples examined in the paper. The use of machine learning techniques like K-means or more involved solutions is a promising thread in this exciting area of tempering, where intuition about high temperatures can actually be misleading. Because using the wrong scale means missing the area of interest, which is not the mode!
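For readers who have never seen parallel tempering, here is a minimal textbook-style sketch of the plain (untransformed) algorithm on a bimodal toy target. This is not QuanTA itself, and the inverse temperatures, step sizes and target are arbitrary choices made for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # toy bimodal target: equal mixture of N(-5, 1) and N(+5, 1), unnormalised
    return np.logaddexp(-0.5 * (x - 5.0) ** 2, -0.5 * (x + 5.0) ** 2)

betas = np.array([1.0, 0.5, 0.25, 0.12, 0.06])   # inverse temperatures; beta = 1 is the target
x = np.zeros(len(betas))                          # current state of each tempered chain
step, n_iter = 1.0, 20_000
cold = []

for _ in range(n_iter):
    # within-chain random-walk Metropolis moves on each powered target pi(x)**beta
    for k, beta in enumerate(betas):
        prop = x[k] + step / np.sqrt(beta) * rng.normal()
        if np.log(rng.uniform()) < beta * (log_target(prop) - log_target(x[k])):
            x[k] = prop
    # propose swaps of states between adjacent temperatures
    for k in range(len(betas) - 1):
        log_alpha = (betas[k] - betas[k + 1]) * (log_target(x[k + 1]) - log_target(x[k]))
        if np.log(rng.uniform()) < log_alpha:
            x[k], x[k + 1] = x[k + 1], x[k]
    cold.append(x[0])                             # keep the beta = 1 chain only

# roughly 1/2 if the swaps let the cold chain visit both modes
print("fraction of cold-chain samples in the right-hand mode:", np.mean(np.array(cold) > 0))
```

QuanTA modifies the swap move by transforming the states (locally, per mode) before exchanging them, as described above.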

by xi'an at September 16, 2018 10:18 PM

ZapperZ - Physics and Physicists

Want To Locate The Accelerometer In Your Smartphone?
Rhett Allain has a simple, fun rotational physics experiment that you can perform on your smartphone to locate the position of the accelerometer in that device, all without opening it.

Your smart phone has a bunch of sensors in it. One of the most common is the accelerometer. It's basically a super tiny mass connected with springs (not actual springs). When the phone accelerates in a particular direction, some of these springs will get compressed in order to make the tiny test mass also accelerate. The accelerometer measures this spring compression and uses that to determine the acceleration of the phone. With that, it will know if it is facing up or down. It also can estimate how far you move and use this along with the camera to find out where real world objects are, using ARKit.

So, we know there is a sensor in the phone—but where is it located? I'm not going to take apart my phone; everyone knows I'll never get it back together after that. Instead, I will find out the location by moving the phone in a circular path. Yes, moving in a circle is a type of acceleration.

I'll let you read the article to know what he did, and what you can do yourself. 
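For context, the physics behind the trick is just circular motion: a point at distance r from the rotation axis undergoes centripetal acceleration a = ω²r, so spinning the phone at a known rate and reading the accelerometer gives r. A back-of-the-envelope sketch with made-up numbers (not Allain’s data):

```python
import math

# Spin the phone about a known pivot at angular speed omega; the accelerometer
# reports the centripetal acceleration a = omega**2 * r at its own location,
# so r = a / omega**2 is its distance from the rotation axis.
omega = 2 * math.pi * 1.5    # rotation rate: 1.5 revolutions per second [rad/s]
a_measured = 6.2             # hypothetical accelerometer reading [m/s^2]

r = a_measured / omega**2    # distance of the sensor from the axis [m]
print(f"sensor sits about {100 * r:.1f} cm from the rotation axis")
```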

Now, the only thing left is to verify the result. Someone needs to open an iPhone 7 and confirm the location of the accelerometer (do we even know what it looks like in such a device?). Any volunteers? :)

Zz.

by ZapperZ (noreply@blogger.com) at September 16, 2018 03:15 PM

Peter Coles - In the Dark

A Decade In The Dark!

When I logged onto WordPress yesterday I received a message that it was the 10th anniversary of my registration with them as a blogger, which is when I took my first step into the blogosphere; that was way back on 15th September 2008.

I actually wrote my first post on the day I registered but unfortunately I didn’t really know what I was doing on my first day at blogging – no change there, then –  and I didn’t actually manage to figure out how to publish this earth-shattering piece. It was only after I’d written my second post that I realized that the first one wasn’t actually visible to the general public because I hadn’t pressed the right buttons, so the two appear in the wrong order in my archive. Anyway, that confusion is the reason why I usually take 16th September as this blog’s real anniversary.

I’d like to take this opportunity to send my best wishes, and to thank, everyone who reads this blog, however occasionally. According to the WordPress stats, I’ve got readers from all round the world, including  the Vatican!

If you’re interested in statistics then, as of 14.00 Irish Summer Time today, I have published 4,225 blog posts, not counting about 20 that I wrote but have not yet published; I’ll probably save these for my memoirs. These posts have received 3,688,023 hits altogether; I get an average of about 1200 per day. This varies in a very erratic fashion from day to day, but the annual average has been fairly constant over the last several years. The greatest number of hits I have received in a single day is 8,864 (at the peak of the BICEP2 controversy). Some of the most popular posts have not been about science at all, including my rant about Virgin Media and a post about the last episode of Inspector Morse.

There have been 30,372 comments published on here and  2,213,145 rejected by my filters. The vast majority of the rejected comments were from automated spam bots, but a small number have been removed for various violations, usually for abuse of some kind. And, yes, I do get to decide what is published. It is my blog!

While I am on the subject of comments, I’ll just repeat here the policy stated on the home page of this blog:

Feel free to comment on any of the posts on this blog but comments may be moderated; anonymous comments and any considered by me to be abusive will not be accepted. I do not necessarily endorse, support, sanction, encourage, verify or agree with the opinions or statements of any information or other content in the comments on this site and do not in any way guarantee their accuracy or reliability.

It does mean a lot to me to know that there are people who find my ramblings on this `shitty wordpress blog’ interesting enough to look at, or even read, and sometimes even to come back for more, so I’d like to take this opportunity to send my best wishes to all those who follow this blog and especially those who take the trouble to comment on it in such interesting and unpredictable ways!

The last decade has been eventful, to say the least, both personally and professionally. I started blogging not long after I’d moved into my house in Pontcanna, Cardiff. Since then I moved to Sussex, and then back to Cardiff, and now to Ireland. More importantly we’ve seen the discovery of the Higgs Boson and gravitational waves, both of which resulted in Nobel Prizes, as did the studies of high-redshift supernovae. The Planck mission was launched, did its stuff, and came to a conclusion in this decade too. Science has moved forward, even if there are many things in this world that seem to be going backwards.

I don’t know how long I’ll keep blogging – vitae summa brevis spem nos vetat incohare longam – but I’ve got no immediate plans to stop.

by telescoper at September 16, 2018 01:38 PM

John Baez - Azimuth

The 5/8 Theorem

This is a well-known, easy group theory result that I just learned. I would like to explain it more slowly and gently, and I hope memorably, than I’ve seen it done.

It’s called the 5/8 theorem. Randomly choose two elements of a finite group. What’s the probability that they commute? If it exceeds 62.5%, the group must be abelian!

This was probably known for a long time, but the first known proof appears in a paper by Erdös and Turan.

It’s fun to lead up to this proof by looking for groups that are “as commutative as possible without being abelian”. This phrase could mean different things. One interpretation is that we’re trying to maximize the probability that two randomly chosen elements commute. But there are two simpler interpretations, which will actually help us prove the 5/8 theorem.

How big can the center be?

How big can the center of a finite group be, compared to the whole group? If a group G is abelian, its center, say Z, is all of G. But let’s assume G is not abelian. How big can |Z|/|G| be?

Since the center is a subgroup of G, we know by Lagrange’s theorem that |G|/|Z| is an integer. To make |Z|/|G| big we need this integer to be small. How small can it be?

It can’t be 1, since then |Z| = |G| and G would be abelian. Can it be 2?

No! This would force G to be abelian, leading to a contradiction! The reason is that the center is always a normal subgroup of G, so G/Z is a group of size |G/Z| = |G|/|Z|. If this is 2 then G/Z has to be \mathbb{Z}/2. But this is generated by one element, so G must be generated by its center together with one element. This one element commutes with everything in the center, obviously… but that means G is abelian: a contradiction!

For the same reason, |G|/|Z| can’t be 3. The only group with 3 elements is \mathbb{Z}/3, which is generated by one element. So the same argument leads to a contradiction: G is generated by its center and one element, which commutes with everything in the center, so G is abelian.

So let’s try |G|/|Z| = 4. There are two groups with 4 elements: \mathbb{Z}/4 and \mathbb{Z}/2 \times \mathbb{Z}/2. The second, called the Klein four-group, is not generated by one element. It’s generated by two elements! So it offers some hope.

If you haven’t studied much group theory, you could be pessimistic. After all, \mathbb{Z}/2 \times \mathbb{Z}/2 is still abelian! So you might think this: “If G/Z \cong \mathbb{Z}/2 \times \mathbb{Z}/2, the group G is generated by its center and two elements which commute with each other, so it’s abelian.”

But that’s false: even if two elements of G/Z commute with each other, this does not imply that the elements of G mapping to these elements commute.

This is a fun subject to study, but the best way for us to see this right now is to actually find a nonabelian group G with G/Z \cong \mathbb{Z}/2 \times \mathbb{Z}/2. The smallest possible example would have Z \cong \mathbb{Z}/2, and thus 8 elements, and indeed this works!

Namely, we’ll take G to be the 8-element quaternion group

Q = \{ \pm 1, \pm i, \pm j, \pm k \}

where

i^2 = j^2 = k^2 = -1
i j = k, \quad j k = i, \quad k i = j
j i = -k, \quad k j = -i, \quad i k = -j

and multiplication by -1 works just as you’d expect, e.g.

(-1)^2 = 1

You can think of these 8 guys as the unit quaternions lying on the 4 coordinate axes. They’re the vertices of a 4-dimensional analogue of the octahedron. Here’s a picture by David A. Richter, where the 8 vertices are projected down from 4 dimensions to the vertices of a cube:

The center of Q is Z = \{ \pm 1 \}, and the quotient Q/Z is the Klein four-group, since if we mod out by \pm 1 we get the group

\{1, i, j, k\}

with

i^2 = j^2 = k^2 = 1
i j = k, \quad j k = i, \quad k i = j
j i = k, \quad k j = i, \quad i k = j

So, we’ve found a nonabelian finite group with 1/4 of its elements lying in the center, and this is the maximum possible fraction!

How big can the centralizer be?

Here’s another way to ask how commutative a finite group G can be, without being abelian. Any element g \in G has a centralizer C(g), consisting of all elements that commute with g.

How big can C(g) be? If g is in the center of G, then C(g) is all of G. So let’s assume g is not in the center, and ask how big the fraction |C(g)|/|G| can be.

In other words: how large can the fraction of elements of G that commute with g be, without it being everything?

It’s easy to check that the centralizer C(g) is a subgroup of G. So, again using Lagrange’s theorem, we know |G|/|C(g)| is an integer. To make the fraction |C(g)|/|G| big, we want this integer to be small. If it’s 1, everything commutes with g. So the first real option is 2.

Can we find an element of a finite group that commutes with exactly 1/2 the elements of that group?

Yes! One example is our friend the quaternion group Q. Each non-identity element commutes with exactly half the elements. For example, i commutes only with its own powers: 1, i, -1, -i.

So we’ve found a finite group with a non-central element that commutes with 1/2 the elements in the group, and this is the maximum possible fraction!

What’s the maximum probability for two elements to commute?

Now let’s tackle the original question. Suppose G is a nonabelian group. How can we maximize the probability for two randomly chosen elements of G to commute?

Say we randomly pick two elements g,h \in G. Then there are two cases. If g is in the center of G it commutes with h with probability 1. But if g is not in the center, we’ve just seen it commutes with h with probability at most 1/2.

So, to get an upper bound on the probability that our pair of elements commutes, we should make the center Z \subset G as large as possible. We’ve seen that |Z|/|G| is at most 1/4. So let’s use that.

Then with probability 1/4, g commutes with all the elements of G, while with probability 3/4 it commutes with at most 1/2 the elements of G.

So, the probability that g commutes with h is at most

\frac{1}{4} \cdot 1 + \frac{3}{4} \cdot \frac{1}{2} = \frac{2}{8} + \frac{3}{8} = \frac{5}{8}

Even better, all these bounds are attained by the quaternion group Q. 1/4 of its elements are in the center, while every element not in the center commutes with 1/2 of the elements! So, the probability that two elements in this group commute is 5/8.

So we’ve proved the 5/8 theorem and shown we can’t improve this constant.
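Since everything here is finite, all three quaternion-group facts — a center of size 2, non-central centralizers of size 4, and commuting probability 5/8 — can be checked by brute force. A small sketch, with Q encoded as (sign, unit) pairs:

```python
from itertools import product
from fractions import Fraction

# The 8-element quaternion group Q = {±1, ±i, ±j, ±k}, elements stored as (sign, unit).
# TABLE gives the (sign, unit) of each product of two units, e.g. i*j = +k, j*i = -k.
TABLE = {
    ('1', '1'): (1, '1'), ('1', 'i'): (1, 'i'), ('1', 'j'): (1, 'j'), ('1', 'k'): (1, 'k'),
    ('i', '1'): (1, 'i'), ('i', 'i'): (-1, '1'), ('i', 'j'): (1, 'k'), ('i', 'k'): (-1, 'j'),
    ('j', '1'): (1, 'j'), ('j', 'i'): (-1, 'k'), ('j', 'j'): (-1, '1'), ('j', 'k'): (1, 'i'),
    ('k', '1'): (1, 'k'), ('k', 'i'): (1, 'j'), ('k', 'j'): (-1, 'i'), ('k', 'k'): (-1, '1'),
}
Q = [(s, u) for s in (1, -1) for u in ('1', 'i', 'j', 'k')]

def mul(a, b):
    (sa, ua), (sb, ub) = a, b
    s, u = TABLE[(ua, ub)]
    return (sa * sb * s, u)

center = [g for g in Q if all(mul(g, h) == mul(h, g) for h in Q)]
print(Fraction(len(center), len(Q)))                    # 1/4: the center is {±1}

for g in Q:
    if g not in center:                                 # non-central elements commute
        c = sum(mul(g, h) == mul(h, g) for h in Q)      # with exactly half of Q
        assert Fraction(c, len(Q)) == Fraction(1, 2)

commuting = sum(mul(g, h) == mul(h, g) for g, h in product(Q, Q))
print(Fraction(commuting, len(Q) ** 2))                 # 5/8
```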

Further thoughts

I find it very pleasant that the quaternion group is “as commutative as possible without being abelian” in three different ways. But I shouldn’t overstate its importance!

I don’t know the proof, but the website groupprops says the following are equivalent for a finite group G:

• The probability that two elements commute is 5/8.

• The inner automorphism group of G has 4 elements.

• The inner automorphism group of G is \mathbb{Z}/2 \times \mathbb{Z}/2.

Examining the argument I gave, it seems the probability 5/8 can only be attained if

|Z|/|G| = 1/4

|C(g)|/|G| = 1/2 for every g \notin Z.

So apparently any finite group with inner automorphism group \mathbb{Z}/2 \times \mathbb{Z}/2 must have these other two properties as well!

There are lots of groups with inner automorphism group \mathbb{Z}/2 \times \mathbb{Z}/2. Besides the quaternion group, there’s one other 8-element group with this property: the group of rotations and reflections of the square, also known as the dihedral group of order 8. And there are six 16-element groups with this property: they’re called the groups of Hall–Senior class two. And I expect that as we go to higher powers of two, there will be vast numbers of groups with this property.

You see, the number of nonisomorphic groups of order 2^n grows alarmingly fast. There’s 1 group of order 2, 2 of order 4, 5 of order 8, 14 of order 16, 51 of order 32, 267 of order 64… but 49,487,365,422 of order 1024. Indeed, it seems ‘almost all’ finite groups have order a power of two, in a certain asymptotic sense. For example, 99% of the roughly 50 billion groups of order ≤ 2000 have order 1024.

Thus, if people trying to classify groups are like taxonomists, groups of order a power of 2 are like insects.

In 1964, the amusingly named pair of authors Marshall Hall Jr. and James K. Senior classified all groups of order 2^n for n \le 6. They developed some powerful general ideas in the process, like isoclinism. I don’t want to explain it here, but it involves the quotient G/Z that I’ve been talking about. So, though I don’t understand much about this, I’m not completely surprised to read that any group of order 2^n has commuting probability 5/8 iff it has ‘Hall–Senior class two’.

There’s much more to say. For example, we can define the probability that two elements commute not just for finite groups but also compact topological groups, since these come with a god-given probability measure, called Haar measure. And here again, if the group is nonabelian, the maximum possible probability for two elements to commute is 5/8!

There are also many other generalizations. For example Guralnick and Wilson proved:

• If the probability that two randomly chosen elements of G generate a solvable group is greater than 11/30 then G itself is solvable.

• If the probability that two randomly chosen elements of G generate a nilpotent group is greater than 1/2 then G is nilpotent.

• If the probability that two randomly chosen elements of G generate a group of odd order is greater than 11/30 then G itself has odd order.

The constants are optimal in each case.

I’ll just finish with two questions I don’t know the answer to:

• For exactly what set of numbers p \in (0,1] can we find a finite group where the probability that two randomly chosen elements commute is p? If we call this set S we’ve seen

S \subseteq (0,5/8] \cup \{1\}

But does S contain every rational number in the interval (0,5/8], or just some? Just some, in fact—but which ones? It should be possible to make some progress on this by examining my proof of the 5/8 theorem, but I haven’t tried at all. I leave it to you!

• For what properties P of a finite group is there a theorem of this form: “if the probability of two randomly chosen elements generating a subgroup of G with property P exceeds some value p, then G must itself have property P”? Is there some logical form a property can have, that will guarantee the existence of a result like this?

References

Here is a nice discussion, where I learned some of the facts I mentioned, including the proof I gave:

• MathOverflow, 5/8 bound in group theory.

Here is an elementary reference, free online if you jump through some hoops, which includes the proof for compact topological groups, and other bits of wisdom:

• W. H. Gustafson, What is the probability that two group elements commute?, American Mathematical Monthly 80 (1973), 1031–1034.

For example, if G is finite simple and nonabelian, the probability that two elements commute is at most 1/12, a bound attained by \mathrm{A}_5.

Here’s another elementary article:

• Desmond MacHale, How commutative can a non-commutative group be?, The Mathematical Gazette 58 (1974), 199–202.

If you get completely stuck on Puzzle 1, you can look here for some hints on what values the probability of two elements to commute can take… but not a complete solution!

The 5/8 theorem seems to have first appeared here:

• P. Erdös and P. Turán, On some problems of a statistical group-theory, IV, Acta Math. Acad. Sci. Hung. 19 (1968) 413–435.

by John Baez at September 16, 2018 04:27 AM

September 15, 2018

Peter Coles - In the Dark

Mahler: Symphony No. 2 (`Resurrection’) at the National Concert Hall, Dublin

Last night I had the pleasure of attending the opening performance of the new season of the RTÉ National Symphony Orchestra at the National Concert Hall in Dublin. As well as being the first concert of the season, it was also my first ever visit to the National Concert Hall. To mark the occasion we were in the presence of the Uachtarán na hÉireann, Michael D Higgins, and his wife Sabena. By `occasion’ I of course mean the first concert of the season, rather than my first visit to the NCH. After the concert the audience were all treated to a glass of Prosecco on the house too!

I’ve done quite a few reviews from St David’s Hall in Cardiff over the years, so before writing about the music I thought I’d compare the venues a little. The National Concert Hall was built in 1865 and soon after its construction it was converted into the main building of University College Dublin. It was converted to a concert venue when UCD moved out of the city centre, and fully re-opened in 1981. It is a bit smaller than St David’s – capacity 1200, compared with 2000 – and does not have such a fine acoustic, but it is a very nice venue with a distinctive and decidedly more intimate vibe all of its own. I had a seat in the centre stalls, which cost me €40, which is about the same as one would expect to pay in Cardiff.

The NCH is situated close to St Stephen’s Green, which is a 15 minute walk from Pearse Station or a 30 minute walk from Connolly (both of which are served by trains from Maynooth). The weather was pleasant yesterday evening so I walked rather than taking the bus or Luas from Connolly. I passed a number of inviting hostelries on the way but resisted the temptation to stop for a pint in favour of a glass of wine in the NCH bar before the performance.

Anyway, last night’s curtain-raiser involved just one piece – but what a piece! – Symphony No.2 (“Resurrection”) by Gustav Mahler. This is a colossal work, in five movements, that lasts about 90 minutes. The performance involved not only a huge orchestra, numbering about a hundred musicians, but also two solo vocalists and a sizeable choir (although the choir does not make its entrance until the start of the long final movement, about an hour into the piece). The choir in this case was the RTÉ Philharmonic Choir. At various points trumpets and/or French horns moved offstage into the wings and, for the finale, into the gallery beside the choir.

About two years ago I blogged about the first performance I had ever heard of the same work. Hearing it again in a different environment in no way diminished its impact.

Stunning though the finale undoubtedly was, I was gripped all the way through, from the relatively sombre but subtly expressive opening movement, through the joyously dancing second that recalls happier times, the third which is based on a Jewish folk tune and which ends in a shattering climax Mahler described as “a shriek of despair”, and the fourth which is built around a setting of one of the songs from Des Knaben Wunderhorn, sung beautifully by Jennifer Johnson (standing in wonderfully for Patricia Bardon, who was unfortunately indisposed). Jennifer Johnson has a lovely velvety voice very well suited to this piece, which seems more like a contralto part than a mezzo. The changing moods of the work are underlined by a tonality that shifts from minor to major and back again. All that was very well performed, but as I suspect is always the case in performances of this work, it was the climactic final movement – which lasts almost half an hour and is based on setting of a poem mostly written by Mahler himself, sung by Orla Boylan – that packs the strongest emotional punch.

The massed ranks of the RTÉ Philharmonic Choir (all 160 of them) weren’t called upon until this final movement, but as soon as they started to sing they made an immediate impact. As the symphony moved inexorably towards its climax the hairs on the back of my neck stood up in anticipation of a thrilling sound to come. I wasn’t disappointed. The final stages of this piece are sublime, jubilant, shattering, transcendent but, above all, magnificently, exquisitely loud! The Choir, responding in appropriate fashion to Mahler’s instruction to sing mit höchster Kraft, combined with the full force of the Orchestra and the fine concert organ of the NCH to create an overwhelming wall of radiant sound.

Mahler himself wrote of the final movement:

The increasing tension, working up to the final climax, is so tremendous that I don’t know myself, now that it is over, how I ever came to write it.

Well, who knows where genius comes from, but Mahler was undoubtedly a genius. People often say that his compositions are miserable, angst-ridden and depressing. I don’t find that at all. It’s true that this, as well as Mahler’s other great works, takes you on an emotional journey that is at times a difficult one. There are passages that are filled with apprehension or even dread. But without darkness there is no light. The ending of the Resurrection Symphony is all the more triumphant because of what has come before.

The end of the performance was greeted with rapturous applause (and a well-deserved standing ovation). Congratulations to conductor Robert Trevino, the soloists, choir and all the musicians for a memorable concert. On my way out after the Prosecco I picked up the brochure for the forthcoming season by the RTÉ National Symphony Orchestra, which runs until next May. I won’t be attending all the Friday-night concerts, but I will try to make as many as I can of the ones that don’t involve harpsichords.

Update: I hadn’t realised that the concert was actually broadcast on TV and then put on YouTube; here is a video of the whole thing:

by telescoper at September 15, 2018 02:32 PM

Jon Butterworth - Life and Physics

Rising up to the challenge: My Brexit plan
Even a stopped clock gives the right time twice a day. And the brexit ultras and associated careerists are correct that the so-called “Chequers” proposal is indeed “worse than status quo”. Damning indeed if you take “Marguerita Time” into consideration. … Continue reading

by Jon Butterworth at September 15, 2018 12:00 PM

September 14, 2018

ZapperZ - Physics and Physicists

Bismuthates Superconductors Appear To Be Conventional
A lot of people overlooked the fact that during the early days of the discovery of high-Tc superconductors, there was another "family" of superconductors beyond just the cuprates (i.e. those compounds having copper-oxide layers). These compounds are called bismuthates, where instead of having copper-oxide layers, they have bismuth-oxide layers. Otherwise, their crystal structures are similar to the cuprates.

They didn’t make that much of a noise at that time because Tc for this family of materials tends to be lower than that of the cuprates. And, even back then, there was already evidence that the bismuthate superconductors might be "boring", i.e. the results they produced looked like those of a conventional superconductor. This is supported by several experiments, including a tunneling experiment [1] that showed that the phonon density of states obtained from tunneling data matches the density of states obtained from neutron scattering.

Now it seems that there is more evidence that the bismuthates are conventional BCS superconductors, and it comes from an ARPES experiment[2]. There had been no ARPES measurements on bismuthates before this because it was a serious challenge to grow a single crystal of this compound large enough for such an experiment. But evidently, large-enough single crystals have now been synthesized.

In this latest experiment, they look at the band structure of this compound and extract, among other things, a strong electron-phonon coupling that matches the superconducting gap. This strongly indicates that phonons are the "glue" in the superconducting mechanism for this compound.
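
(A rough illustration of what "conventional" buys you, with numbers I picked for orientation rather than values taken from the papers cited below: for a phonon-mediated superconductor, the McMillan formula estimates Tc from the electron-phonon coupling λ, the Debye temperature and the Coulomb pseudopotential μ*, and a coupling of order one with plausible phonon scales already lands in the few-tens-of-kelvin range where the bismuthates sit.)

import math

def mcmillan_tc(theta_debye, lam, mu_star=0.1):
    """McMillan estimate of Tc (kelvin) for a conventional electron-phonon superconductor."""
    return (theta_debye / 1.45) * math.exp(-1.04 * (1 + lam) / (lam - mu_star * (1 + 0.62 * lam)))

# Illustrative inputs only (not fitted to the bismuthate data in Ref. [2]):
print(round(mcmillan_tc(theta_debye=300.0, lam=1.3, mu_star=0.1), 1))  # ~24 K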

So this adds another piece to the puzzle of the origin of superconductivity in the cuprates. Certainly, having a similar layered crystal structure does not rule out being a conventional superconductor. Yet the cuprates behave very differently when we perform tunneling and ARPES experiments, and they certainly have higher Tc's.

The mystery continues.

Zz.

[1] Q. Huang et al. Nature v347, p369 (1990).
[2] CHP. Wen et al. PRL  121, 117002 (2018). https://arxiv.org/abs/1802.10507

by ZapperZ (noreply@blogger.com) at September 14, 2018 04:12 PM

September 13, 2018

ZapperZ - Physics and Physicists

Human Eye Can Detect Cosmic Radiation
Well, not in the way you think.

I recently found this video of an appearance of astronaut Scott Kelly on The Late Show with Stephen Colbert. During this segment, he talked about the fact that when he went to sleep on the Space Station and closed his eyes, he occasionally detected flashes of light. He attributed them to cosmic radiation passing through his body, and his eyes in particular.

Check out the video at minute 3:30



My first inclination is to say that this is similar to how we detect neutrinos, i.e. the radiation particles interact with the medium in his eyes, either the vitreous or the medium that makes up the lens, and this interaction causes the ejection of a relativistic electron and, subsequently, Cerenkov radiation. The Cerenkov light is then detected by the eye.

Of course, there are other possibilities, such as the cosmic particle exciting an atom or molecule in a collision, which then emits light. But Scott Kelly mentioned that these flashes looked like fireworks. So my guess is that it is more of a very short cascade of events, and probably the Cerenkov-light scenario.

This, BTW, is almost exactly how we detect neutrinos, especially at Super-Kamiokande and the other neutrino detectors around the world. Neutrinos come into the detector, and those that interact with the medium inside the detector (water, for example) cause the emission of relativistic electrons that move faster than light travels inside that medium. This creates Cerenkov radiation, and typically the light is bluish white. It's the same glow you see if you look into a pool of fuel rods at a nuclear reactor.
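
(For a sense of the numbers, here is a quick back-of-the-envelope sketch I'm adding, assuming the eye's media have a refractive index close to water's, n ≈ 1.33-1.34; it just evaluates the condition beta > 1/n for an electron.)

import math

M_E = 0.511  # electron rest energy in MeV

def cerenkov_threshold(n):
    """Kinetic energy (MeV) at which an electron reaches beta = 1/n in a medium of index n."""
    gamma = 1.0 / math.sqrt(1.0 - 1.0 / n**2)
    return (gamma - 1.0) * M_E

for n in (1.33, 1.34):  # water and (assumed) vitreous humour
    print(f"n = {n}: Cerenkov threshold ~ {cerenkov_threshold(n):.2f} MeV")
# ~0.26 MeV: any electron knocked loose above that energy radiates Cerenkov light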

So there! You can detect something with your eyes closed!

Zz.

by ZapperZ (noreply@blogger.com) at September 13, 2018 01:51 PM

September 12, 2018

John Baez - Azimuth

Noether’s Theorem

 

I’ve been spending the last month at the Centre for Quantum Technologies, getting lots of work done. This Friday I’m giving a talk, and you can see the slides now:

• John Baez, Getting to the bottom of Noether’s theorem.

Abstract. In her 1918 paper, Noether formulated her theorem relating symmetries and conserved quantities in terms of Lagrangian mechanics. But if we want to make the essence of this relation seem as self-evident as possible, we can turn to a formulation in terms of Poisson brackets, which generalizes easily to quantum mechanics using commutators. This approach also gives a version of Noether’s theorem for Markov processes. The key question then becomes: when, and why, do observables generate one-parameter groups of transformations? This question sheds light on why complex numbers show up in quantum mechanics.
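
(To make the Poisson-bracket version concrete, here is a tiny sympy check of my own, not from the talk: for a particle in a central potential, the angular momentum has vanishing Poisson bracket with the Hamiltonian, so it is conserved, and the same observable generates rotations.)

from sympy import symbols, Function, diff, simplify

x, y, px, py = symbols('x y p_x p_y')
V = Function('V')
H = (px**2 + py**2) / 2 + V(x**2 + y**2)   # rotation-invariant Hamiltonian
L = x * py - y * px                        # angular momentum

def poisson(f, g):
    """Canonical Poisson bracket {f, g} in the coordinates (x, y, p_x, p_y)."""
    return (diff(f, x) * diff(g, px) - diff(f, px) * diff(g, x)
            + diff(f, y) * diff(g, py) - diff(f, py) * diff(g, y))

print(simplify(poisson(H, L)))         # 0: L is conserved
print(poisson(x, L), poisson(y, L))    # -y, x: L generates infinitesimal rotations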

At 5:30 on Saturday October 6th I’ll talk about this stuff at this workshop in London:

The Philosophy and Physics of Noether’s Theorems, 5-6 October 2018, Fischer Hall, 1-4 Suffolk Street, London, UK. Organized by Bryan W. Roberts (LSE) and Nicholas Teh (Notre Dame).

This workshop celebrates the 100th anniversary of Noether’s famous paper connecting symmetries to conserved quantities. Her paper actually contains two big theorems. My talk is only about the more famous one, Noether’s first theorem, and I’ll change my talk title to make that clear when I go to London, to avoid getting flak from experts. Her second theorem explains why it’s hard to define energy in general relativity! This is one reason Einstein admired Noether so much.

I’ll also give this talk at DAMTP—the Department of Applied Mathematics and Theoretical Physics, in Cambridge—on Thursday October 4th at 1 pm.

The organizers of the London workshop on the philosophy and physics of Noether’s theorems have asked me to write a paper, so my talk can be seen as the first step toward that. My talk doesn’t contain any hard theorems, but the main point—that the complex numbers arise naturally from wanting a correspondence between observables and symmetry generators—can be expressed in some theorems, which I hope to explain in my paper.

 

by John Baez at September 12, 2018 08:49 AM

September 10, 2018

Lubos Motl - string vacua and pheno

Why string theory is quantum mechanics on steroids
In many previous texts, most recently in the essay posted two blog posts ago, I expressed the idea that string theory may be interpreted as the wisdom of quantum mechanics that is taken really seriously – and that is applied to everything, including the most basic aspects of the spacetime, matter, and information.

People like me are impressed by the power of string theory because it really builds on quantum mechanics in a critical way to deduce things that would have been impossible before. On the contrary, morons typically dislike string theory because their mesoscopic peabrains are already stretched to the limit when they think about quantum mechanics – while string theory requires the stretching to go beyond these limits. Peabrains unavoidably crack and morons, writing things that are not even wrong about their trouble with physics, end up lost in math.

Other physicists have also made the statement – usually in less colorful ways – that string theory is quantum mechanics on steroids. It may be a good idea to explain what all of us mean – why string theory depends on quantum mechanics so much and why the power of quantum mechanics is given the opportunity to achieve some new amazing things within string theory.



At the beginning, I must say that the non-experts (including many pompous fools who call themselves "experts") usually overlook the whole "beef" of string theory just like they overlook the "beef" of quantum mechanics.

They imagine that quantum mechanics "is" a new equation, Schrödinger's equation, that plays the same role as Newton's, Maxwell's, Einstein's, and other equations. But quantum mechanics is much more – and much more universal and revolutionary – than another addition to classical physics. The actual heart of quantum mechanics is that the objects in its equations are connected to the observations very differently than the classical counterparts have been.

In the same way, they imagine that string theory is a theory of a new random dynamical object, a rubber band, and they imagine either downright classical vibrating strings or quantum mechanical strings that just don't differ from other quantum mechanical objects. But this understanding doesn't go beyond the (unavoidably oversimplified) name of string theory. If you analyze the composition of the term "string theory" as a linguist, you may think it's just a "theory of some strings". But that's not really the lesson one should draw. The real lesson is that if certain operations are done well with particular things, one ends with some amazing set of equations that may explain lots of things about the Universe.

Strings are exceptionally powerful – and only exceptionally powerful – at the quantum level. And the point of string theory isn't that it's a theory of another object. The point is that string theory is special among theories that would initially look "analogous".

Why is it special? And why is the magic of string theory so intertwined with quantum mechanics?

Discrete types of Nature's building blocks

For centuries, people knew something about chemistry. Matter around us is made of compounds, which are combinations of elements – such as hydrogen, helium, lithium, and I am sure you have memorized the rest. The number of types of atoms around us is finite. If arbitrarily large nuclei were allowed or stable, it would be countably infinite. But the number would still be discrete – not continuous.



For about a century, people have understood that the elements are probably made out of identical atoms. Each element has its own kind of atom. The concept of atoms was first promoted by Democritus in ancient Greece. But in chemistry, atoms became more specific.

Sometime in the late 19th and early 20th century, people began to understand that the atom isn't as indivisible as the Greek name suggested. It is composed of a dense nucleus and electrons that live somewhere around the nucleus. The nucleus was later found to be composed of protons and neutrons. The quantum mechanics of 1925 allowed physicists to study the quantized motion of electrons around the nuclei – and the motion of the electrons is the crucial thing that determines the energy levels of all atoms and, consequently, their chemical properties.

In the 1960s, protons and neutrons were found to be composite as well. First, matter was composed of atoms – different kinds of building blocks for every element. Later, matter was reduced to bound states of electrons, protons, and neutrons. Later still, protons and neutrons were replaced with quarks while electrons remained and became an important example of leptons, a group of fermions that is considered "on par" with quarks. The Standard Model deals with fermions, namely quarks and leptons, and bosons, namely the gauge bosons and the Higgs boson. The bosons are particularly capable of mediating forces between all the fermions (and bosons).

But even in this "nearly final" picture, there are still finitely many but relatively many species of elementary particles. Their number is slightly lower than the number of atoms that were considered indivisible a century earlier. But the difference isn't too big – neither qualitatively nor quantitatively. We have dozens of types of basic "atoms" or "elementary particles" and each of them must be equipped with some properties (yes, the properties of elementary particles in the Standard Model look more precise and fundamental than the properties of atoms of the elements used to). The different particle species amount to many independent assumptions about Nature that have to be added to the mix to build a viable theory.

Can we do better? Can we derive the species from a smaller number of assumptions – and from one kind of matter?

String theory – let's assume that Nature is described by a weakly-coupled heterotic string theory (closed strings only), to make it simpler – describes all elementary particles, bosons and fermions, as discrete energy eigenstates of a vibrating closed string. All interactions boil down to splitting and merging of these oscillating strings. Quantum mechanics is needed for the energy levels to be discrete – just like in the case of the energy levels of atoms. But for the first time, there is only one underlying building block in Nature, a vibrating closed string.

Like in atomic and molecular physics, quantum mechanics is needed for the discrete – finite or countable – number of species of small bound objects that exist.

Also, the number of spacetime dimensions was always arbitrary in classical physics. When constructing a theory, you had to assume a particular number – in other words, you had to add the coordinates \(t,x,y,z\) to your theory manually, one by one – and because the choice of the spacetime dimension was one of the first steps in the construction of any theory, there was no way to treat theories in different spacetime dimensions simultaneously, and there was consequently no conceptual way to derive the right spacetime dimension.

In string theory, it's different because even the spacetime dimensions – scalar fields on the world sheet – are "things" that contribute to various quantities (such as the conformal anomaly) and string theory is therefore capable of picking the preferred (critical) dimension of the spacetime. Even the individual spacetime dimensions are sort of made of the "same convertible stuff" within string theory. This would be unthinkable in classical physics.
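
(To see the arithmetic behind "the dimensions contribute to the conformal anomaly", here is a standard textbook counting written as a toy snippet of mine, not anything specific to this post: each spacetime coordinate is a world-sheet scalar with central charge 1, its superpartner adds 1/2, and the reparametrization ghosts contribute -26, shifted to -15 in the superstring; demanding that the total anomaly cancel fixes the critical dimension.)

def critical_dimension(c_per_dimension, c_ghosts):
    """Solve D * c_per_dimension + c_ghosts = 0 for the critical spacetime dimension D."""
    return -c_ghosts / c_per_dimension

print(critical_dimension(1.0, -26.0))          # bosonic string: 26
print(critical_dimension(1.0 + 0.5, -15.0))    # superstring: 10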

Prediction of gravity and other special forces: state-operator correspondence

String theory is not only the world's only known theory that allows Einsteinian gravity in \(D\geq 4\) to co-exist with quantum mechanics. String theory makes the Einsteinian gravity unavoidable. It predicts gravitons, spin-two particles that interact in agreement with the equivalence principle (all objects accelerate at the same acceleration in a gravitational field).

Why is it so? I gave an explanation e.g. in 2007. It is because a particular energy level of the vibrating closed string looks like a spin-two massless particle and it may be shown that the addition of a coherent state of such "graviton strings" into a spacetime is equivalent to the change of the classical geometry on which all other objects – all other vibrating strings – propagate. In this way, the dynamical curved geometry (or at least any finite change of it) may be literally built out of these gravitons.

(Similarly, the addition of strings in another mode, the photon mode, may have the effect that is indistinguishable from the modification of the background electromagnetic field and it is true for all other low-energy fields, too.)

Why is it so? What is the most important "miracle" or a property of string theory that allows this to work? I have picked the state-operator correspondence. And the state-operator correspondence is an entirely quantum mechanical relationship – something that wouldn't be possible in a classical world.

What is the state-operator correspondence? Consider a closed string. It has some Hilbert space. In terms of energy eigenstates, the Hilbert space has a zero mode described by the usual \(x_0,p_0\) degrees of freedom that make the string behave as a quantum mechanical particle. And then the strings may be stretched and the amount of vibrations may be increased by adding oscillators – excitations by creation operators of many quantum harmonic oscillators. So a basis vector in this energy basis of the closed string's Hilbert space is e.g.\[

\alpha^\kappa_{-2}\alpha^\lambda_{-3} \tilde \alpha^\mu_{-4} \tilde\alpha_{-1}^\nu \ket{0; p^\rho}.

\] What is this state? It looks like a momentum eigenstate of a particle whose spacetime momentum is \(p^\rho\). However, for a string, the "lightest" state with this momentum is just a ground state of an infinite-dimensional harmonic oscillator. We may excite that ground state with the oscillators \(\alpha\). These excitations are vaguely analogous to the kicking of the electrons in the atoms from the ground state to higher states, e.g. from \(1s\) to \(2p\). Those oscillators without a tilde are left-moving, those with a tilde are right-moving waves on the string. The (negative) subscript labels the number of periods along the closed string (which Fourier mode we pick). The superscript \(\kappa\) etc. labels in which transverse spacetime direction the string's oscillation is increased.

The total squared mass is set (in some string units) by the level, \(2+3 = 4+1 = 5\) here. The sum of the tilded and untilded subscripts must be equal (five, in this case) for the "beginning" of the closed string to be immaterial, technically because \(L_0-\tilde L_0 = 0\). Great. This was a basis of the closed string's Hilbert space.
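
(To make "discrete energy eigenstates" tangible, here is a small count I'm adding as an aside – a toy computation for a closed bosonic string with 24 transverse directions, so only loosely analogous to the heterotic example above: the number of left-moving oscillator states at level N is the coefficient of q^N in the product over n of (1-q^n) to the power -24, and level matching then pairs each left-moving state with a right-moving one of the same level.)

def oscillator_degeneracies(n_max, transverse=24):
    """Coefficients of prod_{n>=1} (1 - q^n)^(-transverse), i.e. oscillator state counts per level."""
    coeffs = [1] + [0] * n_max
    for n in range(1, n_max + 1):              # one oscillator frequency at a time
        for _ in range(transverse):            # one transverse direction at a time
            for k in range(n, n_max + 1):      # multiply the truncated series by 1/(1 - q^n)
                coeffs[k] += coeffs[k - n]
    return coeffs

left = oscillator_degeneracies(5)
print(left)                    # [1, 24, 324, 3200, 25650, 176256]
print([c * c for c in left])   # level-matched closed-string degeneracies at each mass level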

But we may also discuss the linear operators on that Hilbert space. They're constructed as functionals of \(X^\kappa(\sigma)\) and \(P^\kappa(\sigma)\) – I am omitting some extra fields (ghosts) that are needed in some descriptions, plus I am omitting a discussion about the difference between transverse and longitudinal directions of the excitations etc. – there are numerous technicalities you have to master when you study string theory at the expert level but they don't really affect the main message I want to convey.

OK, the Hilbert space is infinite-dimensional but its dimension \(d\) must be squared, to get \(d^2\), if you want to quantify the dimension of the space of matrices on that space, OK? A matrix is "larger" than a column vector. The number \(d^2\) looks much higher than \(d\) but nevertheless, for \(d=\infty\), as long as it is the right "stringy infinity", there exists a very natural one-to-one map between the states and the local operators. Let me immediately tell you what is the operator corresponding to the state above:\[

(\partial_z)^2 X^\kappa
(\partial_z)^3 X^\lambda
(\partial_{\bar z})^4 X^\mu
(\partial_{\bar z})^1 X^\nu
\exp(ip\cdot X(\sigma))

\] There should be some normal ordering here. All the four operators \(X^{\kappa,\lambda,\mu,\nu}\) are evaluated at the point of the string \(\sigma\), too. You see that the superscripts \(\kappa,\lambda,\mu,\nu\) were copied to natural places, the subscripts \(2,3,4,1\) were translated to powers of the world sheet derivative with respect to \(z\) or \(\bar z\), the holomorphic or antiholomorphic complex coordinates on the Euclideanized worldsheet. Tilded and untilded oscillators were translated to the holomorphic and antiholomorphic derivatives. An exponential of \(X^\rho\) operator was inserted to encode the ordinary "zero mode", particle-like total momentum of the string. And the total operator looks like some very general product of a function of \(X^\rho\) – the imaginary exponentials are a good basis, ask Mr Fourier why it is so – and its derivatives (of arbitrarily high orders). By the combination of the "Fourier basis wisdom" and a simple decomposition to monomials, every function of \(X^\rho\) and its worldsheet derivatives may be expanded to a sum of such terms.

The map between operators and states isn't quite one-to-one. We only considered "local operators at point \(\sigma\) of the string" where the value of \(\sigma\) remains unspecified. But the "number of possible values of \(\sigma\)" looks like a smaller factor than the factor \(d\) that distinguishes \(d,d^2\), the dimension of the Hilbert space and the space of operators, so the state-operator correspondence is "almost" a one-to-one map.

Such a map would be unthinkable in classical physics. In classical physics, a pure state would be a point in the phase space. On the other hand, the observable of classical physics is any coordinate on the phase space – such as \(x\) or \(p\) or \(ax^2+bp^2\). Is there a canonical way to assign a coordinate on the phase space – a scalar function on the phase space – to a particular point \((x,p)\) on that space? There's clearly none. These mathematical objects carry completely different information – and the choice of the coordinate depends on much more information. You would have a chance to map a probability distribution (another scalar function) on the phase space to a general coordinate on the phase space – except that the former is non-negative. But that map wouldn't be shocking in quantum mechanics, either, because the probability distribution is upgraded to a density matrix which is a similar matrix as the observables. The magic of string theory is that there is a dictionary between pure states and operators.

This state-operator correspondence is important – it is a part of the most conceptual proof of the string theory's prediction of the Einsteinian gravity. Why does the state-operator correspondence exist? What is the recipe underlying this magic?

Well, you can prove the state-operator correspondence by considering a path integral on an infinite cylinder. By conformal transformations – symmetries of the world sheet theory – the infinite cylinder may be mapped to the plane with the origin removed. The boundary conditions on the tiny removed circle at the origin (boundary conditions rephrased as a linear insertion in the path integral) correspond to a pure state; but the specification of these boundary conditions must also be equivalent to a linear action at the origin, i.e. a local operator.

Another "magic player" that appeared in the previous paragraph – a chain of my explanations – is the conformal symmetry. A solution to the world sheet theory works even if you conformally transform it (a conformal transformation is a diffeomorphism that doesn't change the angles even if you keep the old metric tensor field). Conformal symmetries exist even in purely classical field theories. Lots of the self-similar or scale-invariant "critical" behavior exhibits the conformal symmetry in one way or another. But what's cool about the combination of conformal symmetry and quantum mechanics is that a particular, fully specified pure state (and the ground state of a string or another object, e.g. the spacetime vacuum) may be equivalent to a particular state of the self-similar fog.

The combination of quantum mechanics and conformal symmetry is therefore responsible for many nontrivial abilities of string theory such as the state-operator correspondence (see above) or holography in the AdS/CFT correspondence. At the classical level, the conformal symmetry of the boundary theory is already isomorphic to the isometry of the AdS bulk. But that wouldn't be enough for the equivalence between "field theory" in spacetimes of different dimensions. Holography i.e. the ability to remove the holographic dimension in quantum gravity may only exist when the conformal symmetry exists within a quantum mechanical framework.

Dualities, unexpected enhanced symmetries, unexpected numerous descriptions

The first quantum mechanical X-factor of string theory is the state-operator correspondence and its consequences – either on the world sheet (including the prediction of forces mediated by string modes) or in the boundary CFT in the holographic AdS/CFT correspondence.

To make the basic skeleton of this blog post simple, I will only discuss the second class of stringy quantum muscles as one package – the unexpected symmetries, enhanced symmetries, and numerous descriptions. For some discussion of the enhanced symmetries, try e.g. this 2012 blog post.

In theoretical physicists' jargon, dualities are relationships between seemingly different descriptions that shouldn't represent the same physics but, for some deep, nontrivial, and surprising reasons, do: the physical behavior is completely equivalent, including quantitative properties such as the mass spectrum of some bound states etc.

The enhanced symmetries such as the \(SU(2)\) gauge group of the compactification on a self-dual circle (under T-duality) are a special example of dualities, too. The action of this \(SU(2)\), except for the simple \(U(1)\) subgroup, looks like some weird mixing of states with different winding numbers etc. Nothing like that could be a symmetry in classical physics. In particular, we need quantum mechanics to make the momenta quantized – just like the winding numbers (the integer saying how many times a string is wound around a non-contractible circle in the spacetime) are quantized – if we want to exchange momenta and windings as in T-duality. But within string theory, those symmetries become possible.
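
(Here is a tiny numerical sketch of that statement which I'm adding for illustration, with alpha' set to 1 and the oscillator contributions suppressed: the zero-mode part of the closed-string mass formula, (n/R)^2 + (wR)^2, is manifestly invariant under exchanging n and w while sending R to 1/R.)

import itertools

def zero_mode_mass_sq(n, w, R):
    """Momentum plus winding contribution to the closed-string mass^2 on a circle (alpha' = 1)."""
    return (n / R)**2 + (w * R)**2

R = 1.7
for n, w in itertools.product(range(-3, 4), repeat=2):
    assert abs(zero_mode_mass_sq(n, w, R) - zero_mode_mass_sq(w, n, 1.0 / R)) < 1e-9

print("spectrum unchanged under (n, w, R) -> (w, n, 1/R)")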

Many stringy vacua have larger symmetry groups than expected classically. You may identify 16+16 fermions on the heterotic string's world sheet and figure out that the theory will have an \(SO(16)\times SO(16)\) symmetry. But if you look carefully, the group is actually enhanced to an \(E_8\times E_8\). Similarly, a string theory on the Leech lattice could be expected to have a Conway group of symmetries – the isometry of such a lattice – but instead, you get a much cooler, larger, and sexier monster group of symmetries, the largest sporadic finite group.

Two fermions on the world sheet may be bosonized – they are equivalent to one boson. This is also a simple example of a "stringy duality" between two seemingly very different theories. The conformal symmetry and/or the relative scarcity of the number of possible conformal field theories may be used in a proof of this equivalence. Wess-Zumino-Witten models involving strings propagating on group manifolds are equivalent to other "simple" theories, too.

I don't want to elaborate on all the examples – their number is really huge and I have discussed many of them in the past. They may often be found in different chapters of string theory textbooks. Here, I want to emphasize their general spirit and where this spirit comes from. Quantum mechanics is absolutely essential for this phenomenon.

Why is it so? Why don't we see almost any of these enhanced symmetries, dualities, and equivalences between descriptions in classical physics? An easy answer is unlikely to be a rigorous proof but it may be rather apt, anyway. My simplest explanation would be: You don't see dualities and other things in classical physics because classical physics allows you the "infinite sharpness and resolution" which means that if two things look different, they almost certainly are different.

(Well, some symmetries do exist classically. For example, Maxwell's equations – with added magnetic monopoles or subtracted electric charges – have the symmetry of exchanging the electric fields with the magnetic fields, \(\vec E\to \vec B\), \(\vec B\to -\vec E\). This is a classical seed of the stringy S-dualities – and of stringy T-dualities if the electromagnetic duality is performed on a world sheet. But quantum mechanics is needed for the electromagnetic duality to work in the presence of particles with well-defined non-zero charges in the S-duality case; and in the presence of quantized stringy winding charges in the T-duality example because the T-dual momenta have to be quantized as well.)

On the other hand, quantum mechanics brings you the uncertainty principle which introduces some fog and fuzziness. The objects don't have sharp boundaries and shapes given by ordinary classical functions. Instead, the boundaries are fuzzy and may be interpreted in various ways. It doesn't mean that the whole theory is ill-defined. Quantum mechanics is completely quantitative and allows an arbitrarily high precision.

Instead, the quantum mechanical description often leads to a discrete spectrum and allows you to describe all the "invariant" properties of an energy-like operator by its discrete spectrum – by several or countably many eigenvalues. And there are many classical models whose quantization may yield the same spectrum. The spectrum – perhaps with an extra information package that is still relatively small – may capture all the physically measurable, invariant properties of the physical theory.

We may see the seed of this multiplicity of descriptions in basic quantum mechanics. The multiplicity exists because there are many – and many clever – unitary transformations on the Hilbert space and many bases and clever bases we may pick. The Fourier-like transformation from one basis to another makes the theory look very different than before. Such integral transformations would be very unnatural in classical physics because they would map a local theory to a non-local one. But in quantum mechanics, both descriptions may often be equally local.

OK, so string theory, due to its being a special theory that maximizes the number of clever ways in which the novel features of quantum mechanics are exploited, is the world champion in predicting things that were believed to be "irreducible assumptions whose 'why' questions could never be answered by science" and in allowing new perspectives on the same physical phenomena. String theory allows one to derive the spacetime dimension, the spectrum of elementary particles (given some discrete information about the choice of the compactification, a vacuum solution of the stringy equations), and it allows you to describe the same physics by bosonized or fermionized descriptions, descriptions related by S-dualities, T-dualities (including mirror symmetries), U-dualities, string-string dualities which exhibit enhanced gauge symmetries, holography as in the AdS/CFT correspondence, the matrix model description representing any system as a state of bound D-branes with off-diagonal matrix entries for each coordinate, the ER-EPR correspondence for black holes, and many other things.

If you feel why quantum mechanics smells like progress relative to classical physics, string theory should smell like progress relative to the previous quantum mechanical theories because the "quantum mechanical thinking" is applied even to things that were envisioned as independent classical assumptions. That's why string theory is quantum mechanics squared, quantum mechanics with an X-factor, or quantum mechanics on steroids. Deep thinkers who have loved the quantum revolution and who have looked into string theory carefully are likely to end up loving string theory, and those who have had psychological problems with quantum mechanics must have even worse problems with string theory.

Throughout the text above, I have repeatedly said that "quantum mechanics is applied to new properties and objects" within string theory. When I was proofreading my remarks, I felt uneasy about these formulations because the talk of "application" suggests that we just wanted to use quantum mechanics more universally and seriously, as if it were guaranteed that we could do so. But this isn't the case. The existence of string theory (where the deeper derivations of seemingly irreducible classical assumptions about the world may arise) is a sort of miracle, much like the existence of quantum mechanics itself. (Well, a miracle squared.) Before 1925, people didn't know quantum mechanics. They didn't know it was possible. But it was possible. Quantum mechanics was discovered as a highly constrained, qualitatively different replacement for classical physics that nevertheless agrees with the empirical data – and allows us to derive many more things correctly. In the same way, string theory is a replacement for local quantum field theories that works in almost the same way but not quite. Just like quantum mechanics allows us to derive the spectrum and states of atoms from a deeper starting point, string theory allows us to derive the properties of elementary particles and even the spacetime dimension and other things from a still deeper starting point. Like quantum mechanics itself, string theory feels like something important that wasn't invented or constructed by humans. It pre-existed and it was discovered.

by Luboš Motl (noreply@blogger.com) at September 10, 2018 03:33 PM

Jon Butterworth - Life and Physics

Ten years after the “Big Bang”
Ten years ago it was Wednesday, and at 10:28 in the morning Geneva time the first protons had just made the 27 km journey through the Large Hadron Collider at CERN. The media referred to it as “Big Bang Day”, and … Continue reading

by Jon Butterworth at September 10, 2018 07:20 AM

September 04, 2018

Clifford V. Johnson - Asymptotia

Beach Scene…


The working title for this was “when you forget to bring your camera on holiday...” but I know you won’t believe that's why I drew it! (This was actually a quick sketch done at the beach on Sunday, with a few tweaks added over dinner and some shadows added using iPad.)

I'm working toward doing the finish work on a commissioned illustration for a magazine (I'll tell you more about it when I can - check instagram, etc., for updates/peeks), and am finding my drawing skills very rusty -- so opportunities to do sketches, whenever I can find them, are very welcome.

-cvj Click to continue reading this post

The post Beach Scene… appeared first on Asymptotia.

by Clifford at September 04, 2018 09:08 PM

Jon Butterworth - Life and Physics

Anti-protons, Dark Matter and Helium
First post of “Postcards from the Energy Frontier” at the Cosmic Shambles Network. A new measurement at CERN tells us something about the way particles travel through interstellar space. Which in turn may help a satellite on the International Space … Continue reading

by Jon Butterworth at September 04, 2018 11:59 AM

September 01, 2018

Jon Butterworth - Life and Physics

Geneva Monopoly
Just returned from a couple of weeks at CERN. Saw this in Geneva and had to buy it – you can probably tell why. So, CERN is Oxford Street (which, for those of you who don’t know London, is much … Continue reading

by Jon Butterworth at September 01, 2018 08:56 AM

August 31, 2018

Lubos Motl - string vacua and pheno

Light Stückelberg bosons deported to the swampland
Conjecture would also imply that photons have to be strictly massless

I am rather happy about the following new hep-th preprint that adds 21 pages of somewhat nontrivial thoughts to some heuristic arguments that I always liked to spread. Just to be sure, Harvard's Matt Reece released his paper
Photon Masses in the Landscape and the Swampland
What's going on? Quantum field theory courses usually start with scalar fields and the Klein-Gordon Lagrangian. At some moment, people want to learn about some empirically vital quantum field, the electromagnetic field, whose Lagrangian is\[

{\mathcal L}_\gamma = -\frac 14 F_{\mu\nu} F^{\mu\nu}.

\] The action is invariant under the \(U(1)\) gauge transformations, which is why the 3+1 polarizations of the \(A_\mu\) field are reduced to the \((D-2)\), i.e. two, transverse physical polarizations of the spin-1 photon. Are there also massive spin-one bosons?



Yes, there are, e.g. W-bosons and Z-bosons that were discovered at CERN more than 30 years ago. The addition of masses naively corresponds to a simple mass term\[

{\mathcal L}_{\rm mass} = \frac {m^2}{2} A_\mu A^\mu.

\] A problem is that this term isn't gauge-invariant. So the theory must be defined without the gauge invariance and we can't consistently reduce the 3+1 polarizations (including one time-like polarization that has the wrong sign of the norm and would therefore lead to negative probabilities) to 3 (for a massless photon, 2) physical polarizations.

However, the Standard Model allows massive spin-1 bosons by the Higgs mechanism. The fundamental Lagrangian actually is gauge-invariant and the gauge-invariance-violating mass term above isn't included directly. Instead, it is generated from the Higgs field's vacuum expectation value \(\langle h\rangle = v\) through the interactions of the gauge field \(W_\mu\) or \(Z_\mu\) with the Higgs field, interactions that appear in the Higgs boson's kinetic term \(\partial_\mu h \cdot\partial^\mu h\) once the partial derivatives are replaced with the covariant derivatives. These covariant derivatives \(D_\mu=\partial_\mu - i g A_\mu\) are not only allowed but needed to construct gauge-invariant kinetic terms.



So the W-bosons and Z-bosons get their masses via the interaction with the Higgs boson (that's also true for the fermions – leptons and quarks). This is the pretty way to generate masses of spin-1 bosons. It is exploited by the Standard Model and the Higgs mechanism is the last big clear discovery of experimental particle physicists. So massive gauge bosons automatically point to the Higgs mechanism.

But then there's the "ugly" way – and I've always considered it an ugly way – to make spin-1 bosons massive, the Stückelberg mechanism. The mass term for the photons is rewritten as\[

{\mathcal L}_{\rm mass} = \frac 12 f_{\theta}^2 (\partial_\mu \theta - eA_\mu)^2.

\] We added a new scalar field \(\theta\) and preserved the gauge invariance \(A_\mu\to A_\mu +(1/e)\partial_\mu \alpha\), but the new scalar field must also transform under it, \(\theta\to \theta+\alpha\). Because we have the same "amount" of gauge invariance as for the massless photon, but there is one scalar field added, we end up with 3 physical polarizations of the massive particle instead of the massless photon's two polarizations. They're the ordinary three spatial polarizations, \(x,y,z\), of a massive vector particle in its rest frame.
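
(The gauge invariance of that mass term is a one-line cancellation; here is a sympy sketch of mine, written for a single component and one coordinate just to exhibit it.)

from sympy import symbols, Function, diff, simplify

x, e = symbols('x e')
A = Function('A')
theta = Function('theta')
alpha = Function('alpha')

combo = diff(theta(x), x) - e * A(x)   # the gauge-covariant Stueckelberg combination
combo_transformed = (diff(theta(x) + alpha(x), x)
                     - e * (A(x) + diff(alpha(x), x) / e))

print(simplify(combo_transformed - combo))   # 0: the mass term built from this is gauge invariant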

One may gauge-fix the Stückelberg action by setting \(\theta=0\) which reduces the system to the Proca action for the "regular" massive spin-1 boson. But the advantage of the Stückelberg form is that you know how to write down the field's interactions with others in a gauge-invariant way.

The mass of the boson of (the Swiss physicist) Ernst Stückelberg is \(m_A = ef_\theta\). You may send it to zero either by sending the gauge coupling \(e\to 0\) or sending \(f_\theta\to 0\) or some combination of both. Note that \(e\to 0\) is something that the weak gravity conjecture labels dangerous and, under certain assumptions, forbidden. OK, this kind of a description of a massive spin-1 boson doesn't seem to be exploited by the Standard Model. It's ugly because the scalar field transforms in a suicidal way and the theory doesn't point to any non-Abelian gauge symmetry and other pretty things.

People would always say that the photon that we know and love (and especially see) can in principle be massive, thanks to a Stückelberg mechanism. Well, I always protested when someone presented it as a real possibility. If the photon were massive, we still know that the mass must be much smaller than the inverse radius of the Earth – because we know that the magnetic fields around the Earth behave as those in the proper massless electromagnetism, not in some Proca-Yukawa way. And if the photon were massive but this light, it would at least amount to a new, unsubstantiated fine-tuning. It's more likely – and we are encouraged to assume – that the photon is exactly massless.

Reece places this "negative sentiment" of mine into a potentially axiomatic if not provable framework. He argues that the limit of the very light photon is "very far in the configuration space" and that in consistent theories of quantum gravity, the swampland reasoning implies the existence of some light enough particles (well, a whole tower of them) and/or other reasons why the effective field theory has to break down at relatively low energy scales. Quantitatively, Reece claims that the effective field theory has to break down above\[

\Lambda_{UV} = \sqrt{ \frac{m_\gamma M_{\rm Planck}}{e} }.

\] Well, the theory would have to break down earlier, at the scale \(e^{1/3} M_{\rm Planck}\), if the latter scale were even lower. At any rate, using the scale in the displayed equation above, we know that the photon mass is rather tiny (recall my comments about the geomagnetic field etc.) and the geometric average with the Planck mass sends us to an atomic physics scale where QED still seems OK, and that's how the massive photon hypothesis could be strictly refuted.
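
(To see why that geometric mean lands near atomic-physics energies, here is the arithmetic with numbers I'm assuming for illustration – a photon-mass bound of order 10^-18 eV, in the ballpark of the often-quoted limits, and e of about 0.3; neither value is taken from Reece's paper.)

import math

m_gamma = 1e-18 * 1e-9        # assumed photon-mass bound in GeV (10^-18 eV)
M_planck = 1.22e19            # GeV
e = 0.3                       # roughly sqrt(4 * pi * alpha) for electromagnetism

Lambda_UV = math.sqrt(m_gamma * M_planck / e)   # the cutoff in the displayed formula
print(Lambda_UV * 1e6, "keV")                   # ~200 keV, a scale where QED is thoroughly tested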

We're not quite sure about any of these swampland-based principles but I tend to think that many of them, when properly formulated, are right and powerful. I find this picture intriguing. Lots of the constructions in effective field theory, like the Stückelberg masses, looked ugly and heuristically "less consistent" to the people who had as good a taste as your humble correspondent. Finally, we may be becoming able to clearly articulate the arguments showing that this "feeling of reduced consistency" is not just some emotion. When coupled to quantum gravity, these ugly scenarios could indeed be strictly forbidden.

Quantum gravity and/or string theory could only allow the solutions that seemed "more pretty" than their ugly competitors. And you could stop issuing politically correct disclaimers such as "we are assuming that the photon mass is exactly zero; if it had a nonzero mass, we would have to revise the whole analysis".

Reece's paper has no direct relationship to the de Sitter vacua and the cosmological controversies. But if it's right or at least accepted, it clearly strengthens Team Vafa in that dispute. The two teams offer really different sketches of the general spirit of future stringy research. In Team Stanford's plan, we're satisfied with some Rube Goldberg-style construction, we don't know which one (or which class) is the right one, we get used to it, and we train ourselves to be happy that we won't learn anything new.

On the other hand, in Team Vafa's plan for the future, string theory research continues to make actual progress, trying to answer well-defined questions about the world around us that weren't previously answered, such as "Can some massive bosons we will produce have Stückelberg masses? Is our photon allowed by string theory to be massive?" Truly curious physicists simply want new answers like that to be found. It may be impossible to answer some of these questions, especially if our vacuum is a relatively random one in a set of vacua that have different properties. But this possibility is not a proven fact and even if it is true for some properties, it is not true for all questions.

We can't ever accept the belief that all questions that haven't been answered so far will remain unanswered forever! That would be a clearly religious attitude that stops progress in science – and that could have stopped it at every moment in the past. Harvard's Reece sketched some arguments that may prohibit Stückelberg masses in quantum gravity and you – I am primarily talking about you, dear reader in Palo Alto – had better think about it and decide whether he's right or not.

In some technical questions within the de Sitter controversy, I am uncertain, and so are others. But I am certain about certain principles of the scientific method. The real pleasure of science is to find ways to answer questions – to discriminate between possible answers. Many people in Northern California (which includes Palo Alto) may have adopted a non-discrimination approach to society and science (all people and answers and vacua are equally good) – but without discrimination, there is no science.

by Luboš Motl (noreply@blogger.com) at August 31, 2018 07:20 AM

August 30, 2018

ZapperZ - Physics and Physicists

Where Do Elementary Particle Names Come From?
In this video, Fermilab's Don Lincoln tackles less of the physics and more of the history and classification behind our current Standard Model of elementary particles.



Zz.

by ZapperZ (noreply@blogger.com) at August 30, 2018 03:55 PM

The n-Category Cafe

Exceptional Quantum Geometry and Particle Physics

It would be great if we could make sense of the Standard Model: the 3 generations of quarks and leptons, the 3 colors of quarks vs. colorless leptons, the way only the weak force notices the difference between left and right, the curious gauge group <semantics>SU(3)×SU(2)×U(1)<annotation encoding="application/x-tex">\mathrm{SU}(3) \times \mathrm{SU}(2)\times \mathrm{U}(1)</annotation></semantics>, the role of the Higgs boson, and so on. I can’t help but hope that all these facts are clues that we have not yet managed to interpret.

These papers may not be on the right track, but I feel a duty to explain them:

After all, the math is probably right. And they use the exceptional Jordan algebra, which I’ve already wasted a lot of time thinking about — so I’m in a better position than most to summarize what they’ve done.

Don’t get me wrong: I’m not claiming this paper is important for physics! I really have no idea. But it’s making progress on a quirky, quixotic line of thought that has fascinated me for years.

Here’s the main result. The exceptional Jordan algebra contains a lot of copies of 4-dimensional Minkowski spacetime. The symmetries of the exceptional Jordan algebra that preserve any one of these copies form a group…. which happens to be exactly the gauge group of the Standard Model!

Formally real Jordan algebras were invented by Jordan to serve as algebras of observables in quantum theory, but they also turn out to describe spacetimes equipped with a highly symmetrical causal structure. For example, <semantics>𝔥 2()<annotation encoding="application/x-tex">\mathfrak{h}_2(\mathbb{C})</annotation></semantics>, the Jordan algebra of <semantics>2×2<annotation encoding="application/x-tex">2 \times 2</annotation></semantics> self-adjoint complex matrices, is the algebra of observables for a spin-<semantics>1/2<annotation encoding="application/x-tex">1/2</annotation></semantics> particle — but it can also be identified with 4-dimensional Minkowski spacetime! This dual role of formally real Jordan algebras remains somewhat mysterious, though the connection is understood in this case.

When Jordan, Wigner and von Neumann classified formally real Jordan algebras, they found 4 infinite families and one exception: the exceptional Jordan algebra <semantics>𝔥 3(𝕆)<annotation encoding="application/x-tex">\mathfrak{h}_3(\mathbb{O})</annotation></semantics>, consisting of <semantics>3×3<annotation encoding="application/x-tex">3\times 3</annotation></semantics> self-adjoint octonion matrices. Ever since then, physicists have wondered what this thing is good for.

Now Todorov and Dubois–Violette claim they’re getting the gauge group of the Standard Model from the symmetry group of the exceptional Jordan algebra by taking the subgroup that

  1. preserves a copy of 10d Minkowski spacetime inside this Jordan algebra, and

  2. also preserves a copy of the complex numbers inside the octonions — which is just what we need to pick out a copy of 4d Minkowski spacetime inside 10d Minkowski spacetime!

But let me explain this in more detail. First, some old stuff:

If you pick a unit imaginary octonion and call it <semantics>i<annotation encoding="application/x-tex">i</annotation></semantics>, you get a copy of the complex numbers inside the octonions <semantics>𝕆<annotation encoding="application/x-tex">\mathbb{O}</annotation></semantics>. This lets us split <semantics>𝕆<annotation encoding="application/x-tex">\mathbb{O}</annotation></semantics> into <semantics>V<annotation encoding="application/x-tex">\mathbb{C} \oplus V</annotation></semantics>, where <semantics>V<annotation encoding="application/x-tex">V</annotation></semantics> is a 3-dimensional complex Hilbert space. The subgroup of the automorphism group of the octonions that fixes <semantics>i<annotation encoding="application/x-tex">i</annotation></semantics> is <semantics>SU(3)<annotation encoding="application/x-tex">\mathrm{SU}(3)</annotation></semantics>. This is the gauge group of the strong force. It acts on <semantics>V<annotation encoding="application/x-tex">\mathbb{C} \oplus V</annotation></semantics> in exactly the way you’d need for a lepton and a quark.

The exceptional Jordan algebra <semantics>𝔥 3(𝕆)<annotation encoding="application/x-tex">\mathfrak{h}_3(\mathbb{O})</annotation></semantics> contains the Jordan algebra <semantics>𝔥 2(𝕆)<annotation encoding="application/x-tex">\mathfrak{h}_2(\mathbb{O})</annotation></semantics> of <semantics>2×2<annotation encoding="application/x-tex">2 \times 2</annotation></semantics> self-adjoint octonion matrices in various ways. <semantics>𝔥 2(𝕆)<annotation encoding="application/x-tex">\mathfrak{h}_2(\mathbb{O})</annotation></semantics> can be identified with 10-dimensional Minkowski spacetime, with the determinant serving as the Minkowski metric. Picking a unit imaginary octonion <semantics>i<annotation encoding="application/x-tex">i</annotation></semantics> then chooses a copy of <semantics>𝔥 2()<annotation encoding="application/x-tex">\mathfrak{h}_2(\mathbb{C})</annotation></semantics> inside <semantics>𝔥 2(𝕆)<annotation encoding="application/x-tex">\mathfrak{h}_2(\mathbb{O})</annotation></semantics>, and <semantics>𝔥 2()<annotation encoding="application/x-tex">\mathfrak{h}_2(\mathbb{C})</annotation></semantics> can be identified with 4-dimensional Minkowski spacetime.
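
(The determinant-as-Minkowski-metric statement is easy to check directly in the complex case; here is a quick sympy sketch I'm adding, with t, x, y, z as my own labels for the coordinates.)

from sympy import symbols, Matrix, I, expand

t, x, y, z = symbols('t x y z', real=True)
X = Matrix([[t + z, x - I*y],
            [x + I*y, t - z]])   # a general 2x2 self-adjoint complex matrix

print(expand(X.det()))           # t**2 - x**2 - y**2 - z**2: the Minkowski interval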

All this is well-known to people who play these games. Now for the new part.

1) First, suppose we take the automorphism group of the exceptional Jordan algebra and look at the subgroup that preserves the splitting of <semantics>𝕆<annotation encoding="application/x-tex">\mathbb{O}</annotation></semantics> into <semantics>V<annotation encoding="application/x-tex">\mathbb{C} \oplus V</annotation></semantics> for each entry of these octonion matrices. This subgroup is

<semantics>SU(3)×SU(3)/3<annotation encoding="application/x-tex"> \displaystyle{ \frac{ \mathrm{SU}(3) \times \mathrm{SU}(3) } {\mathbb{Z}/3} } </annotation></semantics>

It’s not terribly hard to see why this might be true. We can take any element of <semantics>𝔥 3(𝕆)<annotation encoding="application/x-tex">\mathfrak{h}_3(\mathbb{O})</annotation></semantics> and split it into two parts using <semantics>𝕆=V<annotation encoding="application/x-tex">\mathbb{O} = \mathbb{C} \oplus V</annotation></semantics>, getting a decomposition one can write as <semantics>𝔥 3(𝕆)=𝔥 3()𝔥 3(V)<annotation encoding="application/x-tex">\mathfrak{h}_3(\mathbb{O}) = \mathfrak{h}_3(\mathbb{C}) \oplus \mathfrak{h}_3(V)</annotation></semantics>. One copy of <semantics>SU(3)<annotation encoding="application/x-tex">\mathrm{SU}(3)</annotation></semantics> acts by conjugation on <semantics>𝔥 3()<annotation encoding="application/x-tex">\mathfrak{h}_3(\mathbb{C})</annotation></semantics> while another acts by conjugation on <semantics>𝔥 3(V)<annotation encoding="application/x-tex">\mathfrak{h}_3(V)</annotation></semantics>. These two actions commute. The center of <semantics>SU(3)<annotation encoding="application/x-tex">\mathrm{SU}(3)</annotation></semantics> is <semantics>/3<annotation encoding="application/x-tex">\mathbb{Z}/3</annotation></semantics>, consisting of diagonal matrices that are cube roots of the identity matrix. So, we get an inclusion of <semantics>/3<annotation encoding="application/x-tex">\mathbb{Z}/3</annotation></semantics> in the diagonal of <semantics>SU(3)×SU(3)<annotation encoding="application/x-tex">\mathrm{SU}(3) \times \mathrm{SU}(3)</annotation></semantics> and this subgroup acts trivially on <semantics>𝔥 3(𝕆)<annotation encoding="application/x-tex">\mathfrak{h}_3(\mathbb{O})</annotation></semantics>.

2) Next, take the subgroup of <semantics>(SU(3)×SU(3))//3<annotation encoding="application/x-tex">(\mathrm{SU}(3) \times \mathrm{SU}(3))/\mathbb{Z}/3</annotation></semantics> that also preserves a copy of <semantics>𝔥 2(𝕆)<annotation encoding="application/x-tex">\mathfrak{h}_2(\mathbb{O})</annotation></semantics> inside <semantics>𝔥 3(𝕆)<annotation encoding="application/x-tex">\mathfrak{h}_3(\mathbb{O})</annotation></semantics>. This subgroup, Dubois-Violette and Todorov claim, is

<semantics>SU(3)×SU(2)×U(1)/6<annotation encoding="application/x-tex"> \displaystyle{ \frac{ \mathrm{SU}(3) \times \mathrm{SU}(2) \times \mathrm{U}(1) } {\mathbb{Z}/6} } </annotation></semantics>

And this is the true gauge group of the Standard Model!

People often say the Standard Model has gauge group <semantics>SU(3)×SU(2)×U(1)<annotation encoding="application/x-tex">\mathrm{SU}(3) \times \mathrm{SU}(2) \times \mathrm{U}(1)</annotation></semantics>, which is okay, but this group has a <semantics>/6<annotation encoding="application/x-tex">\mathbb{Z}/6</annotation></semantics> subgroup that acts trivially on all particles—a fact that arises only because quarks have the exact charges they do! So, the ‘true’ gauge group of the Standard Model is the quotient <semantics>(SU(3)×SU(2)×U(1))//6<annotation encoding="application/x-tex">(\mathrm{SU}(3) \times \mathrm{SU}(2) \times \mathrm{U}(1))/\mathbb{Z}/6</annotation></semantics>. And this is fundamental to the <semantics>SU(5)<annotation encoding="application/x-tex">\mathrm{SU}(5)</annotation></semantics> grand unified theory—a well-known fact that John Huerta and I explained a while ago here. The point is that while <semantics>SU(3)×SU(2)×U(1)<annotation encoding="application/x-tex">\mathrm{SU}(3) \times \mathrm{SU}(2)\times \mathrm{U}(1)</annotation></semantics> is not a subgroup of <semantics>SU(5)<annotation encoding="application/x-tex">\mathrm{SU}(5)</annotation></semantics>, its quotient by <semantics>/6<annotation encoding="application/x-tex">\mathbb{Z}/6</annotation></semantics> is.

I’ll admit, I don’t fully get how

<semantics>SU(3)×SU(2)×U(1)/6<annotation encoding="application/x-tex"> \displaystyle{ \frac{ \mathrm{SU}(3) \times \mathrm{SU}(2) \times \mathrm{U}(1) } {\mathbb{Z}/6} } </annotation></semantics>

shows up inside

<semantics>SU(3)×SU(3)/3<annotation encoding="application/x-tex"> \displaystyle{ \frac{ \mathrm{SU}(3) \times \mathrm{SU}(3) } {\mathbb{Z}/3} } </annotation></semantics>

as the subgroup that preserves an <semantics>𝔥 2(𝕆)<annotation encoding="application/x-tex">\mathfrak{h}_2(\mathbb{O})</annotation></semantics> inside <semantics>𝔥 3(𝕆)<annotation encoding="application/x-tex">\mathfrak{h}_3(\mathbb{O})</annotation></semantics>.

I think it works like this. I described <semantics>SU(3)×SU(3)<annotation encoding="application/x-tex">\mathrm{SU}(3) \times \mathrm{SU}(3)</annotation></semantics> one way, but there should be another essentially equivalent way to get two copies of <semantics>SU(3)<annotation encoding="application/x-tex">\mathrm{SU}(3)</annotation></semantics> acting on <semantics>𝔥 3(𝕆)<annotation encoding="application/x-tex">\mathfrak{h}_3(\mathbb{O})</annotation></semantics>. Namely, let the first copy act componentwise on each entry of your <semantics>3×3<annotation encoding="application/x-tex">3 \times 3</annotation></semantics> octonionic matrix, and let the second act by conjugation on the whole matrix. In this alternative picture the <semantics>/3<annotation encoding="application/x-tex">\mathbb{Z}/3</annotation></semantics> subgroup lies wholly in the second copy of <semantics>SU(3)<annotation encoding="application/x-tex">\mathrm{SU}(3)</annotation></semantics>. Then, figure out those elements of <semantics>SU(3)×SU(3)<annotation encoding="application/x-tex">\mathrm{SU}(3) \times \mathrm{SU}(3)</annotation></semantics> that preserve a copy of <semantics>𝔥 2(𝕆)<annotation encoding="application/x-tex">\mathfrak{h}_2(\mathbb{O})</annotation></semantics> inside <semantics>𝔥 3(𝕆)<annotation encoding="application/x-tex">\mathfrak{h}_3(\mathbb{O})</annotation></semantics>: say, the matrices where the last row and last column vanish. All the elements of the first copy of <semantics>SU(3)<annotation encoding="application/x-tex">\mathrm{SU}(3)</annotation></semantics> preserve this <semantics>𝔥 2(𝕆)<annotation encoding="application/x-tex">\mathfrak{h}_2(\mathbb{O})</annotation></semantics>, because they act componentwise. But not all elements of the second copy do: only the block diagonal ones with a <semantics>2×2<annotation encoding="application/x-tex">2\times 2</annotation></semantics> block and a <semantics>1×1<annotation encoding="application/x-tex">1 \times 1</annotation></semantics> block. The matrices in <semantics>SU(3)<annotation encoding="application/x-tex">\mathrm{SU}(3)</annotation></semantics> with this block diagonal form look like

<semantics>(αg 0 0 α 2)<annotation encoding="application/x-tex"> \left( \begin{array}{cc} \alpha g & 0 \\ 0 & \alpha^{-2} \end{array} \right) </annotation></semantics>

where <semantics>gSU(2)<annotation encoding="application/x-tex">g \in \mathrm{SU}(2)</annotation></semantics> and <semantics>αU(1)<annotation encoding="application/x-tex">\alpha \in \mathrm{U}(1)</annotation></semantics>. These form a group isomorphic to

<semantics>SU(2)×U(1)/2<annotation encoding="application/x-tex"> \displaystyle{ \frac{ \mathrm{SU}(2) \times \mathrm{U}(1)}{\mathbb{Z}/2} } </annotation></semantics>
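
(Both claims in that last step can be sanity-checked with a few lines of sympy; this is my own check, parametrizing an SU(2) element by a, b with |a|^2 + |b|^2 = 1: the block matrix has unit determinant, so it lies in SU(3), and the pair (g, α) = (−1, −1) maps to the identity, exhibiting the ℤ/2 that gets quotiented out.)

from sympy import symbols, Matrix, conjugate, simplify

a, b, alpha = symbols('a b alpha')
g = Matrix([[a, -conjugate(b)],
            [b, conjugate(a)]])              # an SU(2) element once |a|^2 + |b|^2 = 1

block = Matrix([[alpha * g[0, 0], alpha * g[0, 1], 0],
                [alpha * g[1, 0], alpha * g[1, 1], 0],
                [0, 0, alpha**-2]])          # the block-diagonal candidate element of SU(3)

print(simplify(block.det()))                 # a*conjugate(a) + b*conjugate(b), i.e. 1 for SU(2) entries
print(block.subs({a: -1, b: 0, alpha: -1}))  # identity matrix: (g, alpha) = (-1, -1) acts trivially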

If all this works out, it’s very pretty: the 2 and the 1 in <semantics>SU(2)×U(1)<annotation encoding="application/x-tex">\mathrm{SU}(2) \times \mathrm{U}(1)</annotation></semantics> arise from the choice of a <semantics>2×2<annotation encoding="application/x-tex">2 \times 2</annotation></semantics> block and <semantics>1×1<annotation encoding="application/x-tex">1 \times 1</annotation></semantics> block in <semantics>𝔥 3(𝕆)<annotation encoding="application/x-tex">\mathfrak{h}_3(\mathbb{O})</annotation></semantics>… which is also the choice that lets us find Minkowski spacetime inside <semantics>𝔥 3(𝕆)<annotation encoding="application/x-tex">\mathfrak{h}_3(\mathbb{O})</annotation></semantics>. If true, that suggests that the <semantics>SU(2)<annotation encoding="application/x-tex">\mathrm{SU}(2)</annotation></semantics> in electroweak theory is ‘more directly connected to Minkowski spacetime’ than the hypercharge <semantics>U(1)<annotation encoding="application/x-tex">\mathrm{U}(1)</annotation></semantics>. This is already suggested by another fact: in the Standard Model, only the <semantics>SU(2)<annotation encoding="application/x-tex">\mathrm{SU}(2)</annotation></semantics> force notices the difference between left and right.

But I need to check some things, like how we get the <semantics>/6<annotation encoding="application/x-tex">\mathbb{Z}/6</annotation></semantics>.

by john (baez@math.ucr.edu) at August 30, 2018 08:21 AM

August 29, 2018

Jon Butterworth - Life and Physics

First ever acceleration of electrons in a proton-driven plasma wave
Turning out to be a busy August in physics. This breakthrough was just published in Nature: Nature news and the paper itself (open access, hurrah!). Really nice, extensive summary from CERN by Achintya Rao. As you can see from the Nature News … Continue reading

by Jon Butterworth at August 29, 2018 08:09 PM

Lubos Motl - string vacua and pheno

Team Stanford launches Operation Barbarossa against quintessence
The disagreement between Team Stanford – which defends its paradigm with a large landscape of de Sitter solutions of string theory – and Team Vafa – which suggests that de Sitter spaces could be banned due to general stringy "swampland" principles (and which proposes quintessence as an alternative) – has seemingly been confined to brief exchanges in the question-and-answer periods of various talks.

The arguments couldn't have been properly analyzed and compared in such a limited context. In science, it is better to write them down. You may look at these arguments and equations for hours – and so can your antagonists – which usually increases the quality of the analyses. Team Stanford clearly believes that the de Sitter vacua are here to stay, the criticisms are wrong, and quintessence has fatal problems. But can they back these opinions by convincing arguments?



Today, in the list of new hep-th preprints, we received an avalanche of papers that say something about the de Sitter-vs-quintessence controversy in string theory. Using the [numbers] from the daily ordering of papers, we talk about the following papers:
[3] De Sitter vs Quintessence in String Theory (by Cicoli+4, 49 pages)

[4] A comment on effective field theories of flux vacua (by Kachru+Trivedi, 22 pages)

[15] dS Supergravity from 10d (by Kallosh+Wrase, 18 pages)

[16] de Sitter Vacua with a Nilpotent Superfield (by Kallosh+3, 6 pages)

[18] The landscape, the swampland and the era of precision cosmology (by Akrami+3, 43 pages)
I have omitted Tadashi Takayanagi's paper(s) although one of them also talks about de Sitter spaces.

First, concerning the affiliations: I include all of these collaborations in "Team Stanford" because they defend de Sitter solutions in string theory. But the first paper is really international (Bologna-Boulder-India-Cambridge), the second paper is Stanford-Bombay, the third paper is Stanford-Vienna, the fourth paper is Stanford-Brown-Leuven (Belgium), and the last paper is Stanford-Leiden (the Netherlands).



Well, you may hopefully see that Stanford is overrepresented in these papers. Moreover, it seems to play the role of the "headquarters" of this campaign. And the first paper among the five, the only Stanford-free one, is arguably the least combative one, too. ;-) I think it's fair to say that the stringy landscape picture of cosmology has been the greatest source of pride for Stanford's theoretical physicists in the last 15 years. At some human level, we can understand why they might be anxious if someone were basically saying that those 15 years revolved around a mistake or some sloppiness. But the pride doesn't imply that those papers were right and safe, of course.

Now, the number of papers – five – is rather large and the salvos had to be at least partially coordinated. Can the colleagues be expected to swallow a reasonably high percentage of the content? Wasn't the number of papers chosen to be high simply to intimidate the opposition? To replace the quality of the arguments with the quantity of papers? I am not saying that. I am just asking. The high number of papers leads me to similar feelings as the proposed large number of de Sitter vacua. Less is sometimes more.

Let's talk about the separate papers. The middle paper, the one by Kallosh and Wrase, claims that the anti-D3-branes in the KKLT "uplifting" procedure may be replaced by anti-D5-, D6-, D7-, or D9-branes, too. That seems like a bold statement to me. If this were the case, why wouldn't KKLT have noticed these four new possibilities right away? Fifteen years ago, I was surely asking why anti-D3-branes were used and not branes of other dimensions, and I was surely given a – not so convincing – answer implying that it had to be anti-D3-branes. If one says that 4 possible dimensionalities of the antibranes are just as OK, and one does so 15 years after the game-changing paper was released, it doesn't exactly help either of these papers look trustworthy.

I would probably choose to disbelieve the new Kallosh-Wrase paper. One general problem with this paper (but, to some extent, with many other papers and perhaps with Team Stanford's papers in general) is that it seems to be a supergravity paper, not a full-blown stringy paper. And I think it's fair to describe both Kallosh and Wrase as supergravity experts, not string theory experts. Shouldn't a full-blown string theory expert validate claims that D-branes may be used in a certain new way? My answer is that he or she should.

At their supergravity level of analysis, many things are possible and they may change the dimensionality of the uplifting antibrane. Great. But have they actually demonstrated that string theory allows such solutions, especially the new ones? I don't think that they have made the full-blown string analysis. Whatever is intrinsically stringy is treated in a sloppy way. For example, search for an "open string" in the Kallosh-Wrase paper. You will get three hits – and all of them just say that they have ignored the open string moduli.

The more stringy a given concept or structure is, the more it is ignored in this paper. Again, I think that this criticism applies to most of the Team Stanford papers in general. But the whole point of the Vafa Team is to carefully study the fine, characteristically stringy features, phenomena, and constraints that are completely invisible at the level of supergravity – i.e. at the level of effective field theory. I have doubts about every particular, precise enough "swampland statement" made by Vafa or any disciple (including our "weak gravity conjecture" group). On the other hand, I have no doubts that it is extremely important to appreciate that string theory is not just supergravity and most of the particular low-energy supergravity-based effective field theories have no consistent quantum gravity or stringy completion.

Kallosh and Wrase – and, as I said, much of the Team Stanford – seem to use string theory as the "ultimate justification of the 'anything goes' paradigm in supergravity". You may do anything you want in supergravity, add any string-inspired object, fluxes, branes, whatever you like, and then you use the term "string theory" as if it were the ultimate and universal justification of the validity of all such constructions. For them, string theory is just a knife that always unties your hands. Like with Elon Musk's promises, anything goes with string theory.

OK, I am sure that this is just a wrong usage or interpretation of "string theory". String theory offers some new tools, new objects, new transitions, phenomena, and relationships between the objects. But string theory also – and maybe primarily – brings us new constraints, new bans, and new universal as well as particular predictions. For me, string theory may have produced new ingredients and possibilities but it's still primarily a theory that has greater predictive power than effective quantum field theory. It's clearly a sloppy, skewed way to use string theory if someone only uses string theory as the "source of many new objects and possibilities" – and not as a "book full of new constraints, universal laws and principles, and previously impossible predictions for particular situations".

(There has been a community of "extremely applied" string theorists – whom I would surely call non-string theorists – who have used the term "string theory" as an excuse for really non-standard pieces of physics including the Lorentz symmetry violation and the violation of the equivalence principle. I believe that string theory is, on the contrary, a solid framework that bans or at least greatly discourages such experiments.)

Because we are discussing the question whether the carefully and accurately studied string/M-theory allows de Sitter vacua, the KKLT construction, and similar things, another supergravity-level sloppy analysis just cannot possibly be relevant for the big question defining the Team Stanford vs Team Vafa controversy. To resolve this controversy, one simply needs a higher stringy precision of the arguments. The paper by Kallosh and Wrase doesn't have it and it's questionable whether they could make such an analysis in any other paper.

OK, let's now look at the fourth paper among the five, the one about a "nilpotent superfield". The new paper is a response to a 2017 paper, Towards de Sitter from 10D, by Moritz, Retolaza, and Westphal. Those authors had claimed that the KKLT construction didn't work because during the uplift there was a stronger backreaction than previously thought, so the compactification remains AdS and doesn't become dS. In the new paper, they claim that the nilpotent superfield, as a SUSY-breaking tool, isn't compatible with the nonlinearly realized SUSY. But that doesn't really matter much, because even if one allows it, they do get a de Sitter, not an anti de Sitter, vacuum.

I would expect one of these groups to have to admit defeat soon enough, because the claims and arguments look rather straightforward.

Now, let's turn our attention to the new Kachru-Trivedi paper. It's written as a "positive paper" on effective field theories of the KKLT-style flux vacua. I haven't read the paper in its entirety but the abstract and the general organization of the paper do suggest that they're reviewing the ideas that have been around since KKLT. It seems to me that, concerning the validity and existence of the effective field theories for the stringy situations, they always rely on field-theory-based, e.g. Wilsonian, arguments. I am not persuaded that this is good enough. String theory may invalidate the effective field theories by making sure that an energy-\(E\) effective theory isn't a local quantum field theory at all.

What really bothers me is the superficial approach of Kachru and Trivedi to the arguments given by the Vafa Team:
A recent paper [46 Obied Ooguri Spodyneiko Vafa], motivated largely by no-go theorems with limited applicability to a partial set of classical ingredients, made a provocative conjecture implying that quantum gravity does not support de Sitter solutions. [Footnote about two previous papers saying similar things.] Our analysis – and more importantly, effective field theory applied to the full set of ingredients available in string theory – is in stark conflict with this conjecture. This leads us to believe that the conjecture is false.
Do Kachru and Trivedi consider this non-technical, judgmental paragraph to be enough to deal with the proposed alternative picture? OK, let's rephrase what they are saying:
We may repeat what we said 15 years ago. We may pay no attention whatsoever to the detailed arguments given by the Vafa Team. We don't need to be impartially interested in the validity of the proposed new principles, inequalities, and no-go theorems. We just don't want to learn any and we prefer to believe that no such new insights exist. Instead, it's enough to dismiss all these papers with a simple slogan, with slurs such as "provocative" that make the Vafa Team look limited while we look unlimited, repeat that everything we have ever claimed to be true must be true, and that's enough to "prove" that we are right and they are wrong.
I am sorry but it doesn't seem enough to me. The claim that the Vafa Team's statements are limited to a "partial set of classical ingredients" while Team Stanford is better because effective field theory is "applied to all ingredients available in string theory" seems utterly demagogic to me. KKLT and followers have used lots of ingredients from string theory but there's no proof that they are "all" ingredients of string theory. New ingredients kept on emerging and we still can't prove that we know "all of them" because we don't have a universal definition of string theory. Moreover, the high number of such ingredients makes it more likely, and not less likely, that one of them breaks down and invalidates the KKLT construction as a whole. So they have many, not "all", ingredients of string theory and this fact makes their construction more vulnerable, not less so!

And the very conclusion that "Vafa seems to disagree with something we wrote so he must be wrong" simply looks childish. This is not a rational way to argue. Vafa might say exactly the same thing – he doesn't – but neither of these two stubborn propositions would amount to a convincing argument one way or the other.

Finally, we have the first paper by Cicoli et al.; and the last, fifth paper by Akrami et al. These two papers explicitly claim to discuss the "de Sitter versus quintessence" controversy in string theory. The Akrami paper seems to have one proposed counterargument against quintessence that Akrami et al. are proud of and that they want the reader to read carefully. What is it? They pick the constant \(c\) from the Vafa Team inequality, claim that it should be equal to one (or at least of order one; they're not sure), and then they claim that the cosmological observations rule out \(c\gt 1\) at the 3-sigma, i.e. 99.7%, level.
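To be clear about what the inequality is (my paraphrase, not their exact wording): the Vafa Team, namely Obied, Ooguri, Spodyneiko, and Vafa, conjecture that the scalar potential of any consistent low-energy theory obeys

\[ M_{\rm Pl}\, |\nabla V| \;\geq\; c\, V \]

for some positive constant \(c\) that is claimed to be of order one. The exclusion by Akrami et al. is then a statement about how large \(c\) is still allowed to be by the dark energy data, which is why the whole statistical argument hinges on the right value of \(c\).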

I am sorry but the right value of \(c\) isn't really known, at least not too reliably, so they can't determine the statistical significance well, either. The right \(c\) could be \(1/3\) and there would be no exclusion at all. It seems to me that this overemphasis on the \(c\sim 1\) "prediction" and its weak exclusion by the observational data is their strongest argument. If that's so, I find it extremely weak. Even if the calculation of the 3-sigma confidence level were solid, which it doesn't seem to be at all, it is still just a 3-sigma confidence. A few years ago, the LHC diphoton bump was "discovered" at four sigma and it was fake. A potential universal new principle of string theory is of a different caliber. In my list of priorities, if I become sufficiently certain about a new universal principle of physics, it may beat even 5-sigma deviations from the predictions.

Finally, the first, Stanford-free paper is less arrogant than the Stanfordful papers. They prefer the de Sitter, KKLT-style models because they look concrete, there seems to be calculational control, and it's apparently getting better with time. Quintessence is more "challenging" and requires more fine-tuning, we read. Well, Vafa et al. disagree with the second point, and probably with both. At any rate, these judgments are potentially subjective. You can't use your feeling that something is "challenging" – without any particular argument or quantification of the "challenge" – as a persuasive argument against an alternative theory.

So I am afraid that this Cicoli et al. paper is going to be too vague when arguing against the alternative picture based on the new general principles proposed by the Vafa Team. One problem, as I have mentioned, is that these two paradigms are very different from each other. They have completely different advantages, very different numbers of required solutions or corners of the stringy configuration space, different levels of precision needed to analyze things, and so on. Depending on one's philosophy, the prior probabilities assigned to these paradigms may be very different. The probability ratio may very well be more extreme than 300-to-1 in either direction – which makes some 3-sigma empirical arguments weaker than weak tea.

In the end, it should be possible to resolve the controversy. But one simply needs to study the purely stringy effects in these compactifications (or would-be vacua) more accurately or more reliably than ever before. This increased control over the stringy effects, or the increased reliability of the stringy arguments, is probably necessary for any progress in resolving this open question. I haven't read the papers in their entirety but I am afraid it is obvious that they haven't really made any progress in resolving the actual disagreement. They are basically repeating the things that were done before and that's not a good path to progress.

And that's the memo.

by Luboš Motl (noreply@blogger.com) at August 29, 2018 08:14 AM

August 25, 2018

The n-Category Cafe

Compositionality: Now Open For Submissions

Our new journal Compositionality is now open for submissions!

It’s an open-access journal for research using compositional ideas, most notably of a category-theoretic origin, in any discipline. Topics may concern foundational structures, an organizing principle, or a powerful tool. Example areas include but are not limited to: computation, logic, physics, chemistry, engineering, linguistics, and cognition.

Compositionality is free of cost for both readers and authors.

CALL FOR PAPERS

We invite you to submit a manuscript for publication in the first issue of Compositionality (ISSN: 2631-4444), a new open-access journal for research using compositional ideas, most notably of a category-theoretic origin, in any discipline.

To submit a manuscript, please visit www.compositionality-journal.org/for-authors/.

SCOPE

Compositionality refers to complex things that can be built by sticking together simpler parts. We welcome papers using compositional ideas, most notably of a category-theoretic origin, in any discipline. This may concern foundational structures, an organising principle, a powerful tool, or an important application. Example areas include but are not limited to: computation, logic, physics, chemistry, engineering, linguistics, and cognition.

Related conferences and workshops that fall within the scope of Compositionality include the Symposium on Compositional Structures (SYCO), Categories, Logic and Physics (CLP), String Diagrams in Computation, Logic and Physics (STRING), Applied Category Theory (ACT), Algebra and Coalgebra in Computer Science (CALCO), and the Simons Workshop on Compositionality.

SUBMISSION AND PUBLICATION

Submissions should be original contributions of previously unpublished work, and may be of any length. Work previously published in conferences and workshops must be significantly expanded or contain significant new results to be accepted. There is no deadline for submission. There is no processing charge for accepted publications; Compositionality is free to read and free to publish in. More details can be found in our editorial policies at www.compositionality-journal.org/editorial-policies/.

STEERING BOARD

John Baez, University of California, Riverside, USA
Bob Coecke, University of Oxford, UK
Kathryn Hess, EPFL, Switzerland
Steve Lack, Macquarie University, Australia
Valeria de Paiva, Nuance Communications, USA

EDITORIAL BOARD

Corina Cirstea, University of Southampton, UK
Ross Duncan, University of Strathclyde, UK
Andree Ehresmann, University of Picardie Jules Verne, France
Tobias Fritz, Max Planck Institute, Germany
Neil Ghani, University of Strathclyde, UK
Dan Ghica, University of Birmingham, UK
Jeremy Gibbons, University of Oxford, UK
Nick Gurski, Case Western Reserve University, USA
Helle Hvid Hansen, Delft University of Technology, Netherlands
Chris Heunen, University of Edinburgh, UK
Aleks Kissinger, Radboud University, Netherlands
Joachim Kock, Universitat Autonoma de Barcelona, Spain
Martha Lewis, University of Amsterdam, Netherlands
Samuel Mimram, Ecole Polytechnique, France
Simona Paoli, University of Leicester, UK
Dusko Pavlovic, University of Hawaii, USA
Christian Retore, Universite de Montpellier, France
Mehrnoosh Sadrzadeh, Queen Mary University, UK
Peter Selinger, Dalhousie University, Canada
Pawel Sobocinski, University of Southampton, UK
David Spivak, MIT, USA
Jamie Vicary, University of Birmingham and University of Oxford, UK
Simon Willerton, University of Sheffield, UK

Sincerely,

The Editorial Board of Compositionality

by john (baez@math.ucr.edu) at August 25, 2018 05:13 AM

August 24, 2018

Clifford V. Johnson - Asymptotia

Science Friday Book Club Wrap!

Don't forget, today live on Science Friday we (that's SciFri presenter Ira Flatow, producer Christie Taylor, Astrophysicist Priyamvada Natarajan, and myself) will be talking about Hawking's "A Brief History of Time" once more, and also discussing some of the physics discoveries that have happened since he wrote that book. We'll be taking (I think) callers' questions too! Also, we've made recommendations for further reading to learn more about the topics discussed in Hawking's book.

Join us!

-cvj

(P.S. The picture above was one I took when we recorded for the launch of the book club, back in July. I used the studios at Aspen Public Radio.)

The post Science Friday Book Club Wrap! appeared first on Asymptotia.

by Clifford at August 24, 2018 05:04 PM

John Baez - Azimuth

Compositionality – Now Open For Submissions

Our new journal Compositionality is now open for submissions!

It’s an open-access journal for research using compositional ideas, most notably of a category-theoretic origin, in any discipline. Topics may concern foundational structures, an organizing principle, or a powerful tool. Example areas include but are not limited to: computation, logic, physics, chemistry, engineering, linguistics, and cognition.

Compositionality is free of cost for both readers and authors.



CALL FOR PAPERS

We invite you to submit a manuscript for publication in the first issue of Compositionality (ISSN: 2631-4444), a new open-access journal for research using compositional ideas, most notably of a category-theoretic origin, in any discipline.

To submit a manuscript, please visit http://www.compositionality-journal.org/for-authors/.

SCOPE

Compositionality refers to complex things that can be built by sticking together simpler parts. We welcome papers using compositional ideas, most notably of a category-theoretic origin, in any discipline. This may concern foundational structures, an organising principle, a powerful tool, or an important application. Example areas include but are not limited to: computation, logic, physics, chemistry, engineering, linguistics, and cognition.

Related conferences and workshops that fall within the scope of Compositionality include the Symposium on Compositional Structures (SYCO), Categories, Logic and Physics (CLP), String Diagrams in Computation, Logic and Physics (STRING), Applied Category Theory (ACT), Algebra and Coalgebra in Computer Science (CALCO), and the Simons Workshop on Compositionality.

SUBMISSION AND PUBLICATION

Submissions should be original contributions of previously unpublished work, and may be of any length. Work previously published in conferences and workshops must be significantly expanded or contain significant new results to be accepted. There is no deadline for submission. There is no processing charge for accepted publications; Compositionality is free to read and free to publish in. More details can be found in our editorial policies at http://www.compositionality-journal.org/editorial-policies/.

STEERING BOARD

John Baez, University of California, Riverside, USA
Bob Coecke, University of Oxford, UK
Kathryn Hess, EPFL, Switzerland
Steve Lack, Macquarie University, Australia
Valeria de Paiva, Nuance Communications, USA

EDITORIAL BOARD

Corina Cirstea, University of Southampton, UK
Ross Duncan, University of Strathclyde, UK
Andree Ehresmann, University of Picardie Jules Verne, France
Tobias Fritz, Max Planck Institute, Germany
Neil Ghani, University of Strathclyde, UK
Dan Ghica, University of Birmingham, UK
Jeremy Gibbons, University of Oxford, UK
Nick Gurski, Case Western Reserve University, USA
Helle Hvid Hansen, Delft University of Technology, Netherlands
Chris Heunen, University of Edinburgh, UK
Aleks Kissinger, Radboud University, Netherlands
Joachim Kock, Universitat Autonoma de Barcelona, Spain
Martha Lewis, University of Amsterdam, Netherlands
Samuel Mimram, Ecole Polytechnique, France
Simona Paoli, University of Leicester, UK
Dusko Pavlovic, University of Hawaii, USA
Christian Retore, Universite de Montpellier, France
Mehrnoosh Sadrzadeh, Queen Mary University, UK
Peter Selinger, Dalhousie University, Canada
Pawel Sobocinski, University of Southampton, UK
David Spivak, MIT, USA
Jamie Vicary, University of Birmingham and University of Oxford, UK
Simon Willerton, University of Sheffield, UK

Sincerely,

The Editorial Board of Compositionality

by John Baez at August 24, 2018 03:16 PM

August 22, 2018

The n-Category Cafe

Kan

Jake Bian works on the topology and geometry of neural networks. But now he’s created a new add-on—okay, let’s say it, an extension—for Firefox, designed to make nLab entries look more like textbook chapters:

He made a YouTube video illustrating it.

The overall look is like this, but you can also mouse over words and see more:

by john (baez@math.ucr.edu) at August 22, 2018 10:58 AM

John Baez - Azimuth

Complex Adaptive System Design (Part 8)

John Foley, Joe Moeller and I have made some nice progress on compositional tasking for the Complex Adaptive System Composition and Design Environment project.

‘Compositional tasking’ means assigning tasks to networks of agents in such a way that you can connect or even overlay such tasked networks and get larger ones. This lets you build up complex plans from smaller pieces.

In my last post in this series, I sketched an approach using ‘commitment networks’. A commitment network is a graph where nodes represent agents and edges represent commitments, like “A should move toward B either for 3 hours or until they meet, whichever comes first”. By overlaying such graphs we can build up commitment networks that describe complex plans of action. The rules for overlaying incorporate ‘automatic deconflicting’. In other words: you don't need to worry about agents being given conflicting duties as you stack up plans… because you've decided ahead of time what they should do in these situations.

I still like that approach, but we’ve been asked to develop some ideas more closely connected to traditional methods of tasking, like PERT charts, so now we’ve done that.

‘PERT’ stands for ‘program evaluation and review technique’. PERT charts were developed by the US Navy in 1957, but now they’re used all over industry to help plan and schedule large projects.

Here’s simple example:

The nodes in this graph are different states, like “you have built the car but not yet put on the tires”. The edges are different tasks, like “put the tires on the car”. Each state is labelled with an arbitrary name: 10, 20, 30, 40 and 50. The tasks also have names: A, B, C, D, E, and F. More importantly, each task is labelled by the amount of time that task requires!

Your goal is to start at state 10 and move all the way to state 50. Since you’re bossing lots of people around, you can make them do tasks simultaneously. However, you can only reach a state after you have done all the tasks leading up to that state. For example, you can’t reach state 50 unless you have already done all of tasks C, E, and F. Some typical questions are:

• What’s the minimum amount of time it takes to get from state 10 to state 50?

• Which tasks could take longer, without changing the answer to the previous question? How much longer could each task take, without changing the answer? This amount of time is called the slack for that task.

There are known algorithms for solving such problems. These help big organizations plan complex projects. So, connecting compositional tasking to PERT charts seems like a good idea.
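To make the ‘minimum time’ and ‘slack’ computations concrete, here is a minimal sketch in Python. The chart below is a made-up example, not the one in the figure above; the forward pass computes the earliest time each state can be reached, the backward pass computes the latest time it may be reached without delaying the end, and slack is the difference seen by each task.

from collections import defaultdict

# Each task: (name, start_state, end_state, duration).  Made-up numbers,
# not the chart from the figure above.
tasks = [
    ("A", 10, 20, 3),
    ("B", 10, 30, 2),
    ("C", 20, 40, 2),
    ("D", 30, 40, 1),
    ("E", 40, 50, 5),
]

states = sorted({s for _, u, v, _ in tasks for s in (u, v)})
incoming = defaultdict(list)
outgoing = defaultdict(list)
for name, u, v, d in tasks:
    outgoing[u].append((name, u, v, d))
    incoming[v].append((name, u, v, d))

# Forward pass: earliest time each state can be reached.  The numeric labels
# are assumed to be a topological order, as in the example above.
earliest = {s: 0 for s in states}
for s in states:
    for _, u, _, d in incoming[s]:
        earliest[s] = max(earliest[s], earliest[u] + d)

# Backward pass: latest time each state may be reached without delaying the end.
end = states[-1]
latest = {s: earliest[end] for s in states}
for s in reversed(states):
    for _, _, v, d in outgoing[s]:
        latest[s] = min(latest[s], latest[v] - d)

print("minimum total time:", earliest[end])    # 10 for this example
for name, u, v, d in tasks:
    slack = latest[v] - earliest[u] - d        # total float of the task
    print(f"task {name}: slack = {slack}")     # A, C, E have zero slack here

This forward/backward pass is the standard critical-path calculation behind PERT and CPM tools; the tasks with zero slack form the critical path.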

At first this seemed confusing because in our previous work the nodes represented agents, while in PERT charts the nodes represent states. Of course graphs can be used for many things, even in the same setup. But the trick was getting everything to fit together nicely.

Now I think we’re close.

John Foley has been working out some nice example problems where a collection of agents need to move along the edges of a graph from specified start locations to specified end locations, taking routes that minimize their total fuel usage. However, there are some constraints. Some edges can only be traversed by specified teams of agents: they can’t go alone. Also, no one agent is allowed to run out of fuel.

This is a nice problem because while it’s pretty simple and specific, it’s representative of a large class of problems where a collection of agents are trying to carry out tasks together. ‘Moving along the edge of a graph’ can stand for a task of any sort. The constraint that some edges can only be traversed by specified teams is then a way of saying that certain tasks can only be accomplished by teams.

Furthermore, there are nice software packages for optimization subject to constraints. For example, John likes one called Choco. So, we plan to use one of these as part of the project.

What makes this all compositional is that John has expressed this problem using our ‘network model’ formalism, which I began sketching in Part 6. This allows us to assemble tasks for larger collections of agents from tasks for smaller collections.

Here, however, an idea due to my student Joe Moeller turned out to be crucial.

In our first examples of network models, explained earlier in this series, we allowed a monoid of networks for any set of agents of different kinds. A monoid has a binary operation called ‘multiplication’, and the idea here was that this could describe the operation of ‘overlaying’ networks: for example, laying one set of communication channels, or commitments, on top of another.

However, Joe knew full well that a monoid is a category with one object, so he pushed for a generalization that allowed not just a monoid but a category of networks for any set of agents of different kinds. I didn’t know what this was good for, but I figured: what the heck, let’s do it. It was a mathematically natural move, and it didn’t make anything harder—in fact it clarified some of our constructions, which is why Joe wanted to do it.

Now that generalization is proving to be crucial! We can take our category of networks to have states as objects and tasks (ways of moving between states) as morphisms! So, instead of ‘overlaying networks’, the basic operation is now composing tasks.
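Here is a minimal toy sketch, in code, of what ‘composing tasks’ means; this is just an illustration, not the network model formalism itself. Objects are states, morphisms are tasks, and composition glues tasks end to end while adding their durations.

from dataclasses import dataclass

@dataclass(frozen=True)
class Task:
    source: str      # the state the task starts from (an object)
    target: str      # the state the task ends in (an object)
    label: str       # a human-readable name for the task
    duration: float  # how long the task takes

    def then(self, other: "Task") -> "Task":
        # Composition of morphisms: do this task first, then the other one.
        if self.target != other.source:
            raise ValueError("tasks are not composable")
        return Task(self.source, other.target,
                    f"{self.label}; {other.label}",
                    self.duration + other.duration)

def identity(state: str) -> Task:
    # The identity morphism: doing nothing at a state takes zero time.
    return Task(state, state, "id", 0.0)

build = Task("10", "20", "build the car", 3.0)
tires = Task("20", "40", "put on the tires", 2.0)
combo = build.then(tires)
print(combo.source, combo.target, combo.duration)   # 10 40 5.0
print(build.then(identity("20")).duration)          # 3.0: identities add no time

Composition is associative because adding durations is, and identities contribute zero time; parallel tasks and the overlaying operation are deliberately left out of this sketch.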

So, we now have a framework where if you specify a collection of agents of different kinds, we can give you the category whose morphisms are tasks those agents can engage in.

An example is John’s setup where the agents are moving around on a graph.

But this framework also handles PERT charts! While the folks who invented PERT charts didn’t think of them this way, one can think of them as describing categories of a certain specific sort, with states as objects and tasks as morphisms.

So, we now have a compositional framework for PERT charts.

I would like to dive deeper into the details, but this is probably enough for one post. I will say, though, that we use some math I’ve just developed with my grad student Jade Master, explained here:

Open Petri nets (part 3), Azimuth, 19 August 2018.

The key is the relation between Petri nets and PERT charts. I’ll have more to say about that soon, I hope!


Some posts in this series:

Part 1. CASCADE: the Complex Adaptive System Composition and Design Environment.

Part 2. Metron’s software for system design.

Part 3. Operads: the basic idea.

Part 4. Network operads: an easy example.

Part 5. Algebras of network operads: some easy examples.

Part 6. Network models.

Part 7. Step-by-step compositional design and tasking using commitment networks.

Part 8. Compositional tasking using category-valued network models.

by John Baez at August 22, 2018 10:03 AM

August 20, 2018

Clifford V. Johnson - Asymptotia

And So it Begins…

It’s that time of year again! The new academic year’s classes begin here at USC today. I’m already snowed under with tasks I must get done, several with hard deadlines, and so am feeling a bit bogged down already, I must admit. Usually I wander around the campus a bit and soak up the buzz of the new year that you can pick up in all the campus activity swarming around. But instead I sit at my desk, prepping my syllabus, planning important dates, adjusting my calendar, exchanging emails, (updating my blog), and so forth. I hope that after class I can do the wander.

What will I be teaching this semester? The second part of graduate electromagnetism, as I often do. Yes, in a couple of hours, I’ll be again (following Maxwell) pointing out a flaw in one of the equations of electromagnetism (Ampere’s), introducing the displacement current term, and then presenting the full completed set of the equations - Maxwell’s equations, one of the most beautiful sets of equations ever to have been written down. (And if you wonder about the use of the word beautiful here, I can happily refer you to look at The Dialogues, starting at page 15, for a conversation about that very issue…!)

Speaking of books, if you’ve been part of the Science Friday Summer reading adventure, reading Hawking’s A Brief History of Time, you should know that I’ll be back on the show on Friday talking with Priyamvada Natarajan, producer Christie Taylor, and presenter Ira Flatow about the book one more time. There may also be an opportunity to phone in with questions! And do look at their website for some of the extra material they’ve been posting about the book, including extracts from last week’s live tweet Q&A.

Anyway, I’d better get back to prepping my class. I’ll be posting more about the semester (and many other matters) soon, so do come back.

-cvj

The post And So it Begins… appeared first on Asymptotia.

by Clifford at August 20, 2018 07:48 PM

August 18, 2018

Lubos Motl - string vacua and pheno

Quintessence is a form of dark energy
Tristan asked me what I thought about Natalie Wolchover's new Quanta Magazine article,
Dark Energy May Be Incompatible With String Theory,
exactly when I wanted to write something. Well, first, I must say that I already wrote a text about this dispute, Vafa, quintessence vs Gross, Silverstein, in late June 2018. You may want to reread that text because the comments below may be considered "just an appendix" to that older text. Since that time, I have exchanged some friendly e-mails with Cumrun Vafa. I am obviously more skeptical towards their ideas than they are but I think that I have encountered some excessive certainty from some of their main critics.

Wolchover's article sketches some basic points about this rather important disagreement about cosmology among string theorists. But there are some very unfortunate details. The first unfortunate detail appears in the title. Wolchover actually says that "dark energy might be incompatible with string theory". That's the statement she seems to attribute to Cumrun Vafa and co-authors.



But that misleading formulation is really invalid – it's not what Cumrun is saying. Here, the misunderstanding may be blamed on some sloppy "translation" of the technical terms that has become standard in the pop science press – and the excessively generalized usage of some jargon.




OK, what's going on? First of all, the Universe is expanding, isn't it? We're talking about cosmology, the big bang theory (which I don't capitalize – to make sure that I am not talking about the sitcom), and the expansion of the Universe was already seen in the 1920s although people only became confident about it some 50 years ago.

In the late 1990s, it was observed that the expansion wasn't slowing down, as widely expected, but speeding up. The accelerated expansion may be explained by dark energy. Dark energy is anything that is present everywhere in the vacuum and that tends to accelerate the expansion of the Universe. Dark energy, like dark matter, is invisible to optical telescopes (that's why both of them are called dark). But unlike dark matter, which has (like all matter or dust) the pressure \(p=0\), dark energy has nonzero pressure, namely \(p\lt 0\), in fact \(p\approx -\rho\), where \(\rho\) is the energy density. That's how dark energy and dark matter differ; dark energy's negative pressure is needed for its ability to accelerate the expansion of the Universe.
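To see why negative pressure accelerates the expansion, recall the textbook acceleration equation for the scale factor \(a(t)\) of a homogeneous Universe (in \(c=1\) units):

\[ \frac{\ddot a}{a} = -\frac{4\pi G}{3}\,(\rho + 3p). \]

For ordinary matter or dust with \(p=0\) the right-hand side is negative and the expansion decelerates; for dark energy with \(p \approx -\rho\) we get \(\rho + 3p \approx -2\rho \lt 0\), so \(\ddot a \gt 0\) and the expansion speeds up.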

Dark energy is supposed to be a rather general umbrella term that may be represented by several known, slightly different theoretical concepts described by equations of physics. So far, by far the most widespread and "canonical" or "minimalist" kind of dark energy has been the cosmological constant. That's really a number that is independent of space and especially of time (that's why it's called a constant), which Einstein added to his original equations of the general theory of relativity. Einstein's original goal was to allow the size of the Universe to be stable in time – because his equations seemed to imply that the Universe's size should evolve, much like the height of a freely falling apple. It just can't sit at a constant value – just like the apple usually doesn't sit in the air in the middle of the room.

But the expansion of the Universe was discovered. Einstein could have predicted it because it follows from the simplest form of Einstein's equations, as I said. That could have earned him another Nobel prize when the expansion was seen by Hubble. (Well, Einstein's stabilization by the cosmological constant term wouldn't really work even theoretically, anyway. The balance would be unstable, tending to tip over into expansion or implosion, like a pencil standing on its tip. Any tiny perturbation would be enough for this instability to grow exponentially.)

That's probably the main reason why Einstein labeled the introduction of the cosmological constant term "the greatest blunder of his life". Well, it wasn't the greatest blunder of his life: the denial of quantum mechanics and state-of-the-art physics in general in the last 30 years of his life was almost certainly a greater blunder.

In the late 1990s, the Universe's expansion was seen to accelerate, which is why it seemed obvious that Einstein's blunder wasn't a blunder at all, let alone the worst one: the cosmological constant term seems to be there and it's responsible for the acceleration of the Universe. Suddenly, Einstein's cosmological term (with a different numerical value than Einstein needed – but one that is of the same order) seemed like a perfect, minimalistic explanation of the accelerated expansion. Recall that Einstein's equations say\[

G_{\mu\nu} +\Lambda g_{\mu\nu} = \frac{8\pi G}{c^4} T_{\mu\nu}.

\] Note that even in the complicated SI units, there is no \(\hbar\) here – Einstein's general relativity is a classical theory that doesn't depend on quantum mechanics at all. Here, \[

G_{\mu\nu} = R_{\mu\nu} - \frac{1}{2} R g_{\mu\nu}

\] is the Einstein curvature tensor, constructed from the Ricci tensor and the Ricci scalar \(R\). It's some function of the metric and its first and especially second partial derivatives in the spacetime. On the right hand side of Einstein's equations, \(T_{\mu\nu}\) is the stress-energy tensor that knows about the sources, the density of mass/energy and momentum and their flow.

The \(\Lambda g_{\mu\nu}\), a simple term that adds an additional mixture of the metric tensor to Einstein's equations, is the cosmological constant term. It naturally reappeared in the late 1990s. It's a rather efficient theory. The term doesn't have to be there but in some sense, it's even "simpler" than Einstein's tensor, so why should it be absent? And it seems to explain the accelerated expansion, so we need it.

The theory is really natural, which is why the standard cosmological model has been the \(\Lambda\mathrm{CDM}\) model, i.e. a big bang theory with cold dark matter (CDM) and the cosmological constant term \(\Lambda\).

What about string theory?

String theory really predicts gravity. You may derive Einstein's equations, including the equivalence principle, from the vibrating strings. Einstein's theory of gravity is a prediction of string theory, which is still one of the main reasons to be confident that string theory is on the right track to find a deeper or final theory in physics, to say the least. Aside from gravitons and gravity (and Einstein's equations that may be derived from string theory for this force), string theory also predicts gauge fields and matter fields such as leptons and quarks. They have their (Dirac, Maxwell...) equations and their stress-energy tensors also enter as terms in \(T_{\mu\nu}\) on the right hand side of Einstein's equations.

String theory demonstrably predicts Einstein's equations as the low-energy limit for the massless, spin-two field (the graviton field) that unavoidably arises as a low-lying excitation of a vibrating string. To some extent, this appearance of Einstein's equations is guaranteed by consistency of the theory (or by the relevant gauge invariance, namely the diffeomorphisms) – and string theory is consistent (which is a highly unusual, and probably unprecedented, virtue of string theory among quantum mechanical theories dealing with massless spin-two fields).

Does string theory also predict the cosmological constant term, one that Einstein originally included in the equations? At this level, the answer is unquestionably Yes and Cumrun Vafa and pals surely agree. To say the least, string theory predicts lots of vacua with a negative value of the cosmological constant, the anti de Sitter (AdS) vacua. In fact, those are the vacua where the holographic principle of quantum gravity may be shown rather rigorously – holography takes the form of Maldacena's AdS/CFT correspondence.

There are lots of Minkowski, \(\Lambda=0\), vacua in string theory. And there are also lots of AdS, \(\Lambda\lt 0\), vacua in string theory. I think that the evidence is clear and no one who is considered a real string theorist by most string theorists disputes the statement that both groups of vacua, flat Minkowski vacua and AdS vacua, are predicted by string theory.

The real open question is whether string theory allows the existence of \(\Lambda \gt 0\) (de Sitter or dS) vacua. Those seem to be needed to describe the accelerated expansion of the Universe in terms of the cosmological constant. After 2000, the widespread view – if counted by the number of heads or the number of papers – was that string theory allowed a positive cosmological constant. Even though I still find de Sitter vacua in string theory plausible, I believe that it's fair to say that the frantic efforts to spread this de Sitter view – and write papers about de Sitter in string theory – may be described as a sign of groupthink in the community.

There have always been reasons to doubt whether string theory allows de Sitter vacua at all. At the end of the last millennium, Maldacena and Nunez wrote a paper with a no-go theorem. It was mostly based on supergravity, a supersymmetric extension of Einstein's general relativity and a low-energy limit of superstring theories, but people generally believed that this approximation of string theory was valid in the context of the proof.

Sociologically, you may also want to know that in the 1990s, Edward Witten was "predicting" that the cosmological constant had to be exactly zero (and a symmetry-like principle would be found that implies the vanishing value). He was motivated by the experience with string theory. Even before Maldacena and Nunez and lots of similar work, it looked very hard to establish de Sitter, \(\Lambda \gt 0\) vacua in string theory. However, some of these problems could have been – and were – considered just technical difficulties. Why? Because if the cosmological constant is positive, you don't have any time-like Killing vectors and there can be no unbroken spacetime supersymmetry. Controlled stringy calculations only work when the spacetime supersymmetry is present (and guarantees lots of cancellations etc.) which is why people were willing to think that the difficulties in finding de Sitter vacua in string theory were only technical difficulties – caused by the hard calculations in the case of a broken supersymmetry.

However, aside from Maldacena-Nunez, we got additional reasons to think that string theory might prohibit de Sitter vacua in general. Cumrun Vafa's Swampland – the term for an extension of the (nice stringy) landscape that also includes effective field theories that string theory wouldn't touch, not even with a long stick – implies various general (sometimes qualitative, sometimes quantitative) predictions of string theory that hold in all the stringy vacua, despite their high number. Along with his friend Donald Trump, Cumrun Vafa has always wanted to drain the swamp. ;-)

The Swampland program has produced several, more or less established, general laws of string theory – that may also be considered consequences of a consistent theory of quantum gravity. Wolchover mentions that the most well-established example of a Swampland law is our "weak gravity conjecture". Gravity (among elementary particles) is much weaker than other forces in our Universe – and in fact, it probably has to be the case in all Universes that are consistent at all.
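Schematically, and glossing over the order-one factors that depend on conventions, the weak gravity conjecture says that for every \(\mathrm{U}(1)\) gauge field with coupling \(g\) there must exist a particle with charge \(q\) and mass \(m\) obeying

\[ m \;\lesssim\; g\, q\, M_{\rm Pl}, \]

i.e. a particle on which the gauge force is at least as strong as the gravitational one.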

The Swampland business contains many other laws like that, some of which are challenged more often than the weak gravity conjecture. Cumrun Vafa and his co-authors have presented an incomplete sketch of a proof that de Sitter vacua could be banned in string theory for Swampland reasons – for general reasons similar to those that guarantee that gravity is the weakest force.

This assertion is unsurprisingly disputed by lots of people, especially people around Stanford, because Stanford University (with Linde, Kallosh, Susskind, Kachru, Silverstein, and many others) has been the hotbed of the "standard stringy cosmology" since 2000. They wrote lots of papers about cosmology, starting from the KKLT paper, and the most famous ones have thousands of citations. At some level, authors of such papers may be tempted to think that their papers just can't be wrong.

But even the main claims of papers with thousands of citations may ultimately be wrong, of course. Sadly, I must say that some of this Stanford environment likes to use groupthink – and arguments about authorities and the number of papers – that resembles the "consensus science" about global warming. Sorry, ladies and gentlemen, but that's not how science works.

Doubts about the KKLT construction are reasonable because the KKLT and similar papers still build on certain assumptions and approximations. I am confident it is correct to say that the authors of some of the critical papers questioning the KKLT (especially the final, de Sitter "uplift" of some intermediate AdS vacua, an uplift that is achieved by the addition of some anti-D3-branes) are competent physicists – at least "basically indistinguishable" in competence from the Stanford folks. See e.g. Thomas Van Riet's TRF guest blog from November 2014 (time is fast, 1 year per year).

Cumrun Vafa et al. don't want to say that string theory has been ruled out. Instead, they say that in string theory, the observed dark energy is represented by quintessence which is just a form of dark energy (read the first sentence of the Wikipedia article I just linked to) – and that's why Wolchover's title that "dark energy is incompatible with string theory" is so misleading. I think that the previous sentence is enough for everyone to understand the main unfortunate terminological blunder in Wolchover's article. Cumrun and pals say that dark energy is described by quintessence, a form of dark energy, in string theory. They don't say that dark energy is impossible in string theory.

Wolchover's blunder may be blamed on the habit of considering the phrase "dark energy" to be the pop-science equivalent of the "cosmological constant". Well, they are not quite equivalent, and to understand the proposals by Cumrun Vafa et al., the difference between the terms "dark energy" and "cosmological constant" is absolutely paramount.

Quintessence is a philosophically, if not spiritually, sounding word but in cosmology it's just a fancy word for an ordinary time-dependent generalization of the cosmological constant – one that results from the potential energy of a new, inflaton-like scalar field. String theory often predicts many scalar fields; some of them may play the role of the inflaton, while others – similar ones – may be the quintessence that fills our Universe with the dark energy that is responsible for the accelerated expansion.
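In equations (these are just the standard definitions, nothing specific to the new papers), a quintessence field \(\phi\) with potential \(V(\phi)\) contributes the energy density and pressure

\[ \rho_\phi = \tfrac{1}{2}\dot\phi^2 + V(\phi), \qquad p_\phi = \tfrac{1}{2}\dot\phi^2 - V(\phi), \]

so its equation of state \(w = p_\phi / \rho_\phi\) is close to \(-1\) when the field rolls slowly (\(\dot\phi^2 \ll V\)) but is generally time-dependent, unlike a true cosmological constant, which has \(w = -1\) exactly and forever.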

Now, the disagreement between "Team Vafa" and "Team Stanford" may be described as follows:
Team Stanford uses the seemingly simplest description, one using Einstein's old cosmological constant. It's really constant, string theory allows it, and elaborate – but not quite exact – constructions with antibranes exist in the literature. They use lots of sophisticated equations, do many details very accurately and technically, but the question whether these de Sitter vacua exist remains uncertain because approximations are still used. Team Stanford ignores the uncertainty and sometimes intimidates other people by sociology – by a large number of authors who have joined this direction. The cosmological constant may be positive, they believe, and there are very many, like the notorious number \(10^{500}\), ways to obtain de Sitter vacua in string theory. We may live in one of them. Because of the high number, the predictive power of string theory may be reduced and some form of the multiverse or even the anthropic principle may be relevant.

Team Vafa uses a next-to-simplest description of dark energy, quintessence, which is a scalar field. This scalar field evolves and the potential normally needs to be fine-tuned even more so than the cosmological constant. But Team Vafa says that due to some characteristically stringy relationships, the new, added fine-tuning is actually not independent from the old one, the tuning of the apparently tiny cosmological constant, so from this viewpoint, their picture might be actually as bad (or as good) as the normal cosmological constant. The very large hypothetical landscape may be an illusion – all these constructions may be inconsistent and therefore non-existent, due to subtle technical bugs overlooked by the approximations or, equivalently, due to very general Swampland-like principles that may be used to kill all these hypothetical vacua simultaneously. Team Vafa doesn't have too many fancy mathematical calculations of the potential energy and it doesn't have a very large landscape. So in this sense, Team Vafa looks less technical and more speculative than Team Stanford. But one may argue that Team Stanford's fancy equations are just a way to intimidate the readers and they don't really increase the probability that the stringy de Sitter vacua exist.
These are just two very different sketches of how dark energy is actually incorporated in string theory. They differ in some basic statements, in the expectation of "how technical an adequate paper answering a question should be", and in many other respects. I think we can't be certain which of them, if any, is right – even though Team Stanford would be tempted to disagree. But their constructions simply aren't waterproof and they look arbitrary or contrived from many points of view. And yes, as you could have figured out, I do have some feeling that the way of argumentation by Team Stanford has always been similar to the "consensus science" behind the global warming hysteria. Occasional references to the "consensus" and a large number of papers and authors – and equations that seem complicated but, if you think about their implications, don't really settle the basic question (whether the de Sitter vacua – or the dangerous global warming – exist at all).

Team Vafa proposes a new possibility and I surely believe it deserves to be considered. It's "controversial" in the sense that Team Stanford is upset, especially some of the members such as E.S. But I dislike Wolchover's subtitle:
A controversial new paper argues that universes with dark energy profiles like ours do not exist in the “landscape” of universes allowed by string theory.
What's the point of labeling it "controversial"? It may still be right. Strictly speaking, the KKLT paper and the KKLT-based constructions by Team Stanford are controversial as well. These a priori labels just don't belong in science reporting, I think – they belong in the reporting about pseudosciences such as the global warming hysteria. Reasonable people just don't give a damn about these labels. They care about the evidence. Cumrun Vafa is a top physicist, he and his pals have proposed some ideas and presented some evidence, and this evidence hasn't really been killed by solid counter-evidence as of now.

Incidentally, after less than two months, Team Vafa already has 23+19 citations. So it doesn't look like some self-evidently wrong crackpot papers, like papers claiming that the Standard Model is all about octonions.

I was also surprised by another adjective used by Wolchover:
In the meantime, string theorists, who normally form a united front, will disagree about the conjecture.
Do they form a united front? What is that supposed to mean, and what's the evidence that the statement is correct, whatever it means? Are all string theorists members of Marine Le Pen's National Front? Boris Pioline could be one but I think that even he is not. ;-) String theorists are theoretical physicists at the current cutting edge of fundamental physics and they do the work as well as they can. So when something looks clearly proven by some papers, they agree about it. When something looks uncertain, they are individually uncertain – and/or they disagree about the open questions. When a possible new loophole is presented that challenges some older lore or no-go not-yet-theorems, people start to think about the new possibilities and usually have different views about them, at least for a while.

What is Wolchover's "front" supposed to be "united" for or against? String theorists are united in the sense that they take string theory seriously. Well, that's a tautology. They wouldn't be called string theorists otherwise. String theory also implies something so they of course take these implications – as far as they're clearly there – seriously. But is there any valid, non-tautological content in Wolchover's statement about the "united front"?

It's complete nonsense to say that string theorists are "more united as a front" than folks in any other typical scientific discipline that does things properly. String theorists have disagreed about numerous things that didn't seem settled to some of them. I could list many technical examples but one recent example is very conceptual – the firewall by the late Joe Polchinski and his team. There were sophisticated constructions and equations in the papers by Polchinski et al. but the existence of the firewalls obviously remained disputed, and I think that almost all string theorists think that firewalls don't exist in any useful operational sense. But they followed the papers by Polchinski et al. to some extent. Polchinski and others weren't excommunicated for a heresy in any sense – despite the fact that the statement "the black holes don't have any interior at all" would unquestionably be a radical change of the lore.

This disagreement about the representation of dark energy within string theory is comparably deep and far-reaching as the firewall wars.

Again, I still assign a probability above 50% to the basic picture of Team Stanford, which leads to a cosmological constant from string theory. But I don't think it has been proven (I have issued a similar warning about \(P\neq NP\) and other things). I have communicated with many apparently smart and technically powerful folks who had sensible arguments against the validity of the basic conclusions of the KKLT. I am extremely nervous about the apparent efforts of some Stanford folks to "ban any disagreement" about the KKLT-based constructions, a ban that would be "justified" by the existence of many papers and their mutual citations.

That's not how actual science can progress for very long. If folks like Vafa have doubts about de Sitter vacua in string theory and all related constructions, and they propose quintessence models that could be more natural than once believed (there were simple reasons why quintessence would have been dismissed by string theorists, including myself, just a few years ago), they must have the freedom – not just formally, but also in practice – to pursue these alternative scenarios, regardless of the number of papers in the literature that take KKLT for granted! Only when the plausibility and attractiveness of these ideas really disappears, according to the body of experts, could it make sense to suggest that Vafa seems to be losing.

These two pictures offer very different sketches of how the real world is realized within string theory. Indeed, the string phenomenological communities that would work on these two possibilities could easily evolve into "two separated species" that can't talk to each other usefully (although both of them would still be trained with the help of the same textbooks, up to a basic textbook of string theory). But as long as we're uncertain, this splitting of the research into several different possibilities is simply the right thing that should happen. Putting all our eggs in one basket when we're not quite sure about the right basket would simply be wrong.

Wolchover also mentions the work of Dr Wrase. I haven't read that so I won't comment.

But I will comment on some remarks by Matt Kleban (trained at Team Stanford, now NYU) such as
Maybe string theory doesn’t describe the world. [Maybe] dark energy has falsified it.
Well, that's nice. String theory is surely falsifiable and such things might happen, which would be a big event. But I think it's obvious that Kleban isn't really taking the side of the string theory critics. Instead this statement – that dark energy may have falsified string theory – is a subtle demagogic attack against Team Vafa, who are the people he actually cares about (he doesn't care about Šm*its). Effectively, Matt is trying to compare Vafa et al. to Šmoits. If the dark energy in string theory doesn't work in the Stanford way, Matt effectively says, I will scream and cry, and you will give the theory up. Matt knows that the real people whom he cares about wouldn't consider string theory ruled out for similar reasons, so he's effectively saying that they shouldn't buy Team Vafa's claims, either.

Sorry, Matt, but that's demagogy. Team Vafa doesn't really claim to have falsified string theory. There is a genuine new possibility whether you like to admit it or not. Also, Matt expressed his attacks against Team Vafa using a different verbal construction:
He stresses that the new swampland conjecture is highly speculative and an example of “lamppost reasoning,”...
Cute, Matt. I always love it when people complain about lamppost reasoning. I've had funny discussions with both Brian Greene and Lisa Randall about this phrase before they published their popular books. Lisa felt very entertained when I said it was actually rational to spend more time looking under the lamppost. But it is rational.

I must explain the proverb here. There exists some mathematical set of possibilities in theoretical physics or string theory but only some of them have been discovered or understood, OK? So we call the things that have been understood or studied (intensely enough) "the insights under the lamppost". Now, "lamppost reasoning" is a criticism used by some people who accuse others of a specific kind of bias. What is this sin or bias supposed to be? Well, the sin is that these people only search for their lost keys under the lamppost.

Now, this is supposed to be funny and to immediately mock the perpetrators of the "sin" and kill their arguments. If you lose your keys somewhere, it's a matter of luck whether the keys are located under a lamppost, where you could see them, or elsewhere, where you couldn't. So obviously, you should look for the keys everywhere, including places that aren't illuminated by the lamp – Kleban and Randall say, among others.

But there's a problem with this recommendation. You can't find the keys in the dark very easily – because you don't see anything there. Perhaps you could if you swept the whole surface with your fingers. But it's harder, and the dark area may be very large. If you want to increase the probability that you find something, you should appreciate the superiority of vision and primarily look at the places where you can see something! You aren't guaranteed to find the keys but your probability of finding them per unit time may be higher because you can see there.

And there might even exist reasons why the keys are even more likely to be under the lamppost. When you were losing them, you probably preferred to walk in places where you could see, too. You may have lost them while checking the contents of your wallet, and you were more likely to do that under the lamppost. So that's why you were more likely to be under the lamppost at that time, too! Similarly, when God was creating the world, assuming Her mathematical skills are similar to ours, She was likely to start with the things that were relatively easy for us to discover and clarify, too. So She was more likely to drop our Universe under the lamppost, and that's why it's right to focus our attention there.
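To make this slightly more quantitative – a toy model of my own, with made-up symbols, not anything from Wolchover's article – suppose the keys are under the lamppost with prior probability \(q\), and you can search at rate \(r_L\) in the light but only at rate \(r_D \ll r_L\) in the dark. The expected rates of success are

\[ \text{rate}_{\text{light}} = q\, r_L, \qquad \text{rate}_{\text{dark}} = (1-q)\, r_D , \]

so looking under the lamppost wins whenever \(r_L / r_D > (1-q)/q\) – and with \(r_L \gg r_D\), even a modest prior \(q\) is enough.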

For a researcher, it's damn reasonable to focus on things that are easier to understand properly.

The two situations (keys, physics) aren't quite analogous but they're close enough. My claim is even clearer for the metaphorical "lamppost" of physics. If you want to settle a question, such as the existence of de Sitter vacua, you simply have to build primarily on the concepts – both general principles and particular constructions – that have been understood well enough. You can't build on things that are completely unknown. And if you build on things that are only known vaguely or with a lot of uncertainty, you can be misled easily!

So in some sense, I am saying that you should look for your keys under the lamppost, and then increase the sensitivity of your retinas and extend the range that you have control over. That's how knowledge normally grows – but there always exist regions in the space of ideas and facts that aren't understood yet. The suggestion that claims in physics may be supported by constructions that are either completely unknown or badly understood is just ludicrous. Such claims may sound convincing to some people because the keys may be anywhere, even in the dark. But in the dark of ignorance, science can't be applied, and we must appreciate that all our scientific conclusions may only be based on the things that have been illuminated – all of our legitimate science is built out of insights about the vicinity of the lamppost.

Whoever claims to have knowledge derived from the dark is a charlatan – sorry, but it's true, Lisa and Matt! In this particular case, it's totally sensible for Team Vafa to evaluate the experience with the known constructions of the vacua and to conclude that it seems rather convincing that no de Sitter vacua exist in string theory and that the existing counterexamples are fishy and likely to be inconsistent. This evidence is circumstantial because it builds on the set of constructions that have been studied or illuminated – constructions under the lamppost – but that's still vastly better than making up your facts and issuing far-reaching claims about the "world in the dark" that we have no real evidence of!

You surely expect comparisons to politics as well. I can't avoid the feeling that the Team Stanford claim that de Sitter vacua simply have to exist is just another example of a certain egalitarianism or non-discrimination. Like men and women, anti de Sitter and de Sitter vacua must be treated as equal. But sorry to say, like men and women, de Sitter and anti de Sitter vacua are simply not equal. The constructions of these two classes within string theory look very different and, unlike for the anti de Sitter vacua, it's plausible and at least marginally compatible with the evidence that de Sitter vacua don't exist at all. A Palo Alto leftist may prefer a non-discrimination policy but the known facts, evidence, and constructions surely do discriminate between de Sitter and anti de Sitter spaces – and Team Vafa, like any honest scientist who actually cares about the evidence, assigns some importance to this highly asymmetric observation!

by Luboš Motl (noreply@blogger.com) at August 18, 2018 08:15 AM

Lubos Motl - string vacua and pheno

Search for ETs is more speculative than modern theoretical physics
Edwin has pointed out a new tirade against theoretical physics,
Theoretical Physics Is Pointless without Experimental Tests,
that Abraham Loeb published in the pages of Scientific American, which used to be an OK journal some 20 years ago. The title itself seems plagiarized from Deutsche Physik, i.e. Aryan Physics – which may be considered ironic for Loeb, who was born in Israel. And in fact, like his German role models, Loeb indeed tries to mock Einstein as well – and to blame Einstein's mistakes on the usage of thought experiments:
Einstein made great discoveries based on pure thought, but he also made mistakes. Only experiment and observation could determine which was which.

Albert Einstein is admired for pioneering the use of thought experiments as a tool for unraveling the truth about the physical reality. But we should keep in mind that he was wrong about the fundamental nature of quantum mechanics as well as the existence of gravitational waves and black holes...
Loeb earns a small, unimportant plus for acknowledging that Einstein was wrong on quantum mechanics. However, as an argument against theoretical physics based on thought experiments and against the emphasis on patient and careful mental work in general, the sentences above are at most demagogic.

The fact that Einstein was wrong about quantum mechanics, gravitational waves, or black holes doesn't imply anything wrong about the usage of thought experiments and other parts of modern physics. There's just no way to credibly show such an implication. Other theorists have used better thought experiments, have thought about them more carefully, and some of them correctly figured out that quantum mechanics had to be right and that gravitational waves and black holes had to exist.

The true fathers of quantum mechanics, especially Werner Heisenberg, were really using Einstein's new approach based on thought experiments and principles, and just like Einstein, they carefully tried to remove the assumptions about physics that couldn't have been operationally established (such as absolute simultaneity, killed by special relativity; and the objective existence of values of observables before an observation, killed by quantum mechanics).

Note that gravitational waves as well as black holes were detected many decades after their theoretical discovery. The theoretical discoveries almost directly followed from Einstein's equations. So Einstein's mistakes meant that he didn't trust (his) theory enough. It surely doesn't mean and cannot mean that Einstein trusted theories and theoretical methods too much. Because Loeb has drawn this wrong conclusion, that's rather strong evidence of a defect in Loeb's central processing unit.



The title may be interpreted in a way that makes sense. Experiments surely matter in science. But everything else that Loeb is saying is just wrong and illogical. In particular, Loeb wrote this bizarre paragraph about Galileo and timing:
Similar to the way physicians are obliged to take the Hippocratic Oath, physicists should take a “Galilean Oath,” in which they agree to gauge the value of theoretical conjectures in physics based on how well they are tested by experiments within their lifetime.
Well, I don't know how I could judge theories according to experiments that will be done after I die, after my lifetime. That's clearly impossible so this restriction is vacuous. On the other hand, is it OK to judge theories according to experiments that were done before our lifetimes or before physicists' careers?

You bet. Experimental or empirical facts that have been known for a long time are still experimental or empirical facts. In most cases, they may be repeated today, too. People often don't bother to repeat experiments that re-establish well-established truths. But these old empirical facts are still crucial for the work of every theorist. They are sufficient to determine lots of theoretical principles.

You know, it's correct to say that science is a dialogue between the scientist and Nature. But this is only true in the long run. It doesn't mean that every day or every year, both of them have to speak. If Nature doesn't want to speak, She has the right to stay silent. And She often stays silent even if you complain that She doesn't have that right. She ignores your restrictions on Her rights! So at the LHC after the Higgs boson discovery, Nature has chosen to remain silent so far – or She keeps on saying "the Standard Model will look fine to you, human germ".

You can't change this fact by some wishful thinking about "dialogues". Theorists just didn't get new post-Higgs data from the LHC because, so far, there are no new data at the LHC. They need to keep on working, which makes it obvious that they have to use older facts and new theoretical relationships between them, new hypotheses etc. In the absence of new experimental data, it is obvious that theorists' work has to be overwhelmingly theoretical or, in Loeb's jargon, it has to be a monologue! When Nature has something new and interesting to say (through experiments), Nature will say it. But theorists can't be silent or "doing nothing" just because Nature is silent these years! Only a complete idiot could fail to realize these points and agree with Loeb.




What Loeb actually wants to say is that a theorist should be obliged to plan the experiments that will settle all his theoretical ideas within his lifetime. But that's not possible. The whole point of scientific research in physics is to study questions about the laws of Nature that haven't been answered yet. And because they haven't been answered yet, people don't know and can't know what the answer will be – or even when it will be found.

An experimenter (or a boss or manager of an experimental team) may try to plan what the experiment will do, when it will do these things, and what answers it could provide us with. Even this planning sometimes goes wrong, there are delays etc. But this is not the main problem here. The real problem is that the result of a particular experiment is almost never the real question that people want answered. An experiment is often just a step towards adjusting our opinions about a question – and whether this step is a big or small one depends on what the experimental outcome actually is, which is not known in advance.

Loeb has mentioned examples of such questions himself. People actually wanted to know whether there were black holes and gravitational waves. But a fixed experiment with a fixed budget, predetermined sensitivity etc. simply cannot be guaranteed to produce the answer. That's the crucial point that kills Loeb's Aryan Physics as a proposed (not so) new method to do science.

For example, both gravitational waves and black holes are rather hard to see. Similarly, the numerical value of the cosmological constant (or vacuum energy density) is very small. It's this smallness that has implied that one needed a long – and impossible to plan – period of time to discover these things experimentally.

Because black holes, gravitational waves, and a positive cosmological constant needed fine gadgets – and it was not known in advance how fine they had to be – does that mean that theorists should have been banned from studying these questions and concepts? The correct answer is obviously No – while Loeb's answer is Yes. Almost all of theoretical physics is composed of such questions. We just can't know in advance how much time will be needed to settle the questions we care about (and, as Edwin emphasized, there is nothing special about the timescale given by "our lifespan"). We can't know what the answers will be. We can't know whether the evidence that settles these questions will be theoretical in character, dependent on somewhat new experimental tools, or dependent on completely new experimental tools, discoveries, and inventions.

None of these things about the future flow of evidence can be known now (otherwise we could settle all these things now!), which is why it's impossible for these unknown answers to influence what theorists study now! The influences that Loeb demands would violate causality. If theorists knew in advance when the answer would be obtained, they would really have to know what the answer is – as I mentioned above, the confirmation of a null hypothesis always means that the answer to the interesting qualitative question has been postponed. But then the whole research would be pointless.

So if science followed Loeb's Aryan Physics principles, it would be pointless! The real science follows the scientific method. Scientists must make decisions and conclusions, often conclusions blurred by some uncertainty, right now, based on the facts that are already known right now – not according to some 4-year plans, 5-year plans, or 50-year plans. And if their research depends on some assumptions, they have to articulate them and go through the possibilities (ideally all of them).

It's also utterly demagogic for him to talk about a "Galilean Oath" because Galileo Galilei disagreed with ideas that were very similar to Loeb's. In particular, Galileo never avoided formulating hypotheses that could have needed a long time to be settled. One example where he was wrong was his belief that comets were atmospheric phenomena. That belief looks rather silly to me (hadn't people already observed the periodicity of some comets, by the way?) but the knowledge was very different then. Science needed a long time to really settle the question.

But more generally, Galileo did invent lots of conjectures and hypotheses because those were the real new concepts that became widespread once he started the new method, the scientific method. Do a Google search for "Galileo conjectured" or "Galileo hypothesized". Of course you get lots of hits.

As e.g. Feynman said in his simple description of the scientific method, the search for new laws works as follows: First, we guess the laws. Then we compute the consequences. And then we compare the consequences with the empirical data.

Note the order of the steps: the guess must be at the very beginning, scientists must be free to present all such possible hypotheses and guesses, and the computation of the consequences must still be close to the beginning. Loeb proposes something entirely different. He wants some planning of future experiments to be placed at the beginning, and this planning should restrict what the physicists are allowed to think about in the first place.

Sorry, that wouldn't be science and it couldn't have produced interesting results, at least not systematically. And these restrictions are indeed completely analogous to the bogus restrictions that the church officials – and later various philosophers etc. – tried to place on scientific research. Like Loeb, the church hierarchy also wanted the evidence to be direct in all cases. But one of Galileo's ingenious insights was his realization that the evidence may often be indirect or very indirect, yet one may still learn a great deal from it.

The simplest example of this "direct vs indirect" controversy is the telescope. Galileo improved telescope technology and made numerous new observations – such as those of the Jovian moons. The church hierarchy actually disputed that those satellites existed because the observation by telescopes wasn't direct enough for them. It took many years before people realized how incredibly idiotic such an argument was. It would be a straight denial of the evidence. Telescopes really see the same thing as the eyes when both see something. Sometimes, telescopes see more details than the eyes – so they must be considered nothing other than improved eyes. The observations from eyes and telescopes are equally trustworthy. But telescopes have a better resolution.

Laymen trust telescopes today even though telescope observations are "indirect" ways to see something. But the tools to observe and deduce things in physics have become vastly more indirect than they were in Galileo's lifetime. And most laymen – including folks like Loeb – simply get lost in the long chains of reasoning. That's one reason why many people distrust science. Because they haven't verified the chains of reasoning individually (and most laymen wouldn't be smart or patient enough to do so), they believe that the long chains of reasoning and evidence just cannot work. But they do work and they are getting longer.

The importance of reasoning and theory-based generalizations was increasing much more quickly during Newton's lifetime – and it kept on increasing at an accelerating rate. Newton united celestial and terrestrial gravity, among other things. The falling apple and the orbiting Moon move because of the very same force that he described by a single formula. Did he have a "direct proof" that the apple is doing the same thing in the Earth's gravitational field as the Moon? Well, you can't really have a direct proof of such a statement – which some could describe as a metaphor. His theory was natural enough and compatible with the available tests. Some of these tests were quantitative yet not guaranteed to succeed at the beginning. So of course they increased the probability that the unification of celestial and terrestrial gravity was right. But whether such confirmations would arise, how strong and numerous they would be, and when they would materialize just isn't known at the beginning.
The risk for physics stems primarily from mathematically beautiful “truths,” such as string theory, accepted prematurely for decades as a description of reality just because of their elegance.
OK, this criticism of "elegance" is mostly a misinterpretation of pop science. Scientists sometimes describe their feelings – how their brains feel nice when things fit together. Sometimes they only talk about these emotional things in order to find some common ground with a journalist or another layman. But in the end, this type of beauty or elegance is very different from the beauty or elegance experienced by laymen or artists. The theoretical physicists' version of beauty or elegance reflects some rather technical properties of the theories, and the statement that these traits increase the probability that a theory is right can pretty much be proven.

But even if you disagree with these proofs, it doesn't matter because the scientific papers simply don't use beauty or elegance arguments prominently. When you read a new paper about some string dualities, string vacua, or anything of the sort, you don't really read "this would be beautiful, and therefore the value of some quantity is XY". Only when there are some calculations of XY do the authors claim that there is some evidence. Otherwise they call their propositions conjectures or hypotheses. And sometimes they use these words that remind us of the uncertainty even when a rather substantial amount of evidence is available, too.

But uncertainty is unavoidable in science. A person who feels sick whenever there is some uncertainty just cannot be a scientist. Despite the uncertainty, a scientist has to determine what seems more likely and less likely right now. When some things look very likely, they may be accepted as facts on a preliminary basis. Some other people's belief in these propositions may be weaker – and they may claim that the proposition was accepted prematurely. But in the end, some preliminary conclusions are being made about many things. Science just couldn't possibly work without them.

By the way, I forgot to discuss the subtitle of Loeb's article:
Our discipline is a dialogue with nature, not a monologue, as some theorists would prefer to believe
Note that he emphasizes that theoretical physics is "his discipline". It sounds similar to Smolin's fraudulent claims that he was a "string theorist". Smolin isn't a string theorist and doesn't have the intellectual abilities to ever become a string theorist. Whether Loeb is a theoretical physicist is at least debatable. He's the boss of Harvard's astronomy department. The word "astrophysicist" would surely be defensible. But the phrase "theoretical physicist" isn't quite the same thing. I hope that you remember Sheldon Cooper's explanation of the difference between a rocket scientist and a theoretical physicist.

Why doesn't Missy just tell them that Sheldon is a toll taker at the Golden Gate Bridge? ;-)

Given Loeb's fundamental problems with the totally basic methodology of theoretical physics – including thought experiments and long periods of careful and patient thinking uninterrupted by experimental distractions – I think it is much more reasonable to say that Loeb clearly isn't a theoretical physicist so his subtitle is a fraudulent effort to claim some authority that he doesn't possess.

OK, Loeb tried to hijack Galileo's name for some delusions about (or against) modern physics that Galileo would almost certainly disagree with. Galileo wouldn't join these Aryan-Physics-style attacks on theoretical physics. At some level, we may consider him a founder of theoretical physics, too.

SETI vs string theory

But my title refers to a particular bizarre coincidence in Loeb's criticism of theorists' thinking about things that could be experimentally inaccessible for the rest of our (or some living person's?) lifetimes. He wants the experimental results right now, doesn't he? A funny thing is that Loeb is also a key official at the Breakthrough Starshot Project, Yuri Milner's $100 million kite to be sent to greet the oppressed extraterrestrial minorities who live near Alpha Centauri, the nearest star to ours other than the Sun.

String theory is too speculative for him but the discussions with the ETs are just fine, aren't they? Loeb seems aware of the ludicrous situation in which he has maneuvered himself:
At the same time, many of the same scientists that consider the study of extra dimensions as mainstream regard the search for extraterrestrial intelligence (SETI) as speculative. This mindset fails to recognize that SETI merely involves searching elsewhere for something we already know exists on Earth, and by the knowledge that a quarter of all stars host a potentially habitable Earth-size planet around them.
From his perspective, the efforts to chat with extraterrestrial aliens are less speculative than modern theoretical physics. Wow. Why is it so? His argument is cute as well. SETI is just searching for something that is known to exist – intelligent life. However, a project that just searches for something that is known to exist – intelligent life – would carry the acronym SI only, and it would be completely pointless because the answer is known. SETI also has ET in the middle, you know, which stands for "extraterrestrial". And Loeb must have overlooked these two letters altogether.

It is not known at all whether there are other planets where intelligent life exists, and if such civilizations exist, what their density, age, longevity, appearance, and degree of similarity to life on Earth are. It's even more unknown or speculative how these hypothetical ETs, if they exist near Alpha Centauri, would react to Milner's kite. We couldn't even reliably predict how our civilization would react to a similar kite arriving at Earth. How could we make realistic plans about the reactions of a hypothetical extraterrestrial civilization?
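Just to illustrate how wide open these unknowns are – the numbers below are my own made-up illustrative guesses, not anything from Loeb's article or from the SETI literature – a Drake-style product swings by many orders of magnitude depending on what you assume for the biological and sociological factors:

```python
# Toy Drake-style estimate of the number of communicating civilizations in
# the galaxy. All factor values are made-up illustrative guesses; the only
# point is the enormous spread between "optimistic" and "pessimistic" inputs.
optimistic = dict(star_formation_rate=10,     # new stars per year
                  f_with_planets=1.0,
                  habitable_per_system=0.25,
                  f_develop_life=1.0,
                  f_develop_intelligence=0.1,
                  f_communicating=0.1,
                  lifetime_years=1e6)
pessimistic = dict(star_formation_rate=1,
                   f_with_planets=0.5,
                   habitable_per_system=0.25,
                   f_develop_life=1e-3,
                   f_develop_intelligence=1e-3,
                   f_communicating=0.01,
                   lifetime_years=1e3)

def drake(factors):
    """Multiply all Drake-style factors together."""
    n = 1.0
    for value in factors.values():
        n *= value
    return n

print(f"optimistic:  ~{drake(optimistic):,.0f} civilizations")
print(f"pessimistic: ~{drake(pessimistic):.1e} civilizations")
# the two answers differ by some ten orders of magnitude -- i.e. we simply don't know
```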

On the other hand, string theory is just a technical upgrade of quantum field theory – one that looks unique even 50 years after the birth of string theory. Quantum field theory and string theory yield basically the same predictions for the doable experiments, quantum field theory is demonstrably the relevant approximation of stringy physics, and this approximation has been successfully compared to the empirical data. Everything seems to work.

The extra dimensions are just scalar fields on the stringy world sheet, analogous to fields that are known to exist (and in this sense, the addition of an extra dimension is as mundane as the addition of an extra flavor of leptons or quarks). We have theoretical reasons to think that the total number of spacetime dimensions should be 10 or 11. Unlike the expectations about the ETs, this is not mere prejudice. There are actually calculations of the critical dimension. Joe Polchinski's "String Theory" textbook contains 7 different calculations of \(D=26\) for the bosonic string in the first volume; the realistic superstring analogously has \(D=10\). This is not like saying "there should be cow-like aliens near Alpha Centauri because the stars look alike and I like this assertion".
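To give a flavor of what such a calculation looks like – this is the standard light-cone-gauge heuristic, not necessarily identical to any of Polchinski's seven derivations – the \(D-2\) transverse oscillators of the bosonic string contribute a zero-point energy that gets zeta-regularized,

\[ \frac{D-2}{2}\sum_{n=1}^{\infty} n \;\longrightarrow\; \frac{D-2}{2}\,\zeta(-1) = -\frac{D-2}{24}, \qquad \alpha' M^2 \propto N - \frac{D-2}{24}, \]

and Lorentz invariance requires the first excited state (\(N=1\), a transverse vector with only \(D-2\) components) to be massless, so \(1-(D-2)/24=0\), i.e. \(D=26\); the analogous bookkeeping with world-sheet fermions included gives \(D=10\) for the superstring.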

How can someone say that this research of extensions of successful quantum field theories is as speculative as Skyping with extraterrestrial aliens, let alone more speculative than those big plans with the ETs? At some moments, you can see that some people have simply lost it. And Loeb has lost it. It makes no sense to talk to him about these matters. He seems to hate theoretical physics so fanatically that he's willing to team up not only with the Šmoit-like crackpots but also with extraterrestrial aliens in his efforts to fight against modern theoretical physics.

Too bad, Mr Loeb, but even if extraterrestrial intelligent civilizations exist, it won't help your case because these civilizations – because of the adjective "intelligent" – know that string theory is right and you are full of šit.

And that's the memo.



P.S.: I forgot to discuss the "intellectual power" paragraph:
Given our academic reward system of grades, promotions and prizes, we sometimes forget that physics is a learning experience about nature rather than an arena for demonstrating our intellectual power. As students of experience, we should be allowed to make mistakes and correct our prejudices.
Now, this is a bizarre combination of statements. Loeb says "physics is about" learning, not about demonstrating our intellectual power. "Physics is about" is a vague sequence of words, however. We should distinguish two questions: What drives people to do physics? And what decides their success?

What primarily drives the essential people to do physics is curiosity. Physicists want to know how Nature works. String theorists want lots of more detailed questions about Nature to be answered. Their curiosity is real and they don't give a damn whether an ideologue wants to prevent them from studying some questions: the curiosity is real, they know that they want to know, and some obnoxious Loeb-style babbling can't change anything about it.

Some people are secondary researchers. They do it because it's a good source of income or prestige or whatever. They study it because others have made it possible – others created the jobs, chairs, and so on. But the primary motivation is curiosity.

But then we have the question of whether one succeeds. Intellectual power isn't everything but it's obviously important. Loeb clearly wants to deny this importance – but he doesn't want to do it directly because the statement would sound idiotic, indeed. But why does he feel so uncomfortable about the need for intellectual power in theoretical physics?

He presents the intellectual power as the opposite of the validity of physical theories. This contrast is the whole point of the paragraph above. But this contrast is complete nonsense. There is no negative correlation between "intellectual power" and "validity of the theories that are found". On the contrary, the correlation is pretty much obviously positive.

In the end, his attack against intellectual power is fully analogous to the statement that ice hockey isn't about demonstrating one's physical strength and skills, it's about scoring goals. With the right emphasis, the sentence is correct. But only up to a point. The demonstration of physical skills and strength is also "what ice hockey is about". It's what drives some people. And the skills and strength are needed to do it well, too. The rhetorical exercise "either strength, or goals" – which is so completely analogous to Loeb's "either intellectual power, or proper learning of things about Nature" – is just a road to hell. The only possible implication of such a proposition would be to say that "people without the intellectual power should be made theoretical physicists". Does he really believe this makes any sense? Or why does he mix up the validity of theories with intellectual power in this negative way?

Well, let me tell you why. Because he is jealous of some people's intellectual powers, which are superior to his. And he is making the bet – probably correctly – that the readers of Scientific American's pages are dumb enough not to notice that his rant is completely illogical, from the beginning to the end.

by Luboš Motl (noreply@blogger.com) at August 18, 2018 08:15 AM

August 13, 2018

Andrew Jaffe - Leaves on the Line

Planck: Demographics and Diversity

Another aspect of Planck’s legacy bears examining.

A couple of months ago, the 2018 Gruber Prize in Cosmology was awarded to the Planck Satellite. This was (I think) a well-deserved honour for all of us who have worked on Planck during the more than 20 years since its conception, for a mission which confirmed a standard model of cosmology and measured the parameters which describe it to accuracies of a few percent. Planck is the latest in a series of telescopes and satellites dating back to the COBE Satellite in the early 90s, through the MAXIMA and Boomerang balloons (among many others) around the turn of the 21st century, and the WMAP Satellite (The Gruber Foundation seems to like CMB satellites: COBE won the Prize in 2006 and WMAP in 2012).

Well, it wasn’t really awarded to the Planck Satellite itself, of course: 50% of the half-million-dollar award went to the Principal Investigators of the two Planck instruments, Jean-Loup Puget and Reno Mandolesi, and the other half to the “Planck Team”. The Gruber site officially mentions 334 members of the Collaboration as recipients of the Prize.

Unfortunately, the Gruber Foundation apparently has some convoluted rules about how it makes such group awards, and the PIs were not allowed to split the monetary portion of the prize among the full 300-plus team. Instead, they decided to share the second half of the funds amongst “43 identified members made up of the Planck Science Team, key members of the Planck editorial board, and Co-Investigators of the two instruments.” Those words were originally on the Gruber site but in fact have since been removed — there is no public recognition of this aspect of the award, which is completely appropriate as it is the whole team who deserves the award. (Full disclosure: as a member of the Planck Editorial Board and a Co-Investigator, I am one of that smaller group of 43, chosen not entirely transparently by the PIs.)

I also understand that the PIs will use a portion of their award to create a fund for all members of the collaboration to draw on for Planck-related travel over the coming years, now that there is little or no governmental funding remaining for Planck work, and those of us who will receive a financial portion of the award will be encouraged to contribute to it as well (after, unfortunately, having to work out the tax implications of both receiving the prize and donating it back).

This seems like a reasonable way to handle a problem with no real fair solution, although, as usual in large collaborations like Planck, the communications about this left many Planck collaborators in the dark. (Planck also won the Royal Society 2018 Group Achievement Award which, because there is no money involved, could be uncontroversially awarded to the ESA Planck Team, without an explicit list. And the situation is much better than for the Nobel Prize.)

However, this seemingly reasonable solution reveals an even bigger, longer-standing, and wider-ranging problem: only about 50 of the 334 names on the full Planck team list (roughly 15%) are women. This is already appallingly low. Worse still, none of the 43 formerly “identified” members officially receiving a monetary prize are women (although we would have expected about 6 given even that terrible fraction). Put more explicitly, there is not a single woman in the upper reaches of Planck scientific management.
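As a quick sanity check of those numbers – under a deliberately naive binomial model that treats the 43 "identified" members as independent draws from a pool that is about 15% women, which is of course not how the list was actually assembled – the expected count and the chance of an all-male outcome look like this:

```python
# Toy binomial check of the numbers above: if the 43 "identified" members
# had been drawn independently from a pool that is ~15% women (a naive
# model, not how the list was actually chosen), how many women would we
# expect, and how likely is zero?
frac_women = 50 / 334            # ~0.15
n_identified = 43

expected = frac_women * n_identified
p_zero = (1 - frac_women) ** n_identified
print(f"expected number of women among 43: {expected:.1f}")
print(f"probability of zero women in this toy model: {p_zero:.2%}")
# roughly 6.4 expected; the chance of zero is about 0.1%
```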

This terrible situation was also noted by my colleague Jean-Luc Starck (one of the larger group of 334) and Olivier Berné. As a slight corrective to this, it was refreshing to see Nature’s take on the end of Planck dominated by interviews with young members of the collaboration including several women who will, we hope, be dominating the field over the coming years and decades.

by Andrew at August 13, 2018 10:07 PM

Axel Maas - Looking Inside the Standard Model

Fostering an idea with experience
In the previous entry I wrote how hard it is to establish a new idea, if the only existing option to get experimental confirmation is to become very, very precise. Fortunately, this is not the only option we have. Besides experimental confirmation, we can also attempt to test an idea theoretically. How is this done?

The best possibility is to set up a situation, in which the new idea creates a most spectacular outcome. In addition, it should be a situation in which older ideas yield a drastically different outcome. This sounds actually easier than it is. There are three issues to be taken care of.

The first two have something to do with a very important distinction: that between a theory and an observation. An observation is something we measure in an experiment, or calculate if we play around with models. An observation is always the outcome when we set something up initially and then look at it some time later. The theory should give a description of how the initial and the final stuff are related. This means that for every observation we look for a corresponding theory to give it an explanation. On top of this comes the additional modern idea in physics that there should not be a separate theory for every observation. Rather, we would like to have a unified theory, i.e. one theory which explains all observations. This is not yet the case. But at least we have reduced it to a handful of theories. In fact, for anything going on inside our solar system we need so far just two: the standard model of particle physics and general relativity.

Coming back to our idea, we now have the following problem. Since we do a gedankenexperiment, we are allowed to choose any theory we like. But since we are just a bunch of people with a bunch of computers, we are not able to calculate all the possible observations a theory can describe – not to mention all possible observations of all theories. And it is here where the problem starts. The older ideas still exist because they are not bad; rather, they explain a huge amount of stuff. Hence, for many observations in any theory they will still be more than good enough. Thus, to find spectacular disagreement, we do not only need to find a suitable theory. We also need to find a suitable observation to show disagreement.

And now enters the third problem: we actually have to do the calculation to check whether our suspicion is correct. This is usually not a simple exercise. In fact, the effort needed can make such a calculation a complete master's thesis. And sometimes even much more. Only after the calculation is complete do we know whether the observation and theory we have chosen were a good choice. Because only then do we know whether the anticipated disagreement is really there. And it may be that our choice was not good, and we have to restart the process.

Sounds pretty hopeless? Well, this is actually one of the reasons why physicists are famed for their tolerance of frustration. Such experiences are indeed inevitable. But fortunately it is not as bad as it sounds. And that has something to do with how we choose the observation (and the theory). This I did not specify yet. And just guessing would indeed lead to a lot of frustration.

The thing which helps us to hit the right theory and observation more often than not is insight and, especially, experience. The ideas we have tell us how theories function. I.e., our insights give us the ability to estimate what will come out of a calculation even without actually doing it. Of course, this will be a qualitative statement, i.e. one without exact numbers. And it will not always be right. But if our ideas are correct, it will usually work out. In fact, if we regularly failed to estimate correctly, this should require us to reevaluate our ideas. And it is our experience which helps us to get from insights to estimates.

This defines our process to test our ideas. And this process can actually be traced out well in our research. E.g. in a paper from last year we collected many such qualitative estimates. They were based on some much older, much cruder estimates published several years back. In fact, the newer paper already included some quite involved semi-quantitative statements. We then used massive computer simulations to test our predictions. They were indeed confirmed as well as possible with the amount of computers we had. This we reported in another paper. This gives us hope that we are on the right track.

So, the next step is to enlarge our testbed. For this, we have already come up with some first new ideas. However, these will be even more challenging to test. But it is possible. And so we continue the cycle.

by Axel Maas (noreply@blogger.com) at August 13, 2018 02:46 PM

August 08, 2018

Clifford V. Johnson - Asymptotia

Science Friday Book Club Q&A

Between 3 and 4 pm Eastern time today (very shortly, as I type!) I’ll be answering questions about Hawking’s “A Brief History of Time” as part of a Live twitter event for Science Friday’s Book Club. See below. Come join in! Hey SciFri Book Clubbers! Do you have had any … Click to continue reading this post

The post Science Friday Book Club Q&A appeared first on Asymptotia.

by Clifford at August 08, 2018 06:52 PM

August 01, 2018

Clifford V. Johnson - Asymptotia

DC Moments…

I'm in Washington DC for a very short time. 16 hours or so. I'd have come for longer, but I've got some parenting to get back to. It feels a bit rude to come to the American Association of Physics Teachers annual meeting for such a short time, especially because the whole mission of teaching physics in all the myriad ways is very dear to my heart, and here is a massive group of people devoted to gathering about it.

It also feels a bit rude because I'm here to pick up an award. (Here's the announcement that I forgot to post some months back.)

I meant what I said in the press release: It certainly is an honour to be recognised with the Klopsteg Memorial Lecture Award (for my work in science outreach/engagement), and it'll be a delight to speak to the assembled audience tomorrow and accept the award.

Speaking in an unvarnished way for a moment, I and many others who do a lot of work to engage the public with science have, over the years, had to deal with not being taken seriously by many of our colleagues. Indeed, suffering being dismissed as not being "serious enough" about our other [...] Click to continue reading this post

The post DC Moments… appeared first on Asymptotia.

by Clifford at August 01, 2018 04:54 AM

July 26, 2018

Sean Carroll - Preposterous Universe

Mindscape Podcast

For anyone who hasn’t been following along on other social media, the big news is that I’ve started a podcast, called Mindscape. It’s still young, but early returns are promising!

I won’t be posting each new episode here; the podcast has a “blog” of its own, and episodes and associated show notes will be published there. You can subscribe by RSS as usual, or there is also an email list you can sign up for. For podcast aficionados, Mindscape should be available wherever finer podcasts are served, including iTunes, Google Play, Stitcher, Spotify, and so on.

As explained at the welcome post, the format will be fairly conventional: me talking to smart people about interesting ideas. It won’t be all, or even primarily, about physics; much of my personal motivation is to get the opportunity to talk about all sorts of other interesting things. I’m expecting there will be occasional solo episodes that just have me rambling on about one thing or another.

We’ve already had a bunch of cool guests, check these out:

And there are more exciting episodes on the way. Enjoy, and spread the word!

by Sean Carroll at July 26, 2018 04:15 PM

July 20, 2018

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

Summer days, academics and technological universities

The heatwave in the northern hemisphere may (or may not) be an ominous portent of things to come, but it's certainly making for an enjoyable summer here in Ireland. I usually find it quite difficult to do any meaningful research when the sun is out, but things are a bit different when the good weather is regular. Most days, I have breakfast in the village, a swim in the sea before work, a swim after work and a game of tennis to round off the evening. Tough life, eh.

 

 

 

[Photo: Counsellor's Strand in Dunmore East]

So far, I’ve got one one conference proceeding written, one historical paper revamped and two articles refereed (I really enjoy the latter process, it’s so easy for academics to become isolated). Next week I hope to get back to that book I never seem to finish.

However, it would be misleading to portray a cosy image of a college full of academics beavering away over the summer. This simply isn’t the case around here – while a few researchers can be found in college this summer, the majority of lecturing staff decamped on June 20th and will not return until September 1st.

And why wouldn’t they? Isn’t that their right under the Institute of Technology contracts, especially given the heavy teaching loads during the semester? Sure – but I think it’s important to acknowledge that this is a very different set-up to the modern university sector, and doesn’t quite square with the move towards technological universities.

This week, the Irish newspapers are full of articles depicting the opening of Ireland's first technological university, and apparently the Prime Minister is anxious that our own college should get a move on. Hmm. No mention of the prospect of a change in teaching duties, or of increased facilities/time for research, as far as I can tell (I'd give a lot for an office that was fit for purpose). So will the new designation just amount to a name change? And this is not to mention the scary business of merging different institutes of technology. Those who raise questions about this now tend to get dismissed as resistors of progress. Yet the history of merging large organisations in Ireland hardly inspires confidence, not least because of a tendency for extra layers of bureaucracy to appear out of nowhere – HSE anyone?

by cormac at July 20, 2018 03:32 PM

July 19, 2018

Andrew Jaffe - Leaves on the Line

(Almost) The end of Planck

This week, we released (most of) the final set of papers from the Planck collaboration — the long-awaited Planck 2018 results (which were originally meant to be the “Planck 2016 results”, but everything takes longer than you hope…), available on the ESA website as well as the arXiv. More importantly for many astrophysicists and cosmologists, the final public release of Planck data is also available.

Anyway, we aren’t quite finished: those of you up on your roman numerals will notice that there are only 9 papers but the last one is “XII” — the rest of the papers will come out over the coming months. So it’s not the end, but at least it’s the beginning of the end.

And it’s been a long time coming. I attended my first Planck-related meeting in 2000 or so (and plenty of people had been working on the projects that would become Planck for a half-decade by that point). For the last year or more, the number of people working on Planck has dwindled as grant money has dried up (most of the scientists now analysing the data are doing so without direct funding for the work).

(I won’t rehash the scientific and technical background to the Planck Satellite and the cosmic microwave background (CMB), which I’ve been writing about for most of the lifetime of this blog.)

Planck 2018: the science

So, in the language of the title of the first paper in the series, what is the legacy of Planck? The state of our science is strong. For the first time, we present full results from both the temperature of the CMB and its polarization. Unfortunately, we don’t actually use all the data available to us — on the largest angular scales, Planck’s results remain contaminated by astrophysical foregrounds and unknown “systematic” errors. This is especially true of our measurements of the polarization of the CMB, unfortunately, which is probably Planck’s most significant limitation.

The remaining data are an excellent match for what is becoming the standard model of cosmology: ΛCDM, or “Lambda-Cold Dark Matter”, which is dominated, first, by a component which makes the Universe accelerate in its expansion (Λ, Greek Lambda), usually thought to be Einstein’s cosmological constant; and secondarily by an invisible component that seems to interact only by gravity (CDM, or “cold dark matter”). We have tested for more exotic versions of both of these components, but the simplest model seems to fit the data without needing any such extensions. We also observe the atoms and light which comprise the more prosaic kinds of matter we observe in our day-to-day lives, which make up only a few percent of the Universe.

All together, the sum of the densities of these components is just enough to make the curvature of the Universe exactly flat through Einstein’s General Relativity and its famous relationship between the amount of stuff (mass) and the geometry of space-time. Furthermore, we can measure the way the matter in the Universe is distributed as a function of the length scale of the structures involved. All of these are consistent with the predictions of the famous or infamous theory of cosmic inflation, which expanded the Universe when it was much less than one second old by factors of more than 10^20. This made the Universe appear flat (think of zooming into a curved surface) and expanded the tiny random fluctuations of quantum mechanics so quickly and so much that they eventually became the galaxies and clusters of galaxies we observe today. (Unfortunately, we still haven’t observed the long-awaited primordial B-mode polarization that would be a somewhat direct signature of inflation, although the combination of data from Planck and BICEP2/Keck gives the strongest constraint to date.)

Most of these results are encoded in a function called the CMB power spectrum, something I’ve shown here on the blog a few times before, but I never tire of the beautiful agreement between theory and experiment, so I’ll do it again: [Figure: Planck CMB power spectra] (The figure is from the Planck “legacy” paper; more details are in others in the 2018 series, especially the Planck “cosmological parameters” paper.) The top panel gives the power spectrum for the Planck temperature data, the second panel the cross-correlation between temperature and the so-called E-mode polarization, the left bottom panel the polarization-only spectrum, and the right bottom the spectrum from the gravitational lensing of CMB photons due to matter along the line of sight. (There are also spectra for the B mode of polarization, but Planck cannot distinguish these from zero.) The points are “one sigma” error bars, and the blue curve gives the best fit model.

As an important aside, these spectra per se are not used to determine the cosmological parameters; rather, we use a Bayesian procedure to calculate the likelihood of the parameters directly from the data. On small scales (corresponding to 𝓁>30 since 𝓁 is related to the inverse of an angular distance), estimates of spectra from individual detectors are used as an approximation to the proper Bayesian formula; on large scales (𝓁<30) we use a more complicated likelihood function, calculated somewhat differently for data from Planck’s High- and Low-frequency instruments, which captures more of the details of the full Bayesian procedure (although, as noted above, we don’t use all possible combinations of polarization and temperature data to avoid contamination by foregrounds and unaccounted-for sources of noise).
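To give a schematic picture of the first of those approximations (this is the generic form of a spectrum-based Gaussian likelihood, not the actual Planck likelihood code): at high \(\ell\) one writes something like

\[ -2\ln\mathcal{L}(\theta) \;\approx\; \sum_{\ell,\ell'} \bigl[\hat C_\ell - C_\ell(\theta)\bigr]\, \bigl(\mathsf{M}^{-1}\bigr)_{\ell\ell'}\, \bigl[\hat C_{\ell'} - C_{\ell'}(\theta)\bigr], \]

where \(\hat C_\ell\) are spectra estimated from (combinations of) the data, \(C_\ell(\theta)\) is the model prediction for cosmological parameters \(\theta\), and \(\mathsf{M}\) is the covariance of the estimates; at \(\ell < 30\) this Gaussian form is a poor approximation and a likelihood built more directly from the maps is used instead.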

Of course, not all cosmological data, from Planck and elsewhere, seem to agree completely with the theory. Perhaps most famously, local measurements of how fast the Universe is expanding today — the Hubble constant — give a value of H0 = (73.52 ± 1.62) km/s/Mpc (the units give how much faster something is moving away from us, in km/s, for each megaparsec (Mpc) of distance), whereas Planck (which infers the value within a constrained model) gives (67.27 ± 0.60) km/s/Mpc. This is a pretty significant discrepancy and, unfortunately, it seems difficult to find an interesting cosmological effect that could be responsible for these differences. Rather, we are forced to expect that it is due to one or more of the experiments having some unaccounted-for source of error.
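For a rough sense of how significant that gap is, a naive comparison that treats the two numbers as independent Gaussian measurements (a simplification of the real analysis) puts them a bit more than 3.5 standard deviations apart:

```python
import math

# Back-of-the-envelope significance of the H0 discrepancy quoted above,
# assuming the two measurements are independent with Gaussian errors
# (a simplification; the real comparison is more subtle).
h0_local, sig_local = 73.52, 1.62     # km/s/Mpc, local distance-ladder value
h0_planck, sig_planck = 67.27, 0.60   # km/s/Mpc, Planck (model-dependent) value

diff = h0_local - h0_planck
sigma = math.sqrt(sig_local**2 + sig_planck**2)
print(f"difference: {diff:.2f} km/s/Mpc, roughly {diff / sigma:.1f} sigma")
# prints: difference: 6.25 km/s/Mpc, roughly 3.6 sigma
```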

The term of art for these discrepancies is “tension” and indeed there are a few other “tensions” between Planck and other datasets, as well as within the Planck data itself: weak gravitational lensing measurements of the distortion of light rays due to the clustering of matter in the relatively nearby Universe show evidence for slightly weaker clustering than that inferred from Planck data. There are tensions even within Planck, when we measure the same quantities by different means (including things related to similar gravitational lensing effects). But, just as “half of all three-sigma results are wrong”, we expect that we’ve mis- or under-estimated (or to quote the no-longer-in-the-running-for-the-worst president ever, “misunderestimated”) our errors much or all of the time and should really learn to expect this sort of thing. Some may turn out to be real, but many will be statistical flukes or systematic experimental errors.

(If you were looking for a briefer but more technical fly-through of the Planck results — from someone not on the Planck team — check out Renee Hlozek’s tweetstorm.)

Planck 2018: lessons learned

So, Planck has more or less lived up to its advanced billing as providing definitive measurements of the cosmological parameters, while still leaving enough “tensions” and other open questions to keep us cosmologists working for decades to come (we are already planning the next generation of ground-based telescopes and satellites for measuring the CMB).

But did we do things in the best possible way? Almost certainly not. My colleague (and former grad student!) Joe Zuntz has pointed out that we don’t use any explicit “blinding” in our statistical analysis. The point is to avoid our own biases when doing an analysis: you don’t want to stop looking for sources of error when you agree with the model you thought would be true. This works really well when you can enumerate all of your sources of error and then simulate them. In practice, most collaborations (such as the Polarbear team with whom I also work) choose to un-blind some results exactly to be able to find such sources of error, and indeed this is the motivation behind the scores of “null tests” that we run on different combinations of Planck data. We discuss this a little in an appendix of the “legacy” paper — null tests are important, but we have often found that a fully blind procedure isn’t powerful enough to find all sources of error, and in many cases (including some motivated by external scientists looking at Planck data) it was exactly low-level discrepancies within the processed results that have led us to new systematic effects. A more fully-blind procedure would be preferable, of course, but I hope this is a case of the great being the enemy of the good (or good enough). I suspect that those next-generation CMB experiments will incorporate blinding from the beginning.

Further, although we have released a lot of software and data to the community, it would be very difficult to reproduce all of our results. Nowadays, experiments are moving toward a fully open-source model, where all the software is publicly available (in Planck, not all of our analysis software was available to other members of the collaboration, much less to the community at large). This does impose an extra burden on the scientists, but it is probably worth the effort, and again, needs to be built into the collaboration’s policies from the start.

That’s the science and methodology. But Planck is also important as having been one of the first of what is now pretty standard in astrophysics: a collaboration of many hundreds of scientists (and many hundreds more of engineers, administrators, and others without whom Planck would not have been possible). In the end, we persisted, and persevered, and did some great science. But I learned that scientists need to learn to be better at communicating, both from the top of the organisation down, and from the “bottom” (I hesitate to use that word, since that is where much of the real work is done) up, especially when those lines of hoped-for communication are usually between different labs or Universities, very often between different countries. Physicists, I have learned, can be pretty bad at managing — and at being managed. This isn’t a great combination, and I say this as a middle-manager in the Planck organisation, very much guilty on both fronts.

by Andrew at July 19, 2018 06:51 PM

Andrew Jaffe - Leaves on the Line

Loncon 3

Briefly (but not brief enough for a single tweet): I’ll be speaking at Loncon 3, the 72nd World Science Fiction Convention, this weekend (doesn’t that website have a 90s retro feel?).

At 1:30 on Saturday afternoon, I’ll be part of a panel trying to answer the question “What Is Science?” As Justice Potter Stewart once said in a somewhat more NSFW context, the best answer is probably “I know it when I see it” but we’ll see if we can do a little better than that tomorrow. My fellow panelists seem to be writers, curators, philosophers and theologians (one of whom purports to believe that the “the laws of thermodynamics prove the existence of God” — a claim about which I admit some skepticism…) so we’ll see what a proper physicist can add to the discussion.

At 8pm in the evening, for participants without anything better to do on a Saturday night, I’ll be alone on stage discussing “The Random Universe”, giving an overview of how we can somehow learn about the Universe despite incomplete information and inherently random physical processes.

There is plenty of other good stuff throughout the convention, which runs from 14 to 18 August. Imperial Astrophysics will be part of “The Great Cosmic Show”, with scientists talking about some of the exciting astrophysical research going on here in London. And Imperial’s own Dave Clements is running the whole (not fictional) science programme for the convention. If you’re around, come and say hi to any or all of us.

by Andrew at July 19, 2018 12:02 PM

July 16, 2018

Tommaso Dorigo - Scientificblogging

A Beautiful New Spectroscopy Measurement
What is spectroscopy ? 
(A) the observation of ghosts by infrared visors or other optical devices
(B) the study of excited states of matter through observation of energy emissions

If you answered (A), you are probably using a lousy internet search engine; and btw, you are rather dumb. Ghosts do not exist. 

Otherwise you are welcome to read on. We are, in fact, about to discuss a cutting-edge spectroscopy measurement, performed by the CMS experiment using lots of proton-proton collisions by the CERN Large Hadron Collider (LHC). 

read more

by Tommaso Dorigo at July 16, 2018 09:13 AM

July 12, 2018

Matt Strassler - Of Particular Significance

“Seeing” Double: Neutrinos and Photons Observed from the Same Cosmic Source

There has long been a question as to what types of events and processes are responsible for the highest-energy neutrinos coming from space and observed by scientists.  Another question, probably related, is what creates the majority of high-energy cosmic rays — the particles, mostly protons, that are constantly raining down upon the Earth.

As scientists' ability to detect high-energy neutrinos (particles that are hugely abundant, electrically neutral, very light-weight, and very difficult to observe) and high-energy photons (particles of light, though not necessarily of visible light) has become more powerful and precise, there's been considerable hope of getting an answer to these questions.  One of the things we've been awaiting (and been disappointed a couple of times) is a violent explosion out in the universe that produces both high-energy photons and neutrinos at the same time, at a high enough rate that both types of particles can be observed at the same time coming from the same direction.

In recent years, there has been some indirect evidence that blazars — narrow jets of particles, pointed in our general direction like the barrel of a gun, and created as material swirls near and almost into giant black holes in the centers of very distant galaxies — may be responsible for the high-energy neutrinos.  Strong direct evidence in favor of this hypothesis has just been presented today.   Last year, one of these blazars flared brightly, and the flare created both high-energy neutrinos and high-energy photons that were observed within the same period, coming from the same place in the sky.

I have written about the IceCube neutrino observatory before; it's a cubic kilometer of ice under the South Pole, instrumented with light detectors, and it's ideal for observing neutrinos whose motion-energy far exceeds that of the protons in the Large Hadron Collider, where the Higgs particle was discovered.  These neutrinos mostly pass through IceCube undetected, but one in 100,000 hits something, and debris from the collision produces visible light that IceCube's detectors can record.   IceCube has already made important discoveries, detecting a new class of high-energy neutrinos.

On Sept 22 of last year, one of these very high-energy neutrinos was observed at IceCube. More precisely, a muon created underground by the collision of this neutrino with an atomic nucleus was observed in IceCube.  To create the observed muon, the neutrino must have had a motion-energy tens of thousands of times larger than the motion-energy of each proton at the Large Hadron Collider (LHC).  And the direction of the neutrino's motion is known too; it's essentially the same as that of the observed muon.  So IceCube's scientists knew where, on the sky, this neutrino had come from.

(This doesn't work for typical cosmic rays; protons, for instance, travel in curved paths because they are deflected by cosmic magnetic fields, so even if you measure their travel direction upon their arrival at Earth, you don't then know where they came from. Neutrinos, being electrically neutral, aren't affected by magnetic fields and travel in a straight line, just as photons do.)

Very close to that direction is a well-known blazar (TXS-0506), four billion light years away (a good fraction of the distance across the visible universe).

The IceCube scientists immediately reported their neutrino observation to scientists with high-energy photon detectors.  (I’ve also written about some of the detectors used to study the very high-energy photons that we find in the sky: in particular, the Fermi/LAT satellite played a role in this latest discovery.) Fermi/LAT, which continuously monitors the sky, was already detecting high-energy photons coming from the same direction.   Within a few days the Fermi scientists had confirmed that TXS-0506 was indeed flaring at the time — already starting in April 2017 in fact, six times as bright as normal.  With this news from IceCube and Fermi/LAT, many other telescopes (including the MAGIC cosmic ray detector telescopes among others) then followed suit and studied the blazar, learning more about the properties of its flare.

Now, just a single neutrino on its own isn’t entirely convincing; is it possible that this was all just a coincidence?  So the IceCube folks went back to their older data to snoop around.  There they discovered, in their 2014-2015 data, a dramatic flare in neutrinos — more than a dozen neutrinos, seen over 150 days, had come from the same direction in the sky where TXS-0506 is sitting.  (More precisely, nearly 20 from this direction were seen, in a time period where normally there’d just be 6 or 7 by random chance.)  This confirms that this blazar is indeed a source of neutrinos.  And from the energies of the neutrinos in this flare, yet more can be learned about this blazar, and how it makes  high-energy photons and neutrinos at the same time.  Interestingly, so far at least, there’s no strong evidence for this 2014 flare in photons, except perhaps an increase in the number of the highest-energy photons… but not in the total brightness of the source.

The full picture, still emerging, tends to support the idea that the blazar arises from a supermassive black hole, acting as a natural particle accelerator, making a narrow spray of particles, including protons, at extremely high energy.  These protons, millions of times more energetic than those at the Large Hadron Collider, then collide with more ordinary particles that are just wandering around, such as visible-light photons from starlight or infrared photons from the ambient heat of the universe.  The collisions produce particles called pions, made from quarks and anti-quarks and gluons (just as protons are), which in turn decay either to photons or to (among other things) neutrinos.  And it's those resulting photons and neutrinos that have now been jointly observed.

Since cosmic rays, the mysterious high energy particles from outer space that are constantly raining down on our planet, are mostly protons, this is evidence that many, perhaps most, of the highest energy cosmic rays are created in the natural particle accelerators associated with blazars. Many scientists have suspected that the most extreme cosmic rays are associated with the most active black holes at the centers of galaxies, and now we have evidence and more details in favor of this idea.  It now appears likely that this question will be answerable over time, as more blazar flares are observed and studied.

The announcement of this important discovery was made at the National Science Foundation by Francis Halzen, the IceCube principal investigator, Olga Botner, former IceCube spokesperson, Regina Caputo, the Fermi-LAT analysis coordinator, and Razmik Mirzoyan, MAGIC spokesperson.

The fact that both photons and neutrinos have been observed from the same source is an example of what people are now calling “multi-messenger astronomy”; a previous example was the observation in gravitational waves, and in photons of many different energies, of two merging neutron stars.  Of course, something like this already happened in 1987, when a supernova was seen by eye, and also observed in neutrinos.  But in this case, the neutrinos and photons have energies millions and billions of times larger!

 

by Matt Strassler at July 12, 2018 04:59 PM

July 08, 2018

Marco Frasca - The Gauge Connection

ICHEP 2018

The great high-energy physics conference ICHEP 2018 is over and, as usual, I will spend some words on it. The big CERN collaborations presented their latest results. I think the most relevant of these is the evidence (3\sigma) that the Standard Model is at odds with the measurement of the spin correlation between top-antitop quark pairs. More is given in the ATLAS communication. As expected, increasing precision proves to be rewarding.

About the Higgs particle, after the important announcement of the observation of the ttH process, both ATLAS and CMS are pushing further to improve their precision. For the signal strength they give the following results. For ATLAS (see here)

\mu=1.13\pm 0.05({\rm stat.})\pm 0.05({\rm exp.})^{+0.05}_{-0.04}({\rm sig. th.})\pm 0.03({\rm bkg. th})

and CMS (see here)

\mu=1.17\pm 0.06({\rm stat.})^{+0.06}_{-0.05}({\rm sig. th.})\pm 0.06({\rm other syst.}).

The news is that the errors have shrunk and the two measurements agree. They show a small tension, 13% and 17% above the Standard Model expectation respectively, but the overall result is consistent with the Standard Model.

When the signal strength is unpacked into the contributions from the different production and decay processes, CMS claims some tension in the WW decay that should be kept under scrutiny in the future (see here). They presented results from 35.9{\rm fb}^{-1} of data and so, for the moment, there is no significant improvement with respect to the Moriond conference earlier this year. The situation is rather better for the ZZ decay, where no tension appears and the agreement with the Standard Model is there in all its glory (see here). Things are somewhat different, but not too much, for ATLAS: in this case they observe some tensions, but these are all below 2\sigma (see here). For the WW decay, ATLAS does not see anything above 1\sigma (see here).

So, although there are things to keep an eye on as the dataset grows (it will reach 100 {\rm fb}^{-1} this year), the Standard Model is in good health with respect to the Higgs sector, even if a lot remains to be answered and precision measurements are the main tool. The spin correlation in the tt pair is absolutely promising, and we should hope it will be confirmed as a discovery.

 

by mfrasca at July 08, 2018 10:58 AM

July 04, 2018

Tommaso Dorigo - Scientificblogging

Chasing The Higgs Self Coupling: New CMS Results
Happy Birthday Higgs boson! The discovery of the last fundamental particle of the Standard Model was announced exactly 6 years ago at CERN (well, plus one day, since I decided to postpone to July 5 the publication of this post...).

In the Standard Model, the theory of fundamental interactions among elementary particles which enshrines our current understanding of the subnuclear world, particles that constitute matter are fermionic: they have a half-integer value of a quantity we call spin; and particles that mediate interactions between those fermions, keeping them together and governing their behaviour, are bosonic: they have an integer value of spin.

read more

by Tommaso Dorigo at July 04, 2018 12:57 PM

June 25, 2018

Sean Carroll - Preposterous Universe

On Civility

Alex Wong/Getty Images

White House Press Secretary Sarah Sanders went to have dinner at a local restaurant the other day. The owner, who is adamantly opposed to the policies of the Trump administration, politely asked her to leave, and she did. Now (who says human behavior is hard to predict?) an intense discussion has broken out concerning the role of civility in public discourse and our daily life. The Washington Post editorial board, in particular, called for public officials to be allowed to eat in peace, and people have responded in volume.

I don’t have a tweet-length response to this, as I think the issue is more complex than people want to make it out to be. I am pretty far out to one extreme when it comes to the importance of engaging constructively with people with whom we disagree. We live in a liberal democracy, and we should value the importance of getting along even in the face of fundamentally different values, much less specific political stances. Not everyone is worth talking to, but I prefer to err on the side of trying to listen to and speak with as wide a spectrum of people as I can. Hell, maybe I am even wrong and could learn something.

On the other hand, there is a limit. At some point, people become so odious and morally reprehensible that they are just monsters, not respected opponents. It’s important to keep in our list of available actions the ability to simply oppose those who are irredeemably dangerous/evil/wrong. You don’t have to let Hitler eat in your restaurant.

This raises two issues that are not so easy to adjudicate. First, where do we draw the line? What are the criteria by which we can judge someone to have crossed over from “disagreed with” to “shunned”? I honestly don’t know. I tend to err on the side of not shunning people (in public spaces) until it becomes absolutely necessary, but I’m willing to have my mind changed about this. I also think the worry that this particular administration exhibits authoritarian tendencies that could lead to a catastrophe is not a completely silly one, and is at least worth considering seriously.

More importantly, if the argument is “moral monsters should just be shunned, not reasoned with or dealt with constructively,” we have to be prepared to be shunned ourselves by those who think that we’re moral monsters (and those people are out there).  There are those who think, for what they take to be good moral reasons, that abortion and homosexuality are unforgivable sins. If we think it’s okay for restaurant owners who oppose Trump to refuse service to members of his administration, we have to allow staunch opponents of e.g. abortion rights to refuse service to politicians or judges who protect those rights.

The issue becomes especially tricky when the category of “people who are considered to be morally reprehensible” coincides with an entire class of humans who have long been discriminated against, e.g. gays or transgender people. In my view it is bigoted and wrong to discriminate against those groups, but there exist people who find it a moral imperative to do so. A sensible distinction can probably be made between groups that we as a society have decided are worthy of protection and equal treatment regardless of an individual’s moral code, so it’s at least consistent to allow restaurant owners to refuse to serve specific people they think are moral monsters because of some policy they advocate, while still requiring that they serve members of groups whose behaviors they find objectionable.

The only alternative, as I see it, is to give up on the values of liberal toleration, and to simply declare that our personal moral views are unquestionably the right ones, and everyone should be judged by them. That sounds wrong, although we do in fact enshrine certain moral judgments in our legal codes (murder is bad) while leaving others up to individual conscience (whether you want to eat meat is up to you). But it’s probably best to keep that moral core that we codify into law as minimal and widely-agreed-upon as possible, if we want to live in a diverse society.

This would all be simpler if we didn’t have an administration in power that actively works to demonize immigrants and non-straight-white-Americans more generally. Tolerating the intolerant is one of the hardest tasks in a democracy.

 

 

by Sean Carroll at June 25, 2018 06:00 PM

June 24, 2018

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

7th Robert Boyle Summer School

This weekend saw the 7th Robert Boyle Summer School, an annual 3-day science festival in Lismore, Co. Waterford in Ireland. It’s one of my favourite conferences – a select number of talks on the history and philosophy of science, aimed at curious academics and the public alike, with lots of time for questions and discussion after each presentation.

The Irish-born scientist and aristocrat Robert Boyle   

Lismore Castle in Co. Waterford, the birthplace of Robert Boyle

Born in Lismore into a wealthy landowning family, Robert Boyle became one of the most important figures in the Scientific Revolution. A contemporary of Isaac Newton and Robert Hooke, he is recognized the world over for his scientific discoveries, his role in the rise of the Royal Society and his influence in promoting the new ‘experimental philosophy’ in science.

This year, the theme of the conference was 'What do we know – and how do we know it?'. There were many interesting talks, such as Boyle's Theory of Knowledge by Dr William Eaton, Associate Professor of Early Modern Philosophy at Georgia Southern University; The How, Who & What of Scientific Discovery by Paul Strathern, author of a great many books on scientists and philosophers such as the well-known Philosophers in 90 Minutes series; Scientific Enquiry and Brain State: Understanding the Nature of Knowledge by Professor William T. O'Connor, Head of Teaching and Research in Physiology at the University of Limerick Graduate Entry Medical School; and The Promise and Peril of Big Data by Timandra Harkness, well-known media presenter, comedian and writer. For physicists, there was a welcome opportunity to hear the well-known American philosopher of physics Robert P. Crease present the talk Science Denial: will any knowledge do? The full programme for the conference can be found here.

All in all, a hugely enjoyable summer school, culminating in a garden party in the grounds of Lismore castle, Boyle’s ancestral home. My own contribution was to provide the music for the garden party – a flute, violin and cello trio, playing the music of Boyle’s contemporaries, from Johann Sebastian Bach to Turlough O’ Carolan. In my view, the latter was a baroque composer of great importance whose music should be much better known outside Ireland.


Images from the garden party in the grounds of Lismore Castle

by cormac at June 24, 2018 08:19 PM

June 22, 2018

Jester - Resonaances

Both g-2 anomalies
Two months ago an experiment in Berkeley announced a new ultra-precise measurement of the fine structure constant α using interferometry techniques. This wasn't much noticed because the paper is not on arXiv, and moreover this kind of research is filed under metrology, which is easily confused with meteorology. So it's worth commenting on why precision measurements of α could be interesting for particle physics. What the Berkeley group really did was to measure the mass of the cesium-133 atom, achieving the relative accuracy of 4*10^-10, that is 0.4 parts per billion (ppb). With that result in hand, α can be determined after a cavalier rewriting of the high-school formula for the Rydberg constant, Ry = α^2 me c^2/2, which gives α^2 = 2 Ry/(me c^2) = [2 Ry/(mCs c^2)]*(mCs/me).
Everybody knows the first 3 digits of the Rydberg constant, Ry≈13.6 eV, but actually it is experimentally known with the fantastic accuracy of 0.006 ppb, and the electron-to-atom mass ratio has also been determined precisely. Thus the measurement of the cesium mass can be translated into a 0.2 ppb measurement of the fine structure constant: 1/α=137.035999046(27).

You may think that this kind of result could appeal only to a Pythonesque chartered accountant. But you would be wrong. First of all, the new result excludes α = 1/137 at 1 million sigma, dealing a mortal blow to the field of epistemological numerology. Perhaps more importantly, the result is relevant for testing the Standard Model. One place where precise knowledge of α is essential is in the calculation of the magnetic moment of the electron. Recall that the g-factor is defined as the proportionality constant between the magnetic moment and the angular momentum. For the electron we have, schematically, ge = 2(1 + α/2π + ...).
Experimentally, ge is one of the most precisely determined quantities in physics, with the most recent measurement quoting ae = 0.00115965218073(28), that is 0.0001 ppb accuracy on ge, or 0.2 ppb accuracy on ae. In the Standard Model, ge is calculable as a function of α and other parameters. In the classical approximation ge=2, while the one-loop correction proportional to the first power of α was already known in prehistoric times thanks to Schwinger. The dots above summarize decades of subsequent calculations, which now include O(α^5) terms, that is 5-loop QED contributions! Thanks to these heroic efforts (depicted in the film For a Few Diagrams More - a sequel to Kurosawa's Seven Samurai), the main theoretical uncertainty for the Standard Model prediction of ge is due to the experimental error on the value of α. The Berkeley measurement allows one to reduce the relative theoretical error on ae down to 0.2 ppb: ae = 0.00115965218161(23), which matches in magnitude the experimental error and improves by a factor of 3 the previous prediction based on the α measurement with rubidium atoms.
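Just as a quick sanity check of the orders of magnitude quoted above (only the leading Schwinger term, nothing like the real multi-loop machinery), one can do the arithmetic in a couple of lines:

import math

alpha = 1 / 137.035999046              # Berkeley-based value quoted above
a_e_schwinger = alpha / (2 * math.pi)  # one-loop QED: a_e = alpha/2pi + higher orders
a_e_measured = 0.00115965218073        # experimental value quoted above

print(f"alpha/2pi    = {a_e_schwinger:.11f}")
print(f"measured a_e = {a_e_measured:.11f}")
print(f"relative difference ~ {abs(a_e_measured - a_e_schwinger) / a_e_measured:.4f}")
# The one-loop term alone already reproduces a_e to about 0.15%; the remaining
# per-mille-level gap is what the 2- to 5-loop calculations account for.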

At the spiritual level, the comparison between the theory and experiment provides an impressive validation of quantum field theory techniques up to the 13th significant digit - an unimaginable  theoretical accuracy in other branches of science. More practically, it also provides a powerful test of the Standard Model. New particles coupled to the electron may contribute to the same loop diagrams from which ge is calculated, and could shift the observed value of ae away from the Standard Model predictions. In many models, corrections to the electron and muon magnetic moments are correlated. The latter famously deviates from the Standard Model prediction by 3.5 to 4 sigma, depending on who counts the uncertainties. Actually, if you bother to eye carefully the experimental and theoretical values of ae beyond the 10th significant digit you can see that they are also discrepant, this time at the 2.5 sigma level. So now we have two g-2 anomalies! In a picture, the situation can be summarized as follows:

If you're a member of the Holy Church of Five Sigma you can almost preach an unambiguous discovery of physics beyond the Standard Model. However, for most of us this is not the case yet. First, there is still some debate about the theoretical uncertainties entering the muon g-2 prediction. Second, while it is quite easy to fit each of the two anomalies separately, there seems to be no appealing model to fit both of them at the same time.  Take for example the very popular toy model with a new massive spin-1 Z' boson (aka the dark photon) kinetically mixed with the ordinary photon. In this case Z' has, much like the ordinary photon, vector-like and universal couplings to electron and muons. But this leads to a positive contribution to g-2, and it does not fit well the ae measurement which favors a new negative contribution. In fact, the ae measurement provides the most stringent constraint in part of the parameter space of the dark photon model. Conversely, a Z' boson with purely axial couplings to matter does not fit the data as it gives a negative contribution to g-2, thus making the muon g-2 anomaly worse. What might work is a hybrid model with a light Z' boson having lepton-flavor violating interactions: a vector coupling to muons and a somewhat smaller axial coupling to electrons. But constructing a consistent and realistic model along these lines is a challenge because of other experimental constraints (e.g. from the lack of observation of μ→eγ decays). Some food for thought can be found in this paper, but I'm not sure if a sensible model exists at the moment. If you know one you are welcome to drop a comment here or a paper on arXiv.

More excitement on this front is in store. The muon g-2 experiment in Fermilab should soon deliver first results which may confirm or disprove the muon anomaly. Further progress with the electron g-2 and fine-structure constant measurements is also expected in the near future. The biggest worry is that, if the accuracy improves by another two orders of magnitude, we will need to calculate six loop QED corrections...

by Mad Hatter (noreply@blogger.com) at June 22, 2018 11:04 PM

June 16, 2018

Tommaso Dorigo - Scientificblogging

On The Residual Brightness Of Eclipsed Jovian Moons
While preparing for another evening of observation of Jupiter's atmosphere with my faithful 16" dobsonian scope, I found out that the satellite Io will disappear behind the Jovian shadow tonight. This is a quite common phenomenon and not a very spectacular one, but still quite interesting to look forward to during a visual observation - the moon takes some time to fully disappear, so it is fun to follow the event.
This however got me thinking. A fully eclipsed Jovian moon should still be able to reflect back some light picked up from the other, still lit satellites - so it should not, after all, appear completely dark. Can a calculation be made of the effect? Of course - and it's not that difficult.
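Dorigo's own estimate is behind the link below; purely as an illustration of the kind of back-of-envelope arithmetic involved, here is a rough sketch that treats Europa as a fully lit, diffusely reflecting disk and ignores all the geometry details (the numbers and the simple (radius/distance)^2 scaling are my own crude assumptions, not his):

import math

# How bright is Io when lit only by sunlight reflected off Europa, compared to Io in direct sunlight?
# Very crude model: Europa as a fully lit disk of geometric albedo ~0.67; phase angles, limb
# darkening and the other satellites all ignored.
R_europa_km = 1560.0      # Europa's radius
d_io_europa_km = 2.5e5    # Io-Europa separation near closest approach, roughly 250,000 km
albedo_europa = 0.67

# Reflected flux at Io relative to the direct solar flux at Jupiter's distance:
ratio = albedo_europa * (R_europa_km / d_io_europa_km) ** 2
magnitudes = -2.5 * math.log10(ratio)
print(f"Europa-shine on Io ~ {ratio:.1e} of direct sunlight, i.e. about {magnitudes:.0f} magnitudes fainter")
# comes out around 3e-5 of direct sunlight, some 11 magnitudes down, before even folding in
# Io's own albedo or the details of the eclipse geometry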

read more

by Tommaso Dorigo at June 16, 2018 04:47 PM

June 12, 2018

Axel Maas - Looking Inside the Standard Model

How to test an idea
As you may have guessed from reading through the blog, our work is centered around a change of paradigm: That there is a very intriguing structure of the Higgs and the W/Z bosons. And that what we observe in the experiments is actually more complicated than what we usually assume. That they are not just essentially point-like objects.

This is a very bold claim, as it touches upon very basic things in the standard model of particle physics. And the interpretation of experiments. However, it is at the same time a necessary consequence if one takes the underlying more formal theoretical foundation seriously. The reason that there is not a huge clash is that the standard model is very special. Because of this both pictures give almost the same prediction for experiments. This can also be understood quantitatively. That is what I have written a review about. It can be imagined in this way:

Thus, the actual particle, which we observe, and call the Higgs is actually a complicated object made from two Higgs particles. However, one of those is so much eclipsed by the other that it looks like just a single one. And a very tiny correction to it.

So far, this does not seem to be something where it is necessary to worry about.

However, there are many and good reasons to believe that the standard model is not the end of particle physics. There are many, many blogs out there, which explain the reasons for this much better than I do. However, our research provides hints that what works so nicely in the standard model, may work much less so in some extensions of the standard model. That there the composite nature makes huge differences for experiments. This was what came out of our numerical simulations. Of course, these are not perfect. And, after all, unfortunately we did not yet discover anything beyond the standard model in experiments. So we cannot test our ideas against actual experiments, which would be the best thing to do. And without experimental support such an enormous shift in paradigm seems to be a bit far fetched. Even if our numerical simulations, which are far from perfect, support the idea. Formal ideas supported by numerical simulations is just not as convincing as experimental confirmation.

So, is this hopeless? Do we have to wait for new physics to make its appearance?

Well, not yet. In the figure above, there was 'something'. So the ideas also make a statement that even within the standard model there should be a difference. The only question is, what is really the value of a 'little bit'? So far, experiments have not shown any deviations from the usual picture. So 'little bit' needs indeed to be really rather small. But we have a calculation prescription for this 'little bit' for the standard model. So, at the very least, what we can do is to make a calculation for this 'little bit' in the standard model. We should then see if the value of 'little bit' may already be so large that the basic idea is ruled out, because we are in conflict with experiment. If this is the case, this would raise a lot of questions about the underlying theory, but well, experiment rules. And thus, we would need to go back to the drawing board, and get a better understanding of the theory.

Or, we get something which is in agreement with current experiments, because it is smaller than the current experimental precision. But then we can make a statement about how much better experimental precision needs to become to see the difference. Hopefully the answer will not be so demanding that it becomes impossible within the next couple of decades. But this we will see at the end of the calculation. And then we can decide whether we will get an experimental test.

Doing the calculations is actually not so simple. On the one hand, they are technically challenging, even though our method for it is rather well under control. But it will also not yield perfect results, but hopefully good enough. Also, it depends strongly on the type of experiment how simple the calculations are. We did a first few steps, though for a type of experiment not (yet) available, but hopefully in about twenty years. There we saw that not only the type of experiment, but also the type of measurement matters. For some measurements the effect will be much smaller than for others. But we are not yet able to predict this before doing the calculation. There, we need still much better understanding of the underlying mathematics. That we will hopefully gain by doing more of these calculations. This is a project I am currently pursuing with a number of master students for various measurements and at various levels. Hopefully, in the end we get a clear set of predictions. And then we can ask our colleagues at experiments to please check these predictions. So, stay tuned.

By the way: This is the standard cycle for testing new ideas and theories. Have an idea. Check that it fits with all existing experiments. And yes, this may be very, very many. If your idea passes this test: Great! There is actually a chance that it can be right. If not, you have to understand why it does not fit. If it can be fixed, fix it, and start again. Or have a new idea. And, at any rate, if it cannot be fixed, have a new idea. When you have got an idea which works with everything we know, use it to make a prediction where you get a difference to our current theories. By this you provide an experimental test, which can decide whether your idea is the better one. If yes: Great! You have just rewritten our understanding of nature. If not: Well, go back to fix it or have a new idea. Of course, it is best if we already have an experiment which does not fit with our current theories. But there, at this stage, we are a little short of such results. That may change again. And if your theory has no predictions which can be tested experimentally in any foreseeable future? Well, how to deal with that is a good question, and there is not yet a consensus on how to proceed.

by Axel Maas (noreply@blogger.com) at June 12, 2018 10:49 AM

June 10, 2018

Tommaso Dorigo - Scientificblogging

Modeling Issues Or New Physics ? Surprises From Top Quark Kinematics Study
Simulation, noun:
1. Imitation or enactment
2. The act or process of pretending; feigning.
3. An assumption or imitation of a particular appearance or form; counterfeit; sham.

Well, high-energy physics is all about simulations. 

We have a theoretical model that predicts the outcome of the very energetic particle collisions we create in the core of our giant detectors, but we only have approximate descriptions of the inputs to the theoretical model, so we need simulations. 

read more

by Tommaso Dorigo at June 10, 2018 11:18 AM

June 09, 2018

Jester - Resonaances

Dark Matter goes sub-GeV
It must have been great to be a particle physicist in the 1990s. Everything was simple and clear then. They knew that, at the most fundamental level, nature was described by one of the five superstring theories which, at low energies, reduced to the Minimal Supersymmetric Standard Model. Dark matter also had a firm place in this narrative, being identified with the lightest neutralino of the MSSM. This simple-minded picture strongly influenced the experimental program of dark matter detection, which was almost entirely focused on the so-called WIMPs in the 1 GeV - 1 TeV mass range. Most of the detectors, including the current leaders XENON and LUX, are blind to sub-GeV dark matter, as slow and light incoming particles are unable to transfer a detectable amount of energy to the target nuclei.

Sometimes progress consists in realizing that you know nothing Jon Snow. The lack of new physics at the LHC invalidates most of the historical motivations for WIMPs. Theoretically, the mass of the dark matter particle could be anywhere between 10^-30 GeV and 10^19 GeV. There are myriads of models positioned anywhere in that range, and it's hard to argue with a straight face that any particular one is favored. We now know that we don't know what dark matter is, and that we should better search in many places. If anything, the small-scale problem of the 𝞚CDM cosmological model can be interpreted as a hint against the boring WIMPS and in favor of light dark matter. For example, if it turns out that dark matter has significant (nuclear size) self-interactions, that can only be realized with sub-GeV particles. 
                       
It takes some time for experiment to catch up with theory, but the process is already well in motion. There is some fascinating progress on the front of ultra-light axion dark matter, which deserves a separate post. Here I want to highlight the ongoing  developments in direct detection of dark matter particles with masses between MeV and GeV. Until recently, the only available constraint in that regime was obtained by recasting data from the XENON10 experiment - the grandfather of the currently operating XENON1T.  In XENON detectors there are two ingredients of the signal generated when a target nucleus is struck:  ionization electrons and scintillation photons. WIMP searches require both to discriminate signal from background. But MeV dark matter interacting with electrons could eject electrons from xenon atoms without producing scintillation. In the standard analysis, such events would be discarded as background. However,  this paper showed that, recycling the available XENON10 data on ionization-only events, one can exclude dark matter in the 100 MeV ballpark with the cross section for scattering on electrons larger than ~0.01 picobarn (10^-38 cm^2). This already has non-trivial consequences for concrete models; for example, a part of the parameter space of milli-charged dark matter is currently best constrained by XENON10.   

It is remarkable that so much useful information can be extracted by basically misusing data collected for another purpose (earlier this year the DarkSide-50 recast their own data in the same manner, excluding another chunk of the parameter space).  Nevertheless, dedicated experiments will soon  be taking over. Recently, two collaborations published first results from their prototype detectors:  one is SENSEI, which uses 0.1 gram of silicon CCDs, and the other is SuperCDMS, which uses 1 gram of silicon semiconductor.  Both are sensitive to eV energy depositions, thanks to which they can extend the search region to lower dark matter mass regions, and set novel limits in the virgin territory between 0.5 and 5 MeV.  A compilation of the existing direct detection limits is shown in the plot. As you can see, above 5 MeV the tiny prototypes cannot yet beat the XENON10 recast. But that will certainly change as soon as full-blown detectors are constructed, after which the XENON10 sensitivity should be improved by several orders of magnitude.
     
Should we be restless waiting for these results? Well, for any single experiment the chance of finding nothing is immensely larger than that of finding something. Nevertheless, the technical progress and the widening scope of searches offer some hope that the dark matter puzzle may be solved soon.

by Mad Hatter (noreply@blogger.com) at June 09, 2018 05:39 PM

June 08, 2018

Jester - Resonaances

Massive Gravity, or You Only Live Twice
Proving Einstein wrong is the ultimate ambition of every crackpot and physicist alike. In particular, Einstein's theory of gravitation - general relativity - has been a victim of constant harassment. That is to say, it is trivial to modify gravity at large energies (short distances), for example by embedding it in string theory, but it is notoriously difficult to change its long distance behavior. At the same time, motivations to keep trying go beyond intellectual gymnastics. For example, the accelerated expansion of the universe may be a manifestation of modified gravity (rather than of a small cosmological constant).

In Einstein's general relativity, gravitational interactions are mediated by a massless spin-2 particle - the so-called graviton. This is what gives it its hallmark properties: the long range and the universality. One obvious way to screw with Einstein is to add mass to the graviton, as entertained already in 1939 by Fierz and Pauli. The Particle Data Group quotes the constraint m ≤ 6*10^−32 eV, so we are talking about the De Broglie wavelength comparable to the size of the observable universe. Yet even that teeny mass may cause massive troubles. In 1970 the Fierz-Pauli theory was killed by the van Dam-Veltman-Zakharov (vDVZ) discontinuity. The problem stems from the fact that a massive spin-2 particle has 5 polarization states (0,±1,±2) unlike a massless one which has only two (±2). It turns out that the polarization-0 state couples to matter with the similar strength as the usual polarization ±2 modes, even in the limit where the mass goes to zero, and thus mediates an additional force which differs from the usual gravity. One finds that, in massive gravity, light bending would be 25% smaller, in conflict with the very precise observations of stars' deflection around the Sun. vDV concluded that "the graviton has rigorously zero mass". Dead for the first time...           

The second coming was heralded soon after by Vainshtein, who noticed that the troublesome polarization-0 mode can be shut off in the proximity of stars and planets. This can happen in the presence of graviton self-interactions of a certain type. Technically, what happens is that the polarization-0 mode develops a background value around massive sources which, through the derivative self-interactions, renormalizes its kinetic term and effectively diminishes its interaction strength with matter. See here for a nice review and more technical details. Thanks to the Vainshtein mechanism, the usual predictions of general relativity are recovered around large massive sources, which is exactly where we can best measure gravitational effects. The possible self-interactions leading to a healthy theory without ghosts have been classified, and go under the name of the dRGT massive gravity.

There is however one inevitable consequence of the Vainshtein mechanism. The graviton self-interaction strength grows with energy, and at some point becomes inconsistent with the unitarity limits that every quantum theory should obey. This means that massive gravity is necessarily an effective theory with a limited validity range and has to be replaced by a more fundamental theory at some cutoff scale 𝞚. This is of course nothing new for gravity: the usual Einstein gravity is also an effective theory valid at most up to the Planck scale MPl~10^19 GeV.  But for massive gravity the cutoff depends on the graviton mass and is much smaller for realistic theories. At best, the cutoff is 𝞚max = (m^2 MPl)^1/3, where m is the graviton mass.
So the massive gravity theory in its usual form cannot be used at distance scales shorter than ~300 km. For particle physicists that would be a disaster, but for cosmologists this is fine, as one can still predict the behavior of galaxies, stars, and planets. While the theory certainly cannot be used to describe the results of table top experiments,  it is relevant for the  movement of celestial bodies in the Solar System. Indeed, lunar laser ranging experiments or precision studies of Jupiter's orbit are interesting probes of the graviton mass.
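Plugging in numbers is a quick way to see where the ~300 km comes from; a rough order-of-magnitude sketch (the reduced Planck mass value, the m ~ 10^-32 eV input and the conversion via hbar*c are my own choices):

# Order-of-magnitude check of the ~300 km statement above.
M_Pl_eV = 2.4e27          # reduced Planck mass, ~2.4*10^18 GeV, expressed in eV
m_eV = 1e-32              # graviton mass at the experimental upper limit, in eV
hbar_c_eV_m = 1.973e-7    # hbar*c in eV*m, to convert an energy into an inverse length

Lambda_max = (m_eV**2 * M_Pl_eV) ** (1.0 / 3.0)
length_km = hbar_c_eV_m / Lambda_max / 1e3
print(f"Lambda_max ~ {Lambda_max:.1e} eV  ->  1/Lambda_max ~ {length_km:.0f} km")
# gives Lambda_max ~ 6e-13 eV, i.e. a distance scale of roughly 300 km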

Now comes the latest twist in the story. Some time ago this paper showed that not everything is allowed in effective theories. Assuming the full theory is unitary, causal and local implies non-trivial constraints on the possible interactions in the low-energy effective theory. These techniques are suitable to constrain, via dispersion relations, derivative interactions of the kind required by the Vainshtein mechanism. Applying them to the dRGT gravity one finds that it is inconsistent to assume the theory is valid all the way up to 𝞚max. Instead, it must be replaced by a more fundamental theory already at a much lower cutoff scale, parameterized as 𝞚 = g*^1/3 𝞚max (the parameter g* is interpreted as the coupling strength of the more fundamental theory). The allowed parameter space in the g*-m plane is shown in this plot:

Massive gravity must live in the lower left corner, outside the gray area excluded theoretically and where the graviton mass satisfies the experimental upper limit m~10^−32 eV. This implies g* ≼ 10^-10, and thus the validity range of the theory is some 3 orders of magnitude lower than 𝞚max. In other words, massive gravity is not a consistent effective theory at distance scales below ~1 million km, and thus cannot be used to describe the motion of falling apples, GPS satellites or even the Moon. In this sense, it's not much of a competition to, say, Newton. Dead for the second time.
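The same kind of arithmetic reproduces the numbers in this paragraph (again just a rough sketch, with g* taken at the quoted bound):

# With g* ~ 1e-10, the cutoff is lowered by g*^(1/3) relative to Lambda_max,
# so the shortest usable distance grows by the inverse factor, starting from ~300 km.
g_star = 1e-10
suppression = g_star ** (1.0 / 3.0)       # ~5e-4, a bit more than 3 orders of magnitude
min_distance_km = 300.0 / suppression
print(f"cutoff lowered by ~{suppression:.1e}; theory only usable above ~{min_distance_km:.1e} km")
# roughly 6e5 km, i.e. the "~1 million km" quoted above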

Is this the end of the story? For the third coming we would need a more general theory with additional light particles beyond the massive graviton, which is consistent theoretically in a larger energy range, realizes the Vainshtein mechanism, and is in agreement with the current experimental observations. This is hard but not impossible to imagine. Whatever the outcome, what I like in this story is the role of theory in driving the progress, which is rarely seen these days. In the process, we have understood a lot of interesting physics whose relevance goes well beyond one specific theory. So the trip was certainly worth it, even if we find ourselves back at the departure point.

by Mad Hatter (noreply@blogger.com) at June 08, 2018 08:35 AM

June 07, 2018

Jester - Resonaances

Can MiniBooNE be right?
The experimental situation in neutrino physics is confusing. On one hand, a host of neutrino experiments has established a consistent picture where the neutrino mass eigenstates are mixtures of the 3 Standard Model neutrino flavors νe, νμ, ντ. The measured mass differences between the eigenstates are Δm12^2 ≈ 7.5*10^-5 eV^2 and Δm13^2 ≈ 2.5*10^-3 eV^2, suggesting that all Standard Model neutrinos have masses below 0.1 eV. That is well in line with cosmological observations which find that the radiation budget of the early universe is consistent with the existence of exactly 3 neutrinos with the sum of the masses less than 0.2 eV. On the other hand, several rogue experiments refuse to conform to the standard 3-flavor picture. The most severe anomaly is the appearance of electron neutrinos in a muon neutrino beam observed by the LSND and MiniBooNE experiments.
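A quick back-of-envelope check of those mass statements (assuming a normal ordering with a nearly massless lightest state, which is my simplifying assumption here, not an experimental result):

import math

dm21_sq = 7.5e-5   # eV^2
dm31_sq = 2.5e-3   # eV^2
m2 = math.sqrt(dm21_sq)   # ~0.009 eV if the lightest state is essentially massless
m3 = math.sqrt(dm31_sq)   # ~0.05 eV
print(f"m2 ~ {m2:.3f} eV, m3 ~ {m3:.3f} eV, minimal sum ~ {m2 + m3:.2f} eV")
# each mass is below 0.1 eV, and the minimal sum (~0.06 eV) sits comfortably
# under the ~0.2 eV cosmological bound quoted above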


This story begins in the previous century with the LSND experiment in Los Alamos, which claimed to observe νμ→νe antineutrino oscillations with 3.8σ significance.  This result was considered controversial from the very beginning due to limitations of the experimental set-up. Moreover, it was inconsistent with the standard 3-flavor picture which, given the masses and mixing angles measured by other experiments, predicted that νμ→νe oscillation should be unobservable in short-baseline (L ≼ km) experiments. The MiniBooNE experiment in Fermilab was conceived to conclusively prove or disprove the LSND anomaly. To this end, a beam of mostly muon neutrinos or antineutrinos with energies E~1 GeV is sent to a detector at a distance L~500 meters away. In general, neutrinos can change their flavor with the probability oscillating as P ~ sin^2(Δm^2 L/4E). If the LSND excess is really due to neutrino oscillations, one expects to observe electron neutrino appearance in the MiniBooNE detector given that L/E is similar in the two experiments. Originally, MiniBooNE was hoping to see a smoking gun in the form of an electron neutrino excess oscillating as a function of L/E, that is peaking at intermediate energies and then decreasing towards lower energies (possibly with several wiggles). That didn't happen. Instead, MiniBooNE finds an excess increasing towards low energies with a shape similar to the backgrounds. Thus the confusion lingers on: the LSND anomaly has neither been killed nor robustly confirmed.
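In the usual practical units the two-flavor formula above reads P = sin^2(2θ) sin^2(1.27 Δm^2[eV^2] L[km]/E[GeV]); a minimal sketch with purely illustrative parameters (not the experiments' official numbers) shows why the two set-ups probe the same oscillation:

import math

def p_appearance(sin2_2theta, dm2_eV2, L_km, E_GeV):
    # Two-flavor short-baseline appearance probability in practical units
    return sin2_2theta * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

# Illustrative choices: a ~1 eV^2 splitting and sin^2(2*theta) = 0.01,
# evaluated at LSND-like and MiniBooNE-like baselines and energies.
for name, L_km, E_GeV in [("LSND-like", 0.03, 0.04), ("MiniBooNE-like", 0.5, 0.8)]:
    print(name, f"L/E ~ {L_km / E_GeV:.2f} km/GeV,",
          f"P ~ {p_appearance(0.01, 1.0, L_km, E_GeV):.4f}")
# Both sit at similar L/E, which is why MiniBooNE was expected to either confirm
# or kill the LSND signal.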

In spite of these doubts, the LSND and MiniBooNE anomalies continue to arouse interest. This is understandable: as the results do not fit the 3-flavor framework, if confirmed they would prove the existence of new physics beyond the Standard Model. The simplest fix would be to introduce a sterile neutrino νs with the mass in the eV ballpark, in which case MiniBooNE would be observing the νμ→νs→νe oscillation chain. With the recent MiniBooNE update the evidence for the electron neutrino appearance increased to 4.8σ, which has stirred some commotion on Twitter and in the blogosphere. However, I find the excitement a bit misplaced. The anomaly is not really new: similar results showing a 3.8σ excess of νe-like events were already published in 2012.  The increase of the significance is hardly relevant: at this point we know anyway that the excess is not a statistical fluke, while a systematic effect due to underestimated backgrounds would also lead to a growing anomaly. If anything, there are now fewer reasons than in 2012 to believe in the sterile neutrino origin of the MiniBooNE anomaly, as I will argue in the following.

What has changed since 2012? First, there are new constraints on νe appearance from the OPERA experiment (yes, this OPERA) who did not see any excess νe in the CERN-to-Gran-Sasso νμ beam. This excludes a large chunk of the relevant parameter space corresponding to large mixing angles between the active and sterile neutrinos. From this point of view, the MiniBooNE update actually adds more stress on the sterile neutrino interpretation by slightly shifting the preferred region towards larger mixing angles...  Nevertheless, a not-too-horrible fit to all appearance experiments can still be achieved in the region with Δm^2~0.5 eV^2 and the mixing angle sin^2(2θ) of order 0.01.     

Next, the cosmological constraints have become more stringent. The CMB observations by the Planck satellite do not leave room for an additional neutrino species in the early universe. But for the parameters preferred by LSND and MiniBooNE, the sterile neutrino would be abundantly produced in the hot primordial plasma, thus violating the Planck constraints. To avoid it, theorists need to deploy a battery of  tricks (for example, large sterile-neutrino self-interactions), which makes realistic models rather baroque.

But the killer punch is delivered by disappearance analyses. Benjamin Franklin famously said that only two things in this world were certain: death and probability conservation. Thus whenever an electron neutrino appears in a νμ beam, a muon neutrino must disappear. However, the latter process is severely constrained by long-baseline neutrino experiments, and recently the limits have been further strengthened thanks to the MINOS and IceCube collaborations. A recent combination of the existing disappearance results is available in this paper. In the 3+1 flavor scheme, the probability of a muon neutrino transforming into an electron one in a short-baseline experiment is P(νμ→νe) ≈ 4|Ue4|^2 |Uμ4|^2 sin^2(Δm41^2 L/4E), where U is the 4x4 neutrino mixing matrix. The Uμ4 matrix element also controls the νμ survival probability, P(νμ→νμ) ≈ 1 − 4|Uμ4|^2 (1 − |Uμ4|^2) sin^2(Δm41^2 L/4E).
The νμ disappearance data from MINOS and IceCube imply |Uμ4|≼0.1, while |Ue4|≼0.25 from solar neutrino observations. All in all, the disappearance results imply that the effective mixing angle sin^2(2θ) controlling the νμ→νs→νe oscillation must be much smaller than the 0.01 required to fit the MiniBooNE anomaly. The disagreement between the appearance and disappearance data had already existed before, but was actually made worse by the MiniBooNE update.
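The arithmetic behind that statement is simple enough to spell out (a naive sketch using the rounded limits quoted above and the effective amplitude sin^2(2θ) = 4|Ue4|^2|Uμ4|^2):

# With |U_mu4| <~ 0.1 and |U_e4| <~ 0.25, the effective appearance amplitude is capped at
U_mu4_max, U_e4_max = 0.1, 0.25
sin2_2theta_max = 4 * U_e4_max**2 * U_mu4_max**2
print(f"sin^2(2 theta_eff) <~ {sin2_2theta_max:.4f}   (vs ~0.01 needed for MiniBooNE)")
# i.e. at most ~0.0025, a factor of a few below what the appearance anomaly wants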
So the hypothesis of a 4th sterile neutrino does not stand scrutiny as an explanation of the MiniBooNE anomaly. It does not mean that there is no other possible explanation (more sterile neutrinos? non-standard interactions? neutrino decays?). However, any realistic model will have to delve deep into the crazy side in order to satisfy the constraints from other neutrino experiments, flavor physics, and cosmology. Fortunately, the current confusing situation should not last forever. The MiniBooNE photon background from π0 decays may be clarified by the ongoing MicroBooNE experiment. On the timescale of a few years the controversy should be closed by the SBN program in Fermilab, which will add one near and one far detector to the MicroBooNE beamline. Until then... years of painful experience have taught us to assign a high prior to the Standard Model hypothesis. Currently, by far the most plausible explanation of the existing data is an experimental error on the part of the MiniBooNE collaboration.

by Mad Hatter (noreply@blogger.com) at June 07, 2018 01:20 PM

June 01, 2018

Jester - Resonaances

WIMPs after XENON1T
After today's update from the XENON1T experiment, the situation on the front of direct detection of WIMP dark matter is as follows

WIMP can be loosely defined as a dark matter particle with mass in the 1 GeV - 10 TeV range and significant interactions with ordinary matter. Historically, WIMP searches have stimulated enormous interest because this type of dark matter can be easily realized in models with low scale supersymmetry. Now that we are older and wiser, many physicists would rather put their money on other realizations, such as axions, MeV dark matter, or primordial black holes. Nevertheless, WIMPs remain a viable possibility that should be further explored.
 
To detect WIMPs heavier than a few GeV, currently the most successful strategy is to use huge detectors filled with xenon atoms, hoping one of them is hit by a passing dark matter particle. Xenon1T beats the competition from the LUX and Panda-X experiments because it has a bigger tank. Technologically speaking, we have come a long way in the last 30 years. XENON1T is now sensitive to 40 GeV WIMPs interacting with nucleons with the cross section of 40 yoctobarn (1 yb = 10^-12 pb = 10^-48 cm^2). This is 6 orders of magnitude better than what the first direct detection experiment in the Homestake mine could achieve back in the 80s. Compared to last year, the limit is better by a factor of two at the most sensitive mass point. At high mass the improvement is somewhat smaller than expected due to a small excess of events observed by XENON1T, which is probably just a 1 sigma upward fluctuation of the background.

What we are learning about WIMPs is how they can (or cannot) interact with us. Of course, at this point in the game we don't see qualitative progress, but rather incremental quantitative improvements. One possible scenario is that WIMPs experience one of the Standard Model forces,  such as the weak or the Higgs force. The former option is strongly constrained by now. If WIMPs had interacted in the same way as our neutrino does, that is by exchanging a Z boson,  it would have been found in the Homestake experiment. Xenon1T is probing models where the dark matter coupling to the Z boson is suppressed by a factor cχ ~ 10^-3 - 10^-4 compared to that of an active neutrino. On the other hand, dark matter could be participating in weak interactions only by exchanging W bosons, which can happen for example when it is a part of an SU(2) triplet. In the plot you can see that XENON1T is approaching but not yet excluding this interesting possibility. As for models using the Higgs force, XENON1T is probing the (subjectively) most natural parameter space where WIMPs couple with order one strength to the Higgs field. 

And the arms race continues. The search in XENON1T will go on until the end of this year, although at this point a discovery is extremely unlikely. Further progress is expected on a timescale of a few years thanks to the next generation xenon detectors XENONnT and LUX-ZEPLIN, which should achieve yoctobarn sensitivity. DARWIN may be the ultimate experiment along these lines, in the sense that it will reach the irreducible background from atmospheric neutrinos (and there is no prefix smaller than yocto anyway), after which new detection techniques will be needed. For dark matter mass closer to 1 GeV, several orders of magnitude of pristine parameter space will be covered by the SuperCDMS experiment. Until then we are kept in suspense. Is dark matter made of WIMPs? And if yes, does it stick out above the neutrino sea?

by Mad Hatter (noreply@blogger.com) at June 01, 2018 05:30 PM

Tommaso Dorigo - Scientificblogging

MiniBoone Confirms Neutrino Anomaly
Neutrinos, the most mysterious and fascinating of all elementary particles, continue to puzzle physicists. 20 years after the experimental verification of a long-debated effect whereby the three neutrino species can "oscillate", changing their nature by turning one into the other as they propagate in vacuum and in matter, the jury is still out to decide what really is the matter with them. And a new result by the MiniBoone collaboration is stirring waters once more.

read more

by Tommaso Dorigo at June 01, 2018 12:49 PM

May 26, 2018

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

A festschrift at UCC

One of my favourite academic traditions is the festschrift, a conference convened to honour the contribution of a senior academic. In a sense, it's academia's version of an Oscar for lifetime achievement, as scholars from all around the world gather to pay tribute to their former mentor, colleague or collaborator.

Festschrifts tend to be very stimulating meetings, as the diverging careers of former students and colleagues typically make for a diverse set of talks. At the same time, there is usually a unifying theme based around the specialism of the professor being honoured.

And so it was at NIALLFEST this week, as many of the great and the good from the world of Einstein’s relativity gathered at University College Cork to pay tribute to Professor Niall O’Murchadha, a theoretical physicist in UCC’s Department of Physics noted internationally for seminal contributions to general relativity. Some measure of Niall’s influence can be seen from the number of well-known theorists at the conference, including major figures such as Bob Wald, Bill Unruh, Edward Malec and Kip Thorne (the latter was recently awarded the Nobel Prize in Physics for his contribution to the detection of gravitational waves). The conference website can be found here and the programme is here.


University College Cork: probably the nicest college campus in Ireland

As expected, we were treated to a series of high-level talks on diverse topics, from black hole collapse to analysis of high-energy jets from active galactic nuclei, from the initial value problem in relativity to the search for dark matter (slides for my own talk can be found here). To pick one highlight, Kip Thorne’s reminiscences of the forty-year search for gravitational waves made for a fascinating presentation, from his description of early designs of the LIGO interferometer to the challenge of getting funding for early prototypes – not to mention his prescient prediction that the most likely chance of success was the detection of a signal from the merger of two black holes.

All in all, a very stimulating conference. Most entertaining of all were the speakers’ recollections of Niall’s working methods and his interactions with students and colleagues over the years. Like a great piano teacher of old, one great professor leaves a legacy of critical thinkers dispersed around the world, and their students in turn inspire the next generation!

 

by cormac at May 26, 2018 12:16 AM

May 21, 2018

Andrew Jaffe - Leaves on the Line

Leon Lucy, R.I.P.

I have the unfortunate duty of using this blog to announce the death a couple of weeks ago of Professor Leon B Lucy, who had been a Visiting Professor working here at Imperial College from 1998.

Leon got his PhD in the early 1960s at the University of Manchester, and after postdoctoral positions in Europe and the US, worked at Columbia University and the European Southern Observatory over the years, before coming to Imperial. He made significant contributions to the study of the evolution of stars, understanding in particular how they lose mass over the course of their evolution, and how very close binary stars interact and evolve inside their common envelope of hot gas.

Perhaps most importantly, early in his career Leon realised how useful computers could be in astrophysics. He made two major methodological contributions to astrophysical simulations. First, he realised that by simulating randomised trajectories of single particles, he could take into account more of the physical processes that occur inside stars. This is now called “Monte Carlo Radiative Transfer” (scientists often use the term “Monte Carlo” — after the European gambling capital — for techniques using random numbers). He also invented the technique now called smoothed-particle hydrodynamics, which models gases and fluids as aggregates of pseudo-particles; it is now applied to models of stars, galaxies, and the large-scale structure of the Universe, and has found many uses outside astrophysics as well.
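To make the “random numbers” idea concrete, here is a toy Monte Carlo radiative-transfer sketch (a generic slab-scattering example of my own, not Lucy’s actual code, and it assumes NumPy): photons scatter isotropically through a uniform slab, and we simply count how many make it out the far side.

    import numpy as np

    rng = np.random.default_rng(0)

    def escape_fraction(tau_slab, n_photons=100_000):
        # Photons enter at optical depth 0, initially moving straight in (mu = 1).
        escaped = 0
        for _ in range(n_photons):
            tau, mu = 0.0, 1.0
            while 0.0 <= tau <= tau_slab:
                tau += mu * rng.exponential(1.0)   # optical depth travelled to the next scattering
                mu = rng.uniform(-1.0, 1.0)        # isotropic scattering: pick a new direction cosine
            if tau > tau_slab:                     # escaped through the far side
                escaped += 1
        return escaped / n_photons

    print(escape_fraction(tau_slab=1.0))           # transmitted fraction for a slab of optical depth 1

Each photon’s history is random, but the average over many photons converges to the transmission you would otherwise have to compute by solving the radiative transfer equation directly; that trade-off is the essence of the Monte Carlo approach.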

Leon’s other major numerical contributions comprise advanced techniques for interpreting the complicated astronomical data we get from our telescopes. In this realm, he was most famous for developing the methods, now known as Lucy-Richardson deconvolution, that were used to correct the distorted images from the Hubble Space Telescope before NASA was able to send a team of astronauts to install corrective optics in the early 1990s.
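The Lucy-Richardson update itself is simple enough to sketch in a few lines. The following is a generic 1-D toy in Python/NumPy, assuming a known, normalised point-spread function psf; it is of course not the actual HST pipeline code.

    import numpy as np

    def richardson_lucy(observed, psf, iterations=30):
        # Iteratively refine a non-negative estimate of the unblurred signal.
        estimate = np.full(observed.shape, observed.mean(), dtype=float)  # flat starting guess
        psf_mirror = psf[::-1]
        for _ in range(iterations):
            blurred = np.convolve(estimate, psf, mode="same")
            ratio = observed / np.maximum(blurred, 1e-12)                 # guard against division by zero
            estimate = estimate * np.convolve(ratio, psf_mirror, mode="same")
        return estimate

Blurring a sharp test signal with a Gaussian psf and feeding it back through richardson_lucy recovers much of the lost detail; the iteration keeps the estimate non-negative and, for a normalised psf, approximately conserves total flux, which is part of why it suited astronomical images so well.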

For all of this work Leon was awarded the Gold Medal of the Royal Astronomical Society in 2000. Since then, Leon kept working on data analysis and stellar astrophysics — even during his illness, he asked me to help organise the submission and editing of what turned out to be his final papers, on extracting information on binary-star orbits and (a subject dear to my heart) the statistics of testing scientific models.

Until the end of last year, Leon was a regular presence here at Imperial, always ready to contribute an occasionally curmudgeonly but always insightful comment on the science (and sociology) of nearly any topic in astrophysics. We hope that we will be able to appropriately memorialise his life and work here at Imperial and elsewhere. He is survived by his wife and daughter. He will be missed.

by Andrew at May 21, 2018 09:27 AM

May 14, 2018

Sean Carroll - Preposterous Universe

Intro to Cosmology Videos

In completely separate video news, here are videos of lectures I gave at CERN several years ago: “Cosmology for Particle Physicists” (May 2005). These are slightly technical — at the very least they presume you know calculus and basic physics — but are still basically accurate despite their age.

  1. Introduction to Cosmology
  2. Dark Matter
  3. Dark Energy
  4. Thermodynamics and the Early Universe
  5. Inflation and Beyond

Update: I originally linked these from YouTube, but apparently they were swiped from this page at CERN, and have been taken down from YouTube. So now I’m linking directly to the CERN copies. Thanks to commenters Bill Schempp and Matt Wright.

by Sean Carroll at May 14, 2018 07:09 PM

May 10, 2018

Sean Carroll - Preposterous Universe

User-Friendly Naturalism Videos

Some of you might be familiar with the Moving Naturalism Forward workshop I organized way back in 2012. For two and a half days, an interdisciplinary group of naturalists (in the sense of “not believing in the supernatural”) sat around to hash out the following basic question: “So we don’t believe in God, what next?” How do we describe reality, how can we be moral, what are free will and consciousness, those kinds of things. Participants included Jerry Coyne, Richard Dawkins, Terrence Deacon, Simon DeDeo, Daniel Dennett, Owen Flanagan, Rebecca Newberger Goldstein, Janna Levin, Massimo Pigliucci, David Poeppel, Nicholas Pritzker, Alex Rosenberg, Don Ross, and Steven Weinberg.

Happily we recorded all of the sessions to video, and put them on YouTube. Unhappily, those were just unedited proceedings of each session — so ten videos, at least an hour and a half each, full of gems but without any very clear way to find them if you weren’t patient enough to sift through the entire thing.

No more! Thanks to the heroic efforts of Gia Mora, the proceedings have been edited down to a number of much more accessible and content-centered highlights. There are over 80 videos (!), with a median length of maybe 5 minutes, though they range up to about 20 minutes and down to less than one. Each video centers on a particular idea, theme, or point of discussion, so you can dive right into whatever particular issues you may be interested in. Here, for example, is a conversation on “Mattering and Secular Communities,” featuring Rebecca Goldstein, Dan Dennett, and Owen Flanagan.

The videos can be seen on the workshop web page, or on my YouTube channel. They’re divided into categories:

A lot of good stuff in there. Enjoy!

by Sean Carroll at May 10, 2018 02:48 PM