Particle Physics Planet


August 19, 2018

Christian P. Robert - xi'an's og

asymptotics of M³C²L
In a recent arXival, Blazej Miasojedow, Wojciech Niemiro and Wojciech Rejchel establish the convergence of a maximum likelihood estimator based on an MCMC approximation of the likelihood function, as arises with intractable normalising constants. The main result in the paper is a Central Limit theorem for the M³C²L estimator that incorporates an additional asymptotic variance term for the Monte Carlo error, where both the sample size n and the number m of simulations go to infinity, independently of one another. However, I do not fully perceive the relevance of using an MCMC chain to target an importance function [which is used in the approximation of the normalising constant or otherwise for the intractable likelihood], relative to picking an importance function h(.) that can be directly simulated.
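To make this concrete, here is a minimal sketch of the Monte Carlo MLE idea for a model with an intractable normalising constant. The toy exponential-family model, the Gaussian importance function h(.), the sample sizes and the optimiser are all illustrative choices of mine, not taken from the paper; in particular the draws from h are i.i.d. rather than produced by an MCMC chain, i.e. exactly the directly simulated alternative mentioned above.

import numpy as np
from scipy.optimize import minimize_scalar

# Toy unnormalised model f(x; theta) = exp(theta * S(x)) with S(x) = x^2 - x^4,
# whose normalising constant Z(theta) has no convenient closed form.
def S(x):
    return x**2 - x**4

rng = np.random.default_rng(0)

# Importance function h(.): a standard normal that can be simulated directly.
m = 10_000
xs = rng.standard_normal(m)
log_h = -0.5 * xs**2 - 0.5 * np.log(2 * np.pi)

def log_Z_hat(theta):
    # Importance-sampling estimate of log Z(theta) = log E_h[exp(theta*S(X)) / h(X)].
    logw = theta * S(xs) - log_h
    return np.log(np.mean(np.exp(logw - logw.max()))) + logw.max()

def neg_mc_loglik(theta, data):
    # Monte Carlo approximation of the negative log-likelihood of the data.
    return -(theta * S(data).sum() - len(data) * log_Z_hat(theta))

# Synthetic "observations", just to exercise the estimator.
data = rng.normal(0.0, 0.7, size=200)
res = minimize_scalar(neg_mc_loglik, args=(data,), bounds=(-5, 5), method="bounded")
print("Monte Carlo MLE of theta:", res.x)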

by xi'an at August 19, 2018 12:18 PM

Peter Coles - In the Dark

All-Ireland Hurling Finals Day

Just a quick post to note that today is a huge day on the sporting calendar here in Ireland. It’s the final of the All-Ireland Hurling Championship, which will be between holders Galway and Limerick, at Croke Park in Dublin.

It will take some doing for this match to be as exciting as the Semi-Final I watched a few weeks ago in a pub in Maynooth, but you never know. That game ended in a draw, and Galway won the Replay. The other semifinal was also a cracker, with Limerick winning in extra time. Galway are favourites to win the game, but there seems to be more support around these parts for the green of Limerick than the maroon of Galway.

Anyway, if you’re bored this afternoon, and have access to cable or satellite TV, then I suggest having a look. If you’ve never seen hurling before then the first thing that strikes you is the phenomenal speed at which the game is played. The sliotar (ball) can travel from one end of the pitch to the other in a second and the players have to be extremely fit. Brave too. This is definitely not a game for faint hearts!

There was heavy rain last night but it has passed over and it should be a good game. I’m sure the atmosphere will be brilliant in the stadium, but I’ll be happy to watch in the pub (although it’s sure to be crowded).

UPDATE: Half-time Galway 0-9 Limerick 1-10, the underdogs ahead by 4 points. Frenetic and rather scrappy game with lots of wides. Exciting to watch though. I’m up by two pints of Guinness.

UPDATE: Full-time Galway 2-18 Limerick 3-16. Most of the second half was rather one-sided. When Limerick scored their third goal and went 8 points clear I thought it was all over, but suddenly Galway scored two goals and were right back in it. Nerves jangling, Limerick managed to survive eight minutes of stoppage time. Galway had a free at the end that could have tied the scores but it fell short. Exciting finish but Limerick worthy winners.

by telescoper at August 19, 2018 11:24 AM

John Baez - Azimuth

Open Petri Nets (Part 3)

I’ve been talking about my new paper with Jade Master:

• John Baez and Jade Master, Open Petri nets.

In Part 1 we saw the double category of open Petri nets; in Part 2 we saw the reachability semantics for open Petri nets as a double functor. Now I’d like to wrap up by explaining some of the machinery beneath the hood of our results.

I fell in love with Petri nets when I realized that they were really just presentations of free symmetric monoidal categories. If you like category theory, this turns Petri nets from something mysterious into something attractive.

In any category you can compose morphisms f\colon X \to Y and g\colon Y \to Z and get a morphism gf \colon X \to Z. In a monoidal category you can also tensor morphisms f \colon X \to X' and g \colon Y \to Y' and get a morphism f \otimes g \colon X \otimes Y \to X' \otimes Y'. This of course relies on your ability to tensor objects. In a symmetric monoidal category you also have X \otimes Y \cong Y \otimes X. And of course, there is more to it than this. But this is enough to get started.

A Petri net has ‘places’ and also ‘transitions’ going between multisets of places:

From this data we can try to generate a symmetric monoidal category whose objects are built from the places and whose morphisms are built from the transitions. So, for example, the above Petri net would give a symmetric monoidal category with an object

2 susceptible + 3 infected

and a morphism from this to the object

susceptible + 2 infected

(built using the transition infection), and a morphism
from this to the object

susceptible + infected + resistant

(built using the transition recovery) and so on. Here we are using + to denote the tensor product in our symmetric monoidal category, as usual in chemistry.

When we do this sort of construction, the resulting symmetric monoidal category is ‘free’. That is, we are not imposing any really interesting equations: the objects are freely generated by the places in our Petri net by tensoring, and the morphisms are freely generated by the transitions by tensoring and composition.

That’s the basic idea. The problem is making this idea precise!

Many people have tried in many different ways. I like this approach the best:

• José Meseguer and Ugo Montanari, Petri nets are monoids, Information and Computation 88 (1990), 105–155.

but I think it can be simplified a bit, so let me describe what Jade and I did in our paper.

The problem is that there are different notions of symmetric monoidal category, and also different notions of morphism between Petri nets. We take the maximally strict approach, and work with ‘commutative’ monoidal categories. These are just commutative monoid objects in \mathrm{Cat}, so their associator:

\alpha_{A,B,C} \colon (A + B) + C \stackrel{\sim}{\longrightarrow} A + (B + C)

their left and right unitor:

\lambda_A \colon 0 + A \stackrel{\sim}{\longrightarrow} A

\rho_A \colon A + 0 \stackrel{\sim}{\longrightarrow} A

and even—disturbingly—their braiding:

\sigma_{A,B} \colon A + B \stackrel{\sim}{\longrightarrow} B + A

are all identity morphisms.

The last would ordinarily be seen as ‘going too far’, since while every symmetric monoidal category is equivalent to one with trivial associator and unitors, this ceases to be true if we also require the braiding to be trivial. However, it seems that Petri nets most naturally serve to present symmetric monoidal categories of this very strict sort. There just isn’t enough information in a Petri net to make it worthwhile giving them a nontrivial braiding

\sigma_{A,B} \colon A + B \stackrel{\sim}{\longrightarrow} B + A

It took me a while to accept this, but now it seems obvious. If you want a nontrivial braiding, you should be using something a bit fancier than a Petri net.

Thus, we construct adjoint functors between a category of Petri nets, which we call \textrm{Petri}, and a category of ‘commutative monoidal categories’, which we call \textrm{CMC}.

An object of \textrm{Petri} is a Petri net: that is, a set S of places, a set T of transitions, and source and target functions

s, t \colon T \to \mathbb{N}[S]

where \mathbb{N}[S] is the underlying set of the free commutative monoid on S.

More concretely, \mathbb{N}[S] is the set of formal finite linear combinations of elements of S with natural number coefficients. The set S naturally includes in \mathbb{N}[S], and for any function

f \colon S \to S'

there is a unique monoid homomorphism

\mathbb{N}[f] \colon \mathbb{N}[S] \to \mathbb{N}[S']

extending f.
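For concreteness, here is a small sketch (in Python, which is of course nowhere in the paper) of \mathbb{N}[S] as finitely supported multisets, together with the unique monoid homomorphism \mathbb{N}[f] extending a function f. Representing elements by Counter is just my choice.

from collections import Counter

# An element of N[S]: a formal finite linear combination of places with
# natural-number coefficients, represented as a multiset (Counter).
def add(a: Counter, b: Counter) -> Counter:
    """The monoid operation on N[S]: add coefficients pointwise."""
    out = Counter(a)
    out.update(b)
    return out

def N(f):
    """N[f]: the unique monoid homomorphism N[S] -> N[S'] extending f: S -> S'."""
    def Nf(a: Counter) -> Counter:
        out = Counter()
        for place, coeff in a.items():
            out[f(place)] += coeff
        return out
    return Nf

# Example: the element 2·susceptible + 3·infected, pushed forward along a renaming.
x = Counter({"susceptible": 2, "infected": 3})
rename = {"susceptible": "S", "infected": "I"}.get
print(N(rename)(x))                      # Counter({'I': 3, 'S': 2})
print(add(x, Counter({"infected": 1})))  # Counter({'infected': 4, 'susceptible': 2})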

A Petri net morphism from a Petri net

s, t \colon T \to \mathbb{N}[S]

to a Petri net

s', t' \colon T' \to \mathbb{N}[S']

is a pair of functions

f \colon T \to T'

g \colon S \to S'

making the two obvious diagrams commute:

There is a category \textrm{Petri} with Petri nets as objects and Petri net morphisms as morphisms.
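Spelled out, the two commuting squares say that s' \circ f = \mathbb{N}[g] \circ s and t' \circ f = \mathbb{N}[g] \circ t. Reusing the Counter-based \mathbb{N}[-] from the sketch above (again, the encoding is mine, not the paper's), the condition can be checked directly:

def is_petri_morphism(s, t, s2, t2, f, g, transitions):
    """Check that (f, g) is a Petri net morphism from (s, t) to (s2, t2):
    both the source square and the target square must commute for every transition."""
    Ng = N(g)
    return all(
        s2(f(tau)) == Ng(s(tau)) and t2(f(tau)) == Ng(t(tau))
        for tau in transitions
    )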

On the other hand, a commutative monoidal category is a commutative monoid object in \mathrm{Cat}. Explicitly, it’s a strict monoidal category (C,+,0) such that for all objects A and B we have

A + B = B + A

and for all morphisms f and g

f + g = g + f

Note that a commutative monoidal category is the same as a strict symmetric monoidal category where the symmetry isomorphisms

\sigma_{A,B} \colon A + B \stackrel{\sim}{\longrightarrow} B+A

are all identity morphisms. Every strict monoidal functor between commutative monoidal categories is automatically a strict symmetric monoidal functor. So, we let \mathrm{CMC} be the category whose objects are commutative monoidal categories and whose morphisms are strict monoidal functors.

There’s a functor

U \colon \mathrm{CMC} \to \mathrm{Petri}

sending any commutative monoidal category C to its underlying Petri net. This Petri net has the set of objects \mathrm{Ob}(C) as its set of places and the set of morphisms \mathrm{Mor}(C) as its set of transitions, and

s, t \colon \mathrm{Mor}(C) \to \mathrm{Ob}(C) \hookrightarrow \mathbb{N}[\mathrm{Ob}(C)]

as its source and target maps.

Proposition. The functor U \colon \mathrm{CMC} \to \mathrm{Petri} has a left adjoint

F \colon \mathrm{Petri} \to \mathrm{CMC}

This is Proposition 10 in our paper, and we give an explicit construction of this left adjoint.

So that’s our conception of the free commutative monoidal category on a Petri net. It’s pretty simple. How could anyone have done anything else?

Montanari and Meseguer do almost the same thing, but our category of Petri nets is a subcategory of theirs: our morphisms of Petri nets send places to places, while they allow more general maps that send a place to a formal linear combination of places. On the other hand, they consider a full subcategory of our \mathrm{CMC} containing only commutative monoidal categories whose objects form a free commutative monoid.

Other papers do a variety of more complicated things. I don’t have the energy to explain them all, but you can see some here:

• Pierpaolo Degano, José Meseguer and Ugo Montanari, Axiomatizing net computations and processes, in Logic in Computer Science 1989, IEEE, New Jersey, 1989, pp. 175–185.

• Vladimiro Sassone, Strong concatenable processes: an approach to the category of Petri net computations, BRICS Report Series, Dept. of Computer Science, U. Aarhus, 1994.

• Vladimiro Sassone, On the category of Petri net computations, in Colloquium on Trees in Algebra and Programming, Springer, Berlin, 1995.

• Vladimiro Sassone, An axiomatization of the algebra of Petri net concatenable processes, in Theoretical Computer Science 170 (1996), 277–296.

• Vladimiro Sassone and Pavel Sobociński, A congruence for Petri nets, Electronic Notes in Theoretical Computer Science 127 (2005), 107–120.

Getting the free commutative monoidal category on a Petri net right is key to developing the reachability semantics for open Petri nets in a nice way. But to see that, you’ll have to read our paper!


Part 1: the double category of open Petri nets.

Part 2: the reachability semantics for open Petri nets.

Part 3: the free symmetric monoidal category on a Petri net.

by John Baez at August 19, 2018 08:31 AM

August 18, 2018

Peter Coles - In the Dark

Humphrey Lyttelton & Elkie Brooks – Trouble in Mind

Mention the name Elkie Brooks to people of my generation or older and most will think of her popular hits from the late 1970s, especially Pearl’s A Singer which made the UK Top Ten in 1977. Elkie Brooks has however had a long and very distinguished career as a Jazz and Blues singer, including regular performances over the years with trumpeter Humphrey Lyttelton and his band. This particular track was recorded in 2002, when Humph was already in his eighties, but I think it’s a lovely performance so I thought I’d share it here.

Trouble in Mind is a very familiar tune that has been recorded countless times by jazz musicians. In fact an earlier manifestation of Humph’s Band made a very nice instrumental version way back in 1950 which I have on an old Parlophone 78. The tune is usually credited to Richard M. Jones, but it has its roots in much older spirituals and folk songs. There are a couple of things worth mentioning about it despite it being so well known. Although Trouble in Mind is a blues, it is a slightly unusual one because it’s an eight-bar blues rather than the more usual twelve-bar variety. The other thing is that there’s something about this tune that suits a rhythm accompaniment in sixth notes, as exemplified by drummer Adrian Macintosh on this track when the vocal starts.

There’s also some fine trombone on this (by Pete Strange) and a nice bit of banter from Humph at the beginning. Enjoy!

by telescoper at August 18, 2018 11:09 AM

John Baez - Azimuth

Open Petri Nets (Part 2)

I’d like to continue talking about this paper:

• John Baez and Jade Master, Open Petri nets.

Last time I explained, in a sketchy way, the double category of open Petri nets. This time I’d like to describe a ‘semantics’ for open Petri nets.

In his famous thesis Functorial Semantics of Algebraic Theories, Lawvere introduced the idea that semantics, as a map from expressions to their meanings, should be a functor between categories. This has been generalized in many directions, and the same idea works for double categories. So, we describe our semantics for open Petri nets as a map

\blacksquare \colon \mathbb{O}\mathbf{pen}(\mathrm{Petri}) \to \mathbb{R}\mathbf{el}

from our double category of open Petri nets to a double category of relations. This map sends any open Petri net to its ‘reachability relation’.

In Petri net theory, a marking of a set X is a finite multisubset of X. We can think of this as a way of placing finitely many ‘tokens’—little black dots—on the elements of X. A Petri net lets us start with some marking of its places and then repeatedly change the marking by moving tokens around, using the transitions. This is how Petri nets describe processes!

For example, here’s a Petri net from chemistry:

Here’s a marking of its places:

In this example there are only 0 or 1 tokens in each place: we’ve got one atom of carbon, one molecule of oxygen, one molecule of sodium hydroxide, one molecule of hydrochloric acid, and nothing else. But in general we could have any natural number of tokens in each place, as long as the total number of tokens is finite.

Using the transitions we can repeatedly change the markings, like this:


People say one marking is reachable from another if you can get it using a finite sequence of transitions in this manner. (Our paper explains this well-known notion more formally.)
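If you want to play with this notion, here is a toy sketch for ordinary (closed) Petri nets: it enumerates the markings reachable from a starting marking by breadth-first search, firing a transition whenever the current marking contains its source multiset. The SIR-style example net is my own choice of illustration.

from collections import Counter, deque

def fire(marking, src, tgt):
    """Fire one transition: remove its source multiset, add its target multiset.
    Returns None if the transition is not enabled at this marking."""
    if any(marking[p] < n for p, n in src.items()):
        return None
    out = Counter(marking)
    out.subtract(src)
    out.update(tgt)
    return +out  # drop zero entries

def reachable(start, transitions, limit=10_000):
    """Breadth-first search over the markings reachable from `start`."""
    seen = {frozenset(start.items())}
    queue, found = deque([start]), [start]
    while queue and len(found) < limit:
        m = queue.popleft()
        for src, tgt in transitions:
            nxt = fire(m, src, tgt)
            if nxt is not None and frozenset(nxt.items()) not in seen:
                seen.add(frozenset(nxt.items()))
                queue.append(nxt)
                found.append(nxt)
    return found

# Toy SIR-style net: infection S + I -> 2 I, recovery I -> R.
transitions = [
    (Counter({"S": 1, "I": 1}), Counter({"I": 2})),
    (Counter({"I": 1}), Counter({"R": 1})),
]
for m in reachable(Counter({"S": 2, "I": 1}), transitions):
    print(dict(m))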

We need something slightly fancier in our paper since we are dealing with open Petri nets. Let \mathbb{N}[X] denote the set of markings of X. Given an open Petri net P \colon X \nrightarrow Y there is a reachability relation

\blacksquare P \subseteq \mathbb{N}[X] \times \mathbb{N}[Y]

This relation holds when we can take a given marking of X, feed those tokens into the Petri net P, move them around using transitions in P, and have them pop out and give a certain marking of Y, leaving no tokens behind.

For example, consider this open Petri net P \colon X \nrightarrow Y:

Here is a marking of X:

We can feed these tokens into P and move them around using transitions in P:

They can then pop out into Y, leaving none behind:

This gives a marking of Y that is ‘reachable’ from the original marking of X.

The main result of our paper is that the map sending an open Petri net P to its reachability relation \blacksquare P extends to a ‘lax double functor’

\blacksquare \colon \mathbb{O}\mathbf{pen}(\mathrm{Petri}) \to \mathbb{R}\mathbf{el}

where \mathbb{O}\mathbf{pen}(\mathrm{Petri}) is a double category having open Petri nets as horizontal 1-cells and \mathbb{R}\mathbf{el} is a double category having relations as horizontal 1-cells.

I can give you a bit more detail on those double categories, and also give you a clue about what ‘lax’ means, without it becoming too stressful.

Last time I said the double category \mathbb{O}\mathbf{pen}(\mathrm{Petri}) has:

• sets X, Y, Z, \dots as objects,

• functions f \colon X \to Y as vertical 1-morphisms,

• open Petri nets P \colon X \nrightarrow Y as horizontal 1-cells—they look like this:

• morphisms between open Petri nets as 2-morphisms—an example would be the visually obvious map from this open Petri net:

to this one:

What about \mathbb{R}\mathbf{el}? This double category has

• sets X, Y, Z, \dots as objects,

• functions f \colon X \to Y as vertical 1-morphisms,

• relations R \subseteq X \times Y as horizontal 1-cells from X to Y, and

• maps between relations as 2-morphisms. Here a map between relations is a square

that obeys

(f \times g) R \subseteq S

So, the idea of the reachability semantics is that it maps:

• any set X to the set \mathbb{N}[X] consisting of all markings of that set.

• any function f \colon X \to Y to the obvious function

\mathbb{N}[f] \colon \mathbb{N}[X] \to \mathbb{N}[Y]

(Yes, \mathbb{N} is really a functor.)

• any open Petri net P \colon X \nrightarrow Y to its reachability relation

\blacksquare P \colon \mathbb{N}[X] \to \mathbb{N}[Y]

• any morphism between Petri nets to the obvious map between their reachability relations.

Especially if you draw some examples, all this seems quite reasonable and nice. But it’s important to note that \blacksquare is a lax double functor. This means that it does not send a composite open Petri net QP to the composite of the reachability relations for P and Q. So, we do not have

\blacksquare Q \; \blacksquare P = \blacksquare (QP)

Instead, we just have

\blacksquare Q \; \blacksquare P \subseteq \blacksquare (QP)

It’s easy to see why. Take P \colon X \nrightarrow Y to be this open Petri net:

and take Q \colon Y \nrightarrow Z to be this one:

Then their composite QP \colon X \nrightarrow Z is this:

It’s easy to see that \blacksquare Q \; \blacksquare P is a proper subset of \blacksquare (QP). In QP a token can move all the way from point 1 to point 5. But it does not do so by first moving through P and then moving through Q! It has to take a more complicated zig-zag path where it first leaves P and enters Q, then comes back into P, and then goes to Q.
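To be explicit about what is being compared here: composing reachability relations is just the usual composite of relations, and laxity is set inclusion. A tiny sketch (the function names are mine):

def compose(R, S):
    """Composite of relations R ⊆ X×Y and S ⊆ Y×Z, as a set of pairs in X×Z.
    This means: do R first, then S."""
    return {(x, z) for (x, y) in R for (y2, z) in S if y == y2}

# Laxity of the reachability semantics, in these terms: the composite
# compose(reach_P, reach_Q) is always contained in reach_QP, and the
# zig-zag example above shows the inclusion can be proper.
def laxity_holds(reach_P, reach_Q, reach_QP):
    return compose(reach_P, reach_Q) <= reach_QP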

In our paper, Jade and I conjecture that we get

\blacksquare Q \; \blacksquare P = \blacksquare (QP)

if we restrict the reachability semantics to a certain specific sub-double category of \mathbb{O}\mathbf{pen}(\mathrm{Petri}) consisting of ‘one-way’ open Petri nets.

Finally, besides showing that

\blacksquare \colon \mathbb{O}\mathbf{pen}(\mathrm{Petri}) \to \mathbb{R}\mathbf{el}

is a lax double functor, we also show that it’s symmetric monoidal. This means that the reachability semantics works as you’d expect when you run two open Petri nets ‘in parallel’.

In a way, the most important thing about our paper is that it illustrates some methods to study semantics for symmetric monoidal double categories. Kenny Courser and I will describe these methods more generally in our paper “Structured cospans.” They can be applied to timed Petri nets, colored Petri nets, and various other kinds of Petri nets. One can also develop a reachability semantics for open Petri nets that are glued together along transitions as well as places.

I hear that the company Statebox wants these and other generalizations. We aim to please—so we’d like to give it a try.

Next time I’ll wrap up this little series of posts by explaining how Petri nets give symmetric monoidal categories.


Part 1: the double category of open Petri nets.

Part 2: the reachability semantics for open Petri nets.

Part 3: the free symmetric monoidal category on a Petri net.

by John Baez at August 18, 2018 09:00 AM

Lubos Motl - string vacua and pheno

Quintessence is a form of dark energy
Tristan asked me what I thought about Natalie Wolchover's new Quanta Magazine article,
Dark Energy May Be Incompatible With String Theory,
exactly when I wanted to write something. Well, first, I must say that I already wrote a text about this dispute, Vafa, quintessence vs Gross, Silverstein, in late June 2018. You may want to reread the text because the comments below may be considered "just an appendix" to that older text. Since that time, I exchanged some friendly e-mails with Cumrun Vafa. I am obviously more skeptical towards their ideas than they are but I think that I have encountered some excessive certainty of some of their main critics.

Wolchover's article sketches some basic points about this rather important disagreement about cosmology among string theorists. But there are some very unfortunate details. The first unfortunate detail appears in the title. Wolchover actually says that "dark energy might be incompatible with string theory". That's the statement she seems to attribute to Cumrun Vafa and co-authors.



But that misleading formulation is really invalid – it's not what Cumrun is saying. Here, the misunderstanding may be blamed on some sloppy "translation" of the technical terms that has become standard in the pop science press – and the excessively generalized usage of some jargon.




OK, what's going on? First of all, the Universe is expanding, isn't it? We're talking about cosmology, the big bang theory (which I don't capitalize – to make sure that I am not talking about the sitcom), and the expansion of the Universe was already seen in the 1920s although people only became confident about it some 50 years ago.

In the late 1990s, it was observed that the expansion wasn't slowing down, as widely expected, but speeding up. The accelerated expansion may be explained by dark energy. Dark energy is anything that is present everywhere in the vacuum and that tends to accelerate the expansion of the Universe. Dark energy, like dark matter, is invisible by optical telescopes (that's why both of them are called dark). But unlike dark matter which has (like all matter or dust) the pressure \(p=0\), the dark energy has nonzero pressure, namely \(p\lt 0\) or \(p\approx -\rho\) where \(\rho\) is the energy density. That's how dark energy and dark matter differ; dark energy's negative pressure is needed for its ability to accelerate the expansion of the Universe.
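To see why the negative pressure matters (a standard FRW textbook fact, added here for orientation rather than taken from Wolchover's article), recall the acceleration equation for the scale factor \(a(t)\), written in units with \(c=1\):\[

\frac{\ddot a}{a} = -\frac{4\pi G}{3}\,(\rho + 3p).

\] Any component with \(p \lt -\rho/3\) pushes \(\ddot a\) upwards; the cosmological constant is the extreme case \(p=-\rho\), i.e. \(w\equiv p/\rho = -1\), while ordinary matter or dust with \(p=0\) only decelerates the expansion.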

Dark energy is supposed to be a rather general, umbrella term that may be represented by several known, slightly different theoretical concepts described by equations of physics. So far, the by far most widespread and "canonical" or "minimalist" kind of dark energy was the cosmological constant. That's really a number that is independent of space and especially time (it's why it's called a constant) which Einstein added to the original Einstein's equations of the general theory of relativity. Einstein's original goal was to allow the size of the Universe to be stable in time – because his equations seemed to imply that the Universe's size should evolve, much like the height of a freely falling apple. It just can't sit at a constant value – just like the apple usually doesn't sit in the air in the middle of the room.

But the expansion of the Universe was discovered. Einstein could have predicted it because it follows from the simplest form of Einstein's equations, as I said. That could have earned him another Nobel prize when the expansion was seen by Hubble. (Well, Einstein's stabilization by the cosmological constant term wouldn't really work even theoretically, anyway. The balance would be unstable, tending to turn to an expansion or the implosion, like a pencil standing on the tip. Any tiny perturbation would be enough for this instability to grow exponentially.)

That's probably the main reason why Einstein labeled the introduction of the cosmological constant term "the greatest blunder of his life". Well, it wasn't the greatest blunder of his life: the denial of quantum mechanics and state-of-the-art physics in general in the last 30 years of his life was almost certainly a greater blunder.

In the late 1990s, the Universe's expansion was seen to accelerate which is why it seemed obvious that Einstein's blunder wasn't a blunder at all, let alone the worst one: the cosmological constant term seems to be there and it's responsible for the acceleration of the Universe. Suddenly, Einstein's cosmological term (with a different numerical value than Einstein needed – but one that is of the same order) seemed like a perfect, minimalistic explanation of the accelerated expansion. Recall that Einstein's equations say\[

G_{\mu\nu} +\Lambda g_{\mu\nu} = \frac{8\pi G}{c^4} T_{\mu\nu}.

\] Note that even in the complicated SI units, there is no \(\hbar\) here – Einstein's general relativity is a classical theory that doesn't depend on quantum mechanics at all. Here, \[

G_{\mu\nu} = R_{\mu\nu} - \frac{1}{2} R g_{\mu\nu}

\] is the Einstein curvature tensor, constructed from the Ricci tensor and the Ricci scalar \(R\). It's some function of the metric and its first and especially second partial derivatives in the spacetime. On the right hand side of Einstein's equations, \(T_{\mu\nu}\) is the stress-energy tensor that knows about the sources, the density of mass/energy and momentum and their flow.

The \(\Lambda g_{\mu\nu}\), a simple term that adds an additional mixture of the metric tensor to Einstein's equations, is the cosmological constant term. It naturally reappeared in the late 1990s. It's a rather efficient theory. The term doesn't have to be there but in some sense, it's even "simpler" than Einstein's tensor, so why should it be absent? And it seems to explain the accelerated expansion, so we need it.

The theory is really natural which is why the standard cosmological model was the \(\Lambda{CDM}\) model, i.e. a big bang theory with the cold dark matter (CDM) and the cosmological constant term \(\Lambda\).

What about string theory?

String theory really predicts gravity. You may derive Einstein's equations, including the equivalence principle, from the vibrating strings. Einstein's theory of gravity is a prediction of string theory, which is still one of the main reasons to be confident that string theory is on the right track to find a deeper or final theory in physics, to say the least. Aside from gravitons and gravity (and Einstein's equations that may be derived from string theory for this force), string theory also predicts gauge fields and matter fields such as leptons and quarks. They have their (Dirac, Maxwell...) equations and their stress-energy tensors also enter as terms in \(T_{\mu\nu}\) on the right hand side of Einstein's equations.

String theory demonstrably predicts Einstein's equations as the low-energy limit for the massless, spin-two field (the graviton field) that unavoidably arises as a low-lying excitation of a vibrating string. To some extent, this appearance of Einstein's equations is guaranteed by consistency of the theory (or by the relevant gauge invariance, namely the diffeomorphisms) – and string theory is consistent (which is a highly unusual, and probably unprecedented, virtue of string theory among quantum mechanical theories dealing with massless spin-two fields).

Does string theory also predict the cosmological constant term, one that Einstein originally included in the equations? At this level, the answer is unquestionably Yes and Cumrun Vafa and pals surely agree. To say the least, string theory predicts lots of vacua with a negative value of the cosmological constant, the anti de Sitter (AdS) vacua. In fact, those are the vacua where the holographic principle of quantum gravity may be shown rather rigorously – holography takes the form of Maldacena's AdS/CFT correspondence.

There are lots of Minkowski, \(\Lambda=0\), vacua in string theory. And there are also lots of AdS, \(\Lambda\lt 0\), vacua in string theory. I think that the evidence is clear and no one who is considered a real string theorist by most string theorists disputes the statement that both groups of vacua, flat Minkowski vacua and AdS vacua, are predicted by string theory.

The real open question is whether string theory allows the existence of \(\Lambda \gt 0\) (de Sitter or dS) vacua. Those seem to be needed to describe the accelerated expansion of the Universe in terms of the cosmological constant. After 2000, the widespread view – if counted by the number of heads or number of papers – was that string theory allowed the positive cosmological constant. Even though I still find de Sitter vacua in string theory plausible, I believe that it's fair to say that the frantic efforts to spread this de Sitter view – and write papers about de Sitter in string theory – may be described as a sign of group think in the community.

There have always been reasons to doubt whether string theory allows de Sitter vacua at all. At the end of the last millennium, Maldacena and Nunez wrote a paper with a no-go theorem. It was mostly based on supergravity, a supersymmetric extension of Einstein's general relativity and a low-energy limit of superstring theories, but people generally believed that this approximation of string theory was valid in the context of the proof.

Sociologically, you may also want to know that in the 1990s, Edward Witten was "predicting" that the cosmological constant had to be exactly zero (and a symmetry-like principle would be found that implies the vanishing value). He was motivated by the experience with string theory. Even before Maldacena and Nunez and lots of similar work, it looked very hard to establish de Sitter, \(\Lambda \gt 0\) vacua in string theory. However, some of these problems could have been – and were – considered just technical difficulties. Why? Because if the cosmological constant is positive, you don't have any time-like Killing vectors and there can be no unbroken spacetime supersymmetry. Controlled stringy calculations only work when the spacetime supersymmetry is present (and guarantees lots of cancellations etc.) which is why people were willing to think that the difficulties in finding de Sitter vacua in string theory were only technical difficulties – caused by the hard calculations in the case of a broken supersymmetry.

However, aside from Maldacena-Nunez, we got additional reasons to think that string theory might prohibit de Sitter vacua in general. Cumrun Vafa's Swampland – the term for an extension of the (nice stringy) landscape that also includes effective field theories that string theory wouldn't touch, not even with a long stick – implies various general (sometimes qualitative, sometimes quantitative) predictions of string theory that hold in all the stringy vacua, despite their high number. Along with his friend Donald Trump, Cumrun Vafa has always wanted to drain the swamp. ;-)

The Swampland program has produced several, more or less established, general laws of string theory – that may also be considered consequences of a consistent theory of quantum gravity. Wolchover mentions that the most well-established example of a Swampland law is our "weak gravity conjecture". Gravity (among elementary particles) is much weaker than other forces in our Universe – and in fact, it probably has to be the case in all Universes that are consistent at all.

The Swampland business contains many other laws like that, some of them are more often challenged than the weak gravity conjecture. Cumrun Vafa and his co-authors have presented an incomplete sketch of a proof that de Sitter vacua could be banned in string theory for Swampland reasons – for similar general reasons that guarantee that gravity is the weakest force.

This assertion is unsurprisingly disputed by lots of people, especially people around Stanford, because Stanford University (with Linde, Kallosh, Susskind, Kachru, Silverstein, and many others) has been the hotbed of the "standard stringy cosmology" after 2000. They wrote lots of papers about cosmology, starting from the KKLT paper, and the most famous ones have thousands of citations. At some level, authors of such papers may be tempted to think that their papers just can't be wrong.

But even the main claims of papers with thousands of citations ultimately may be wrong, of course. Sadly, I must say that some of this Stanford environment likes to use group think – and arguments about authorities and number of papers – that resembles the "consensus science" about the global warming. Sorry, ladies and gentlemen, but that's not how science works.

Doubts about the KKLT construction are reasonable because the KKLT and similar papers still build on certain assumptions and approximations. I am confident it is correct to say that the authors of some of the critical papers questioning the KKLT (especially the final, de Sitter "uplift" of some intermediate AdS vacua, an uplift that is achieved by the addition of some anti-D3-branes) are competent physicists – at least "basically indistinguishable" in competence from the Stanford folks. See e.g. Thomas Van Riet's TRF guest blog from November 2014 (time is fast, 1 year per year).

Cumrun Vafa et al. don't want to say that string theory has been ruled out. Instead, they say that in string theory, the observed dark energy is represented by quintessence which is just a form of dark energy (read the first sentence of the Wikipedia article I just linked to) – and that's why Wolchover's title that "dark energy is incompatible with string theory" is so misleading. I think that the previous sentence is enough for everyone to understand the main unfortunate terminological blunder in Wolchover's article. Cumrun and pals say that dark energy is described by quintessence, a form of dark energy, in string theory. They don't say that dark energy is impossible in string theory.

Wolchover's blunder may be blamed upon the habit to consider the phrase "dark energy" to be the pop science equivalent of the "cosmological constant". Well, they are not quite equivalent and to understand the proposals by Cumrun Vafa et al., the difference between the terms "dark energy" and "cosmological constant" is absolutely paramount.

Quintessence is a philosophically if not spiritually sounding word but in cosmology, it's just a fancy word for an ordinary time-dependent generalization of the cosmological constant – that results from the potential energy of a new, inflaton-like scalar field. String theory often predicts many scalar fields, some of them may play the role of the inflaton, others – similar ones – may be the quintessence that fills our Universe with the dark energy which is responsible for the accelerated expansion.
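In equations (again a textbook statement rather than anything specific to the papers under discussion): a homogeneous scalar field \(\phi(t)\) with potential \(V(\phi)\) has\[

\rho_\phi = \frac{1}{2}\dot\phi^2 + V(\phi), \qquad p_\phi = \frac{1}{2}\dot\phi^2 - V(\phi),

\] so its equation of state \(w=p_\phi/\rho_\phi\) stays close to \(-1\) as long as the field rolls slowly and the potential term dominates, mimicking a cosmological constant, but it may drift with time. That drift is what distinguishes quintessence from a true constant \(\Lambda\).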

Now, the disagreement between "Team Vafa" and "Team Stanford" may be described as follows:
Team Stanford uses the seemingly simplest description, one using Einstein's old cosmological constant. It's really constant, string theory allows it, and elaborate – but not quite exact – constructions with antibranes exist in the literature. They use lots of sophisticated equations, do many details very accurately and technically, but the question whether these de Sitter vacua exist remains uncertain because approximations are still used. Team Stanford ignores the uncertainty and sometimes intimidates other people by sociology – by a large number of authors who have joined this direction. The cosmological constant may be positive, they believe, and there are very many, like the notorious number \(10^{500}\), ways to obtain de Sitter vacua in string theory. We may live in one of them. Because of the high number, the predictive power of string theory may be reduced and some form of the multiverse or even the anthropic principle may be relevant.

Team Vafa uses a next-to-simplest description of dark energy, quintessence, which is a scalar field. This scalar field evolves and the potential normally needs to be fine-tuned even more so than the cosmological constant. But Team Vafa says that due to some characteristically stringy relationships, the new, added fine-tuning is actually not independent from the old one, the tuning of the apparently tiny cosmological constant, so from this viewpoint, their picture might be actually as bad (or as good) as the normal cosmological constant. The very large hypothetical landscape may be an illusion – all these constructions may be inconsistent and therefore non-existent, due to subtle technical bugs overlooked by the approximations or, equivalently, due to very general Swampland-like principles that may be used to kill all these hypothetical vacua simultaneously. Team Vafa doesn't have too many fancy mathematical calculations of the potential energy and it doesn't have a very large landscape. So in this sense, Team Vafa looks less technical and more speculative than Team Stanford. But one may argue that Team Stanford's fancy equations are just a way to intimidate the readers and they don't really increase the probability that the stringy de Sitter vacua exist.
These are just two very different sketches of how dark energy is actually incorporated in string theory. They differ by some basic statements, by the expectation "how very technical certain adequate papers answering a question should be", and in many other respects. I think we can't be certain which of them, if any, is right – even though Team Stanford would be tempted to disagree. But their constructions simply aren't waterproof and they look arbitrary or contrived from many points of view. And yes, as you could have figured out, I do have some feeling that the way of argumentation by Team Stanford has always been similar to the "consensus science" behind the global warming hysteria. Occasional references to the "consensus" and a large number of papers and authors – and equations that seem complicated but if you think about their implications, they don't really settle the basic question (whether the de Sitter vacua – or the dangerous global warming – exist at all).

Team Vafa proposes a new possibility and I surely believe it deserves to be considered. It's "controversial" in the sense that Team Stanford is upset, especially some of the members such as E.S. But I dislike Wolchover's subtitle:
A controversial new paper argues that universes with dark energy profiles like ours do not exist in the “landscape” of universes allowed by string theory.
What's the point of labeling it "controversial"? It may still be right. Strictly speaking, the KKLT paper and the KKLT-based constructions by Team Stanford are controversial as well. These a priori labels just don't belong to the science reporting, I think – they belong to the reporting about pseudosciences such as the global warming hysteria. Reasonable people just don't give a damn about these labels. They care about the evidence. Cumrun Vafa is a top physicist, he and pals have proposed some ideas and presented some evidence, and this evidence hasn't really been killed by solid counter-evidence as of now.

Incidentally, after less than two months, Team Vafa already has 23+19 citations. So it doesn't look like some self-evidently wrong crackpot papers, like papers claiming that the Standard Model is all about octonions.

I was also surprised by another adjective used by Wolchover:
In the meantime, string theorists, who normally form a united front, will disagree about the conjecture.
Do they form a united front? What is it supposed to mean and what's the evidence that the statement is correct whatever it means? Are all string theorists members of Marine Le Pen's National Front? Boris Pioline could be one but I think that even he is not. ;-) String theorists are theoretical physicists at the current cutting-edge of fundamental physics and they do the work as well as they can. So when something looks clearly proven by some papers, they agree about it. When something looks uncertain, they are individually uncertain – and/or they disagree about the open questions. When a possible new loophole is presented that challenges some older lore or no-go not-yet-theorems, people start to think about the new possibilities and usually have different views about it, at least for a while.

What is Wolchover's "front" supposed to be "united" for or against? String theorists are united in the sense that they take string theory seriously. Well, that's a tautology. They wouldn't be called string theorists otherwise. String theory also implies something so they of course take these implications – as far as they're clearly there – seriously. But is there any valid, non-tautological content in Wolchover's statement about the "united front"?

It's complete nonsense to say that string theorists are "more united as a front" than folks in any other typical scientific discipline that does things properly. String theorists have disagreed about numerous things that didn't seem settled to some of them. I could list many technical examples but one recent example is very conceptual – the firewall argument by the late Joe Polchinski and his team. There were sophisticated constructions and equations in the papers by Polchinski et al. but the existence of the firewalls obviously remained disputed, and I think that almost all string theorists think that firewalls don't exist in any useful operational sense. But they followed the papers by Polchinski et al. to some extent. Polchinski and others weren't excommunicated for a heresy in any sense – despite the fact that the statement "the black holes don't have any interior at all" would unquestionably be a radical change of the lore.

This disagreement about the representation of dark energy within string theory is comparably deep and far-reaching as the firewall wars.

Again, I still assign the probability above 50% to the basic picture of Team Stanford which leads to a cosmological constant from string theory. But I don't think it has been proven (a similar warning I have said about \(P\neq NP\) and other things). I have communicated with many apparently smart and technically powerful folks who had sensible arguments against the validity of the basic conclusions of the KKLT. I am extremely nervous about the apparent efforts of some Stanford folks to "ban any disagreement" about the KKLT-based constructions, a ban that would be "justified" by the existence of many papers and their mutual citations.

That's not how actual science may progress for a very long time. If folks like Vafa have doubts about de Sitter vacua in string theory and all related constructions, and they propose quintessence models that could be more natural than once believed (the simple reasons why quintessence would be dismissed by string theorists including myself just a few years ago), they must have the freedom – not just formally, but also in practice – to pursue these alternative scenarios, regardless of the number of papers in the literature that take KKLT for granted! Only when the plausibility and attractiveness of these ideas really disappears according to the body of experts could it make sense to suggest that Vafa seems to be losing.

These two pictures offer very different sketches of how the real world is realized within string theory. Indeed, the string phenomenological communities that would work on these two possibilities could easily evolve into "two separated species" that can't talk to each other usefully (although both of them would still be trained with the help of the same textbooks up to a basic textbook of string theory). But as long as we're uncertain, this splitting of the research into several different possibilities is simply the right thing that should happen. Putting all our eggs in one basket when we're not quite sure about the right basket would simply be wrong.

Wolchover also mentions the work of Dr Wrase. I haven't read that so I won't comment.

But I will comment on some remarks by Matt Kleban (trained at Team Stanford, now NYU) such as
Maybe string theory doesn’t describe the world. [Maybe] dark energy has falsified it.
Well, that's nice. String theory is surely falsifiable and such things might happen which would be a big event. But I think it's obvious that Kleban isn't really taking the side of the string theory critics. Instead this statement – that dark energy may have falsified string theory – is a subtle demagogic attack against Team Vafa which is whom he actually cares about (he doesn't care about Šm*its). Effectively, Matt is trying to compare Vafa et al. to Šmoits. If the dark energy in string theory doesn't work in the Stanford way, I will scream and cry, Matt says, and you will give it up. Matt knows that the real people whom he cares about wouldn't consider string theory ruled out for similar reasons so he's effectively saying that they shouldn't buy Team Vafa's claims, either.

Sorry, Matt, but that's a demagogy. Team Vafa doesn't really claim that they have falsified string theory. There is a genuine new possibility whether you like to admit it or not. Also, Matt expressed his attacks against Team Vafa using a different verbal construction:
He stresses that the new swampland conjecture is highly speculative and an example of “lamppost reasoning”...
Cute, Matt. I always love when people complain about lamppost reasoning. I've had funny discussions with both Brian Greene and Lisa Randall about this phrase before they published their popular books. Lisa felt very entertained when I said it was actually rational to spend more time looking under the lamppost. But it is rational.

I must explain the proverb here. There exists some mathematical set of possibilities in theoretical physics or string theory but only some of them have been discovered or understood, OK? So we call those things that have been understood or studied (intensely enough) "the insights under the lamppost". Now, the "lamppost reasoning" is a criticism used by some people who accuse others of a specific kind of bias. What is this sin or bias supposed to be? Well, the sin is that these people only search for their lost keys under the lamppost.

Now, this is supposed to be funny and immediately mock the perpetrators of the "sin" and kill their arguments. If you lose your keys somewhere, it's a matter of luck whether the keys are located under a lamppost, where you could see them, or elsewhere, where you couldn't. So obviously, you should look for the keys everywhere, including places that aren't illumined by the lamp, Kleban and Randall say, among others.

But there's a problem with this recommendation. You can't find the keys in the dark too easily – because you don't see anything there. Perhaps if you sweep the whole surface by your fingers. But it's harder and the dark area may be very large. If you want to increase the probability that you find something, you should appreciate the superiority of vision and primarily look at the places where you can see something! You aren't guaranteed to find the keys but your probability to find them per unit time may be higher because you can see there.

And there might even exist reasons why the keys are even more likely to be under the lamppost. When you were losing them, you probably preferred to walk in places where you could see, too. You may have lost them while checking the content of your wallet, and you were more likely to do it under the lamppost. So that's why you were more likely to be under the lamppost at that time, too! Similarly, when God was creating the world, assuming that Her mathematical skills are similar to ours, She was likely to start with discovering things that were relatively easy for us to discover and clarify, too. So She was more likely to drop our Universe under the lamppost, too, and that's why it's right to focus our attention there, too.

For a researcher, it's damn reasonable to focus on things that are easier to be understood properly.

The two situations (keys, physics) aren't quite analogous but they're close enough. My claim is even clearer in the metaphorical "lamppost" of physics. If you want to settle a question, such as the existence of de Sitter vacua, you simply have to build primarily on the concepts – both general principles and the particular constructions – that have been understood well enough. You can't build on the things that are completely unknown. And if you build on things that are only known vaguely or with a lot of uncertainty, you can be misled easily!

So in some sense, I am saying that you should look for your keys under the lamppost, and then increase the sensitivity of your retinas and increase the range that you have control over. That's how knowledge normally grows – but there always exist regions in the space of ideas and facts that aren't understood yet. The suggestion that claims in physics may be supported by constructions that are either completely unknown or badly understood is just ludicrous. Such suggestions may sound convincing to their advocates because the keys may be anywhere, the keys may be in the dark. But in the dark of ignorance, science can't be applied and we must appreciate that all our scientific conclusions may only be based on the things that have been illuminated – all of our legitimate science is built out of the insights about the vicinity of the lamppost.

Whoever claims to have knowledge derived from the dark is a charlatan – sorry but it's true, Lisa and Matt! In this particular case, it's totally sensible for Team Vafa to evaluate the experience with the known constructions of the vacua and conclude that it seems rather convincing that no de Sitter vacua exist in string theory and the existing counterexamples are fishy and likely to be inconsistent. This evidence is circumstantial because it builds on the "set of constructions" that have been studied or illuminated – constructions under the lamppost – but that's still vastly better than if you make up your facts and make far-reaching claims about the "world in the dark" that we have no real evidence of!

You surely expect comparisons to politics as well. I can't avoid the feeling that the Team Stanford claim that de Sitter vacua simply have to exist is just another example of some egalitarianism or non-discrimination. Like men and women, anti de Sitter and de Sitter vacua must be treated as equal. But sorry to say, like men and women, de Sitter and anti de Sitter vacua are simply not equal. The constructions of these two classes within string theory look very different and unlike the anti de Sitter vacua, it's plausible and at least marginally compatible with the evidence that the de Sitter vacua don't exist at all. A Palo Alto leftist could prefer a non-discrimination policy but the known facts, evidence, and constructions surely do discriminate between de Sitter and anti de Sitter spaces – and Team Vafa, like any honest scientist who actually cares about the evidence, assigns some importance to this highly asymmetric observation!

by Luboš Motl (noreply@blogger.com) at August 18, 2018 08:15 AM

Lubos Motl - string vacua and pheno

Search for ETs is more speculative than modern theoretical physics
Edwin has pointed out a new tirade against theoretical physics,
Theoretical Physics Is Pointless without Experimental Tests,
that Abraham Loeb published in the pages of Scientific American, which used to be an OK journal some 20 years ago. The title itself seems plagiarized from Deutsche Physik, or Aryan Physics – which may be considered ironic for Loeb, who was born in Israel. And in fact, like his German role models, Loeb indeed tries to mock Einstein as well – and blame his mistakes on the usage of thought experiments:
Einstein made great discoveries based on pure thought, but he also made mistakes. Only experiment and observation could determine which was which.

Albert Einstein is admired for pioneering the use of thought experiments as a tool for unraveling the truth about the physical reality. But we should keep in mind that he was wrong about the fundamental nature of quantum mechanics as well as the existence of gravitational waves and black holes...
Loeb has a small, unimportant plus for acknowledging that Einstein was wrong on quantum mechanics. However, as an argument against theoretical physics based on thought experiments and on the emphasis on the patient and careful mental work in general, the sentences above are at most demagogic.

The fact that Einstein was wrong about quantum mechanics, gravitational waves, or black holes doesn't imply anything wrong about the usage of thought experiments and other parts of modern physics. There's just no way to credibly show such an implication. Other theorists have used better thought experiments, have thought about them more carefully, and some of them have correctly figured out that quantum mechanics had to be right and gravitational waves and black holes had to exist.

The true fathers of quantum mechanics, especially Werner Heisenberg, were really using Einstein's new approach based on thought experiments, principles, and just like Einstein, they carefully tried to remove the assumptions about physics that couldn't have been operationally established (such as the absolute simultaneity killed by special relativity; and the objective existence of values of observables before an observation, killed by quantum mechanics).

Note that gravitational waves as well as black holes were detected many decades after their theoretical discovery. The theoretical discoveries almost directly followed from Einstein's equations. So Einstein's mistakes meant that he didn't trust (his) theory enough. It surely doesn't mean and cannot mean that Einstein trusted theories and theoretical methods too much. Because Loeb has made this wrong conclusion, it's quite some strong evidence in favor of a defect in Loeb's central processing unit.



The title may be interpreted in a way that makes sense. Experiments surely matter in science. But everything else that Loeb is saying is just wrong and illogical. In particular, Loeb wrote this bizarre paragraph about Galileo and timing:
Similar to the way physicians are obliged to take the Hippocratic Oath, physicists should take a “Galilean Oath,” in which they agree to gauge the value of theoretical conjectures in physics based on how well they are tested by experiments within their lifetime.
Well, I don't know how I could judge theories according to experiments that will be done after I die, after my lifetime. That's clearly impossible so this restriction is vacuous. On the other hand, is it OK to judge theories according to experiments that were done before our lifetimes or before physicists' careers?

You bet. Experimental or empirical facts that have been known for a long time are still experimental or empirical facts. In most cases, they may be repeated today, too. People often don't bother to repeat experiments that re-establish well-established truths. But these old empirical facts are still crucial for the work of every theorist. They are sufficient to determine lots of theoretical principles.

You know, it's correct to say that science is a dialogue between the scientist and Nature. But this is only true in the long run. It doesn't mean that every day or every year, both of them have to speak. If Nature doesn't want to speak, She has the right to stay silent. And She often stays silent even if you complained that She doesn't have the right. She ignores your restrictions on Her rights! So at the LHC after the Higgs boson discovery, Nature chose to remain silent so far – or She kept on saying "the Standard Model will look fine to you, human germ".

You can't change this fact by some wishful thinking about "dialogues". Theorists just didn't get new post-Higgs data from the LHC because so far, there are no new data at the LHC. They need to keep on working which makes it obvious that they have to use older facts and new theoretical relationships between them, new hypotheses etc. In the absence of new experimental data, it is obvious that theorists' work has to be overwhelmingly theoretical or, in Loeb's jargon, it has to be a monologue! When Nature has something new and interesting to say (through experiments), Nature will say it. But theorists can't be silent or "doing nothing" just because Nature is silent these years! Only a complete idiot may fail to realize these points or agree with Loeb.




What Loeb actually wants to say is that a theorist should be obliged to plan the experiments that will settle all his theoretical ideas within his lifetime. But that's not possible. The whole point of scientific research in physics is to study questions about the laws of Nature that haven't been answered yet. And because they haven't been answered yet, people don't know and can't know what the answer will be – and even when it will be found.

An experimenter (or a boss or a manager of an experimental team) may try to plan what the experiment will do, when it will do these things, and what are the answers that it could provide us with. Even this planning sometimes goes wrong, there are delays etc. But this is not the main problem here. The real problem is that the result of a particular experiment is almost never the real question that people want to be answered. An experiment is often just a step towards adjusting our opinions about a question – and whether this step is a big or small one depends on what the experimental outcome actually is, and this is not known in advance.

Loeb has mentioned examples of such questions himself. People actually wanted to know whether there were black holes and gravitational waves. But a fixed experiment with a fixed budget, predetermined sensitivity etc. simply cannot be guaranteed to produce the answer. That's the crucial point that kills Loeb's Aryan Physics as a proposed (not so) new method to do science.

For example, both gravitational waves and black holes are rather hard to see. Similarly, the numerical value of the cosmological constant (or vacuum energy density) is very small. It's this smallness that has implied that one needed a long – and impossible to plan – period of time to discover these things experimentally.

Because black holes, gravitational waves, and a positive cosmological constant needed fine gadgets – and it was not known in advance how fine they had to be – does it mean that the theorists should be banned from studying these questions and concepts? The correct answer is obviously No – while Loeb's answer is Yes. Almost all of theoretical physics is composed of such questions. We just can't know in advance how much time will be needed to settle the questions we care about (and, as Edwin emphasized, there is nothing special about the timescale given by "our lifespan"). We can't know what the answers will be. We can't know whether the evidence that settles these questions will be theoretical in character, dependent on somewhat new experimental tools, or dependent on completely new experimental tools, discoveries, and inventions.

None of these things about the future flow of evidence can be known now (otherwise we could settle all these things now!), which is why it's impossible for these unknown answers to influence what theorists study now! The influences that Loeb demands would violate causality. If the theorists knew in advance when the answer will be obtained, they would really have to know what the answer is – as I mentioned above, the confirmation of a null hypothesis always means that the answer to the interesting qualitative question was postponed. But then the whole research would be pointless.

So if science followed Loeb's Aryan Physics principles, it would be pointless! The real science follows the scientific method. Scientists must make decisions and conclusions, often conclusions blurred by some uncertainty, right now, based on the facts that are already known right now – not according to some 4-year plans, 5-year plans, or 50-year plans. And if their research depends on some assumptions, they have to articulate them and go through the possibilities (ideally all of them).

It's also utterly demagogic for him to talk about the "Galilean Oath" because Galileo Galilei disagreed with ideas that were very similar to Loeb's. In particular, Galileo never avoided the formulation of hypotheses that could have needed a long time to be settled. One example where he was wrong was Galileo's belief that comets were atmospheric phenomena. That belief looks rather silly to me (didn't they already observe the periodicity of some comets, by the way?) but the knowledge was very different then. Science needed a long time to really settle the question.

But more generally, Galileo did invent lots of conjectures and hypotheses because those were the real new concepts that became widespread once he started the new method, the scientific method. Google search for "Galileo conjectured" or "Galileo hypothesized". Of course you get lots of hits.

As e.g. Feynman said in his simple description of the scientific method, the scientific method to search for new laws works as follows: First, we guess the laws. Then we compute consequences. And then we compare the consequences to the empirical data.

Note the order of the steps: the guess must be at the very beginning, scientists must be free to present all such possible hypotheses and guesses, and the computation of the consequences must still be close to the beginning. Loeb proposes something entirely different. He wants some planning of future experiments to be placed at the beginning, and this planning should restrict what the physicists are allowed to think about in the first place.

Sorry, that wouldn't be science and it couldn't have produced interesting results, at least not systematically. And these restrictions are indeed completely analogous to the bogus restrictions that the church officials – and later various philosophers etc. – tried to place on scientific research. Like Loeb, the church hierarchy also wanted the evidence to be direct in all cases. But one of Galileo's ingenious insights was that the evidence may often be indirect or very indirect but one may still learn a great deal of insights from it.

The simplest example of this "direct vs indirect" controversy is the telescope. Galileo improved the telescope technology and made numerous new observations – such as those of the Jovian moons. The church hierarchy actually disputed that those satellites existed because the observation by telescopes wasn't direct enough for them. It took many years before people realized how incredibly idiotic such an argument was. It would be a straight denial of the evidence. The telescopes really see the same thing as the eyes when both see something. Sometimes, telescopes see more details than the eyes – so they must be considered nothing other than improved eyes. The observations from eyes and telescopes are equally trustworthy. But telescopes have a better resolution.

The laymen trust telescopes today even though the telescope observations are "indirect" ways to see something. But the tools to observe and deduce things in physics have become vastly more indirect than they were in Galileo's lifetime. And most laymen – including folks like Loeb – simply get lost in the long chains of reasoning. That's one reason why many people distrust science. Because they haven't verified these chains individually (and most laymen wouldn't be smart or patient enough to do so), they believe that the long chains of reasoning and evidence just cannot work. But they do work and they are getting longer.

The importance of reasoning and theory-based generalizations was increasing much more quickly during Newton's lifetime – and it kept on increasing at an accelerating rate. Newton united the celestial and terrestrial gravity, among other things. The falling apple and the orbiting Moon move because of the very same force that he described by a single formula. Did he have a "direct proof" that the apple is doing the same thing in the Earth's gravitational field as the Moon? Well, you can't really have a direct proof of such a statement – which could be described as a metaphor by some. His theory was natural enough and compatible with the available tests. Some of these tests were quantitative yet not guaranteed at the beginning. So of course they increased the probability that the unification of celestial and terrestrial gravity was right. But whether such confirmations would arise, how strong and numerous they would be, and when they would materialize just isn't known at the beginning.
The risk for physics stems primarily from mathematically beautiful “truths,” such as string theory, accepted prematurely for decades as a description of reality just because of their elegance.
OK, this criticism of "elegance" is mostly a misinterpretation of pop science. Scientists sometimes describe their feelings – how their brains feel nicely when things fit together. Sometimes they only talk about these emotional things in order to find some common ground with a journalist or another layman. But at the end, this type of beauty or elegance is very different from the beauty or elegance experienced by the laymen or artists. The theoretical physicists' version of beauty or elegance reflects some rather technical properties of the theories and the statement that these traits increase the probability that the theory is right may be pretty much proven.

But even if you disagree with these proofs, it doesn't matter because the scientific papers simply don't use the beauty or elegance arguments prominently. When you read a new paper about some string dualities, string vacua, or anything of the sort, you don't really read "this would be beautiful, and therefore the value of some quantity is XY". Only when there are actual calculations of XY do the authors claim that there is some evidence. Otherwise they call their propositions conjectures or hypotheses. And sometimes they use these words that remind us of the uncertainty even when there is a rather substantial amount of evidence available, too.

But the uncertainty is unavoidable in science. A person who feels sick whenever there is some uncertainty just cannot be a scientist. Despite the uncertainty, a scientist has to determine what seems more likely and less likely right now. When some things look very likely, they may be accepted as facts on a preliminary basis. Some other people's belief in these propositions may be weaker – and they may claim that the proposition was accepted prematurely. But at the end, some preliminary conclusions are being made about many things. Science just couldn't possibly work without them.

By the way, I forgot to discuss the subtitle of Loeb's article:
Our discipline is a dialogue with nature, not a monologue, as some theorists would prefer to believe
Note that he emphasizes that theoretical physics is "his discipline". It sounds similar to Smolin's fraudulent claims that he was a "string theorist". Smolin isn't a string theorist and doesn't have the intellectual abilities to ever become a string theorist. Whether Loeb is a theoretical physicist is at least debatable. He's the boss of Harvard's astronomy department. The word "astrophysicist" would surely be defensible. But the phrase "theoretical physicist" isn't quite the same thing. I hope that you remember Sheldon Cooper's explanation of the difference between a rocket scientist and a theoretical physicist.

Why doesn't Missy just tell them that Sheldon is a toll taker at the Golden Gate Bridge? ;-)

Given Loeb's fundamental problems with the totally basic methodology of theoretical physics – including thought experiments and long periods of careful and patient thinking uninterrupted by experimental distractions – I think it is much more reasonable to say that Loeb clearly isn't a theoretical physicist so his subtitle is a fraudulent effort to claim some authority that he doesn't possess.

OK, Loeb tried to hijack Galileo's name for some delusions about (or against) modern physics that Galileo would almost certainly disagree with. Galileo wouldn't join these Aryan-Physics-style attacks on theoretical physics. At some level, we may consider him a founder of theoretical physics, too.

SETI vs string theory

But my title refers to a particular bizarre coincidence in Loeb's criticism of theorists' thinking about questions that could be experimentally inaccessible for the rest of our (or some living person's?) lifetimes. He wants the experimental results right now, doesn't he? A funny thing is that Loeb is also a key official at the Breakthrough Starshot Project, Yuri Milner's $100 million kite to be sent to greet the oppressed extraterrestrial minorities who live near Alpha Centauri, the nearest star to us aside from the Sun.

String theory is too speculative for him but the discussions with the ETs are just fine, aren't they? Loeb seems aware of the ludicrous situation in which he has maneuvered himself:
At the same time, many of the same scientists that consider the study of extra dimensions as mainstream regard the search for extraterrestrial intelligence (SETI) as speculative. This mindset fails to recognize that SETI merely involves searching elsewhere for something we already know exists on Earth, and by the knowledge that a quarter of all stars host a potentially habitable Earth-size planet around them.
From his perspective, the efforts to chat with the extraterrestrial aliens are less speculative than modern theoretical physics. Wow. Why is it so? His argument is cute as well. SETI is just searching for something that is known to exist – intelligent life. However, the thing that just searches for something that is known to exist – intelligent life – would have the acronym SI only and it would be completely pointless because the answer is known. SETI also has ET in the middle, you know, which stands for "extraterrestrial". And Loeb must have overlooked these two letters altogether.

It is not known at all whether there are other planets where intelligent life exists, and if they exist, what is their density, age, longevity, appearance, and degree of similarity to the life on Earth. It's even more unknown or speculative how these hypothetical ETs, if they exist near Alpha Centauri, would react to Milner's kite. We couldn't even reliably predict how our civilization would react to a similar kite that would arrive to Earth. How could we make realistic plans about the reactions of a hypothetical extraterrestrial civilization?

On the other hand, string theory is just a technical upgrade of quantum field theory – one that looks unique even 50 years after the birth of string theory. Quantum field theory and string theory yield basically the same predictions for the doable experiments, quantum field theory is demonstrably the relevant approximation of stringy physics, and this approximation has been successfully compared to the empirical data. Everything seems to work.

The extra dimensions are just additional scalar fields on the stringy world sheet, analogous to fields that are known to exist (and in this sense, the addition of an extra dimension is as mundane as the addition of an extra flavor of leptons or quarks). We have theoretical reasons to think that the total number of spacetime dimensions should be 10 or 11. Unlike the expectations about the ETs, this is not mere prejudice. There are actual calculations of the critical dimension. Joe Polchinski's "String Theory" textbook contains 7 different calculations of \(D=26\) for the bosonic string in the first volume; the realistic superstring analogously has \(D=10\). This is not like saying "there should be cow-like aliens near Alpha Centauri because the stars look alike and I like this assertion".
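To give a flavor of why the critical dimension isn't a matter of taste, here is a compressed sketch of one of those standard arguments, the light-cone-gauge counting (the conventions and the careful justification of the regularization are in the textbook). The open-string masses obey \(\alpha' M^2 = N - a\), where the normal-ordering constant comes from the zero-point energies of the \(D-2\) transverse oscillators,

\[
a \,=\, -\frac{D-2}{2}\sum_{n=1}^{\infty} n \;\to\; -\frac{D-2}{2}\,\zeta(-1) \,=\, \frac{D-2}{24},
\]

using \(\zeta(-1)=-1/12\). The first excited state \(\alpha_{-1}^i|0;k\rangle\) is a transverse vector of \(SO(D-2)\), so Lorentz invariance forces it to be massless; \(N=1\) and \(M^2=0\) then require \(a=1\), i.e. \(D-2=24\) and \(D=26\). The analogous counting with the world sheet fermions included gives \(D=10\) for the superstring.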

How can someone say that this research of extensions of successful quantum field theories is as speculative as Skyping with extraterrestrial aliens, let alone more speculative than those big plans with the ETs? At some moments, you can see that some people have simply lost it. And Loeb has lost it. It makes no sense to talk to him about these matters. He seems to hate theoretical physics so fanatically that he's willing to team up not only with the Šmoit-like crackpots but also with extraterrestrial aliens in his efforts to fight against modern theoretical physics.

Too bad, Mr Loeb, but even if extraterrestrial intelligent civilizations exist, it won't help your case because these civilizations – because of the adjective "intelligent" – know that string theory is right and you are full of šit.

And that's the memo.



P.S.: I forgot to discuss the "intellectual power" paragraph:
Given our academic reward system of grades, promotions and prizes, we sometimes forget that physics is a learning experience about nature rather than an arena for demonstrating our intellectual power. As students of experience, we should be allowed to make mistakes and correct our prejudices.
Now, this is a bizarre combination of statements. Loeb says "physics is about" learning, not demonstrating our intellectual power. "Physics is about" is a vague sequence of words, however. We should distinguish two questions: What drives people to do physics? And what determines their success?

What primarily drives the essential people to do physics is curiosity. Physicists want to know how Nature works. String theorists want lots of more detailed questions about Nature to be answered. Their curiosity is real and they don't give a damn whether an ideologue wants to prevent them from studying some questions: the curiosity is real, they know that they want to know, and some obnoxious Loeb-style babbling can't change anything about it.

Some people are secondary researchers. They do it because it's a good source of income or prestige or whatever. They study it because others have made it possible, they created the jobs, chairs, and so on. But the primary motivation is curiosity.

But then we have the question of whether one succeeds. The intellectual power isn't everything but it's obviously important. Loeb clearly wants to deny this importance – but he doesn't want to do it directly because the statement would sound idiotic, indeed. But why does he feel so uncomfortable about the need for intellectual power in theoretical physics?

He presents the intellectual power as the opposite of the validity of physical theories. This contrast is the whole point of the paragraph above. But this contrast is complete nonsense. There is no negative correlation between "intellectual power" and "validity of the theories that are found". On the contrary, the correlation is pretty much obviously positive.

At the end, his attack against the intellectual power is fully analogous to the statement that ice-hockey isn't about the demonstration of one's physical strength and skills, it's about scoring goals. When some parts are emphasized, the sentence is correct. But not too correct. The demonstration of the physical skills and strength is also "what ice-hockey is about". It's what drives some people. And the skills and strength are needed to do it well, too. The rhetorical exercise "either strength, or goals" – which is so completely analogous to Loeb's "either intellectual power, or proper learning of things about Nature" – is just a road to hell. The only possible implication of such a proposition would be to say that "people without the intellectual power should be made theoretical physicists". Does he really believe this makes any sense? Or why does he mix the validity of theories with the intellectual power in this negative way?

Well, let me tell you why. Because he is jealous of some people's superior intellectual powers compared to his. And he is making the bet – probably correctly – that the readers of Scientific American's pages are dumb enough not to notice that his rant is completely illogical, from the beginning to the end.

by Luboš Motl (noreply@blogger.com) at August 18, 2018 08:15 AM

Lubos Motl - string vacua and pheno

Deep thinkers build conjectures upon conjectures upon 5+ more floors
Among the world's string theorists, Sheldon Cooper has given the most accurate evaluation (as far as I can say) of the critics of string theory:
While I have no respect for Leslie [Winkle, a subpar scientist designed to resemble a hybrid of Sabine Hossenfelder and Lee Smolin] as a scientist or a human being for that matter we have to concede her undeniable expertise in the interrelated fields of promiscuity and general sluttiness.
Not even Edward Witten has ever put it this crisply. Winkle has rightfully thanked Sheldon for that praise. Well, I also don't have any respect for the string theory haters as scientists or human beings, for that matter. But I am regularly reminded that the disagreement is much deeper than different opinions about some technical questions. It's a disagreement about the basic ethical and value system.



Many stupid things have been written by journalists and the string theory haters – the difference between these two groups is often tiny – as reactions to the controversies among string theorists concerning the cosmological constant or quintessence and most of these stupid proclamations have been discussed dozens of times on this blog and it's boring to discuss the same stupidities all the time.





But there's one relatively new slogan that has apparently become popular among these individuals. Not Even Wrong, a leading website of the crackpots, has released its 10460th rant
Theorists with a Swamp, not a Theory
Here you have the slogan in four variations:
Will Kinney: The landscape is a conjecture. The “swampland” is a conjecture built on a conjecture.

Sabine Hossenfelder: The landscape itself is already a conjecture build [sic] on a conjecture, the latter being strings to begin with. So: conjecture strings, then conjecture the landscape (so you don’t have to admit the theory isn’t unique), then conjecture the swampland because it’s still not working.

Lars: “It’s conjectures all the way down.” Conjecture built on guess / In turn that’s built on hunch / The latter really rests / On inference a bunch

Peter Woit: The problem is that you don’t know what the relevant string theory equations are. So, this is a conjecture about a conjecture: / First conjecture: There is a well-defined theory satisfying a certain list of properties. / Second conjecture: The equations of this unknown theory do or don’t have certain specific properties.
The sudden explosion of this meme shows many things. The first thing is that they are just talking heads who mindlessly parrot slogans they just heard from other members of that echo chamber. But the content of the slogan is more damning.

You know: All of these individuals clearly have a severe psychological problem with uncertainty and the mental constructions building on uncertain assumptions. But this building upon uncertain starting points is what science is all about! And the more advanced and the deeper the science – and especially modern theoretical physics – becomes, the more floors stacked on top of each other the skyscraper of knowledge has.

Scientists try to connect the bricks as tightly as they can – and they wrestle with the uncertainty as much as possible. But the fact is that at least some uncertainty about important physical questions can almost never be eliminated. Does it mean that physicists should give up?

This qualitative lesson isn't something I only converged to around the time I was given a PhD or anything like that. The excitement about the ability to build very tall skyscrapers from the metaphorical (intellectual) bricks was something that I already experienced when I was 3 years old – and even more so when I was an older kid.



For creative children. Czech product. Designed with the help of kids. "Finally we can build a proper castle, what do you say, bro?"

The people who aren't thrilled with this construction of tall buildings made out of uncertain ideas are simply not curious people. They're closer to pigs than to theoretical physicists. But the likes of Peter Woit not only fail to be thrilled. They are openly – and, as you can check, hysterically – hostile towards these key drivers of all of modern science – the curiosity and the desire to have a chance to see the grand structures of ideas underlying the Universe in their full glory, with their actual complex relationships. So they're on the opposite side of the pig from the theoretical physicists. What is the proper name of the creatures whose coordinates may be written as

\[
\vec x_{\rm Šmoits} = 2 \vec x_{\rm Pig} - \vec x_{\text{theoretical physicist}}?
\]

Oh, I see. The name of the anti-theoretical physicists, relative to the pigs at the origin, is Šmoits! But seriously, this is something so essential that I could never accept it or tolerate the environment of immoral, intellectually dead, uncurious, arrogant morons such as Woit and Hossenfelder. A nation where this kind of creature is allowed to influence the public discourse vis-a-vis science is fudged up and deserves to go extinct as soon as possible.

You know, I only "discovered" Richard Feynman when I was 17 or so. But these basic things, such as the ability to "live with the uncertainty" and to "not give up the desire to figure things out because of the uncertainty", were with me much earlier, as I mentioned. But Feynman has described a true scientist's relationship to the doubts and uncertainty in the BBC program:
I can live with doubt and uncertainty and not knowing. I think it's much more interesting to live not knowing than to have answers which might be wrong. I have approximate answers and possible beliefs and different degrees of certainty about different things. But I am not absolutely sure of anything. And there are many things I know nothing about. But I don't have to know any answers. I don't feel frightened by not knowing things, by being lost in the mysterious Universe without having any purpose – which is the way it really is, as far as I can tell, possibly. It doesn't frighten me.
Amen to that. Feynman was really comparing his – a scientist's – view on uncertainty with the attitude of non-scientists such as the religious people. You can't really be a scientist if you have a serious psychological problem with uncertainty, doubt, and not knowing things. You can't really do theoretical physics if you find it absurd or impossible to build additional floors on top of assumptions that aren't quite certain – because, as Feynman mentioned, nothing is really quite certain in natural science.

And the non-scientists who believe that they may be certain about something are almost universally wrong. They just "totally" believe in wrong things about Nature. Science is the only reliable method to converge towards the most accurate answers but it still can't eliminate the uncertainty altogether. In fact, science's admission that it doesn't eliminate the uncertainty about all big questions – a humility of the scientific method, you might say – is one of the key reasons why science works so much better than the alternative, "pompous" methods to find the truth (such as the organized religions).

So the existence of the huge number of de Sitter vacua wasn't ever quite certain and it is still not certain. But people had to build on it, elaborate on the assumptions, try to get as far as they can, find the complicated chains of implications that are nevertheless almost certain. The same holds for other scenarios that are reasonably possible such as the Vafa Team's quintessence picture. Science simply is about the constant formulation of conjectures and hypotheses – and conjectures based upon conjectures; conjectures based upon conjectures based upon conjectures (the Šmoits couldn't even envision such a complicated thing but the actual theoretical physicists have to deal with constructions of this sort and even analogous constructions with 5+ floors – and not only talk about these chains).

If you have a problem with the absence of perfect certainty, and you just can't build new ideas in such a situation, if you can't focus on thinking about what some (uncertain) assumptions, hypotheses, conjectures, or axioms imply, it simply means that you can't be a theoretical physicist. Every person who has this psychological problem with building on uncertain axioms but who pretends to be a scientist is a 100% fraud and every person who believes this fraudster's claim that she is a scientist is a complete moroness. (I introduced some affirmative action so that the [never] fudged up feminists can't complain.) They totally suck as thinkers. And if you start to brag about this fatal intellectual defect of yours, others in your environment should appreciate that you're as far from a pig as theoretical physicists are – but you're on the opposite side of the pig from the theoretical physicists. Given these assumptions, you're a Šmoit, a pile of junk.

You need to be treated as junk, otherwise there is something profoundly wrong about the whole society that harbors this junk.

by Luboš Motl (noreply@blogger.com) at August 18, 2018 08:15 AM

Lubos Motl - string vacua and pheno

Scientists really mustn't pick answers according to the public perception
A stringy summer workshop paid for by Jim Simons is underway in Stony Brook, Long Island, New York. Cumrun Vafa – who arguably has the closest personal relationship with Simons (a rich guy and accomplished mathematician) among the string theorists – is introducing (almost?) all the talks.

One of the talks was given by Thomas Van Riet. He presents some reasons to doubt the KKLT construction. Lots of his equations are very specific. I think it's clear he knows all the warping factors and terms in the potential etc. at least as well as Team Stanford. Thomas wrote a 2014 TRF guest blog about the very same issue and you may compare the talk with the text and decide how much progress has been made in almost 4 years.



Just to be sure, KKLT stands for Kachru-Kallosh-Linde-Trivedi and it's the famous technical stringy paper with some 2,500 citations that claims that the large number – later estimated as the notorious \(10^{500}\) – of different de Sitter compactifications may be constructed as type IIB string theory flux vacua. That's the technical basis for the more general claim that string theory implies the existence of a "huge landscape", and therefore the "multiverse", and therefore the relevance of the "anthropic principle".

These three interpretations of the paper are increasingly uncertain or controversial. But even the zeroth step, the existence of the de Sitter compactifications, is uncertain. My subjective probability that these vacua exist is over 50% most of the time but sometimes it's below 50%. The KKLT construction has been presented as a sure thing primarily by Team Stanford.

Vafa and others have mentioned that some of this marketed certainty has always been due to wishful thinking. Experimental cosmologists seemed to clearly observe that we live in a de Sitter space, so string theory had better predict a de Sitter space, too. Let's agree about it as quickly as possible, some of the people partly thought around 2000 – I've heard those sentiments.



The KKLT paper is 15 years old and there have always been doubts about the validity of the construction, especially the final steps when an anti de Sitter "intermediate" vacuum is "uplifted" to a de Sitter space through the addition of an anti-D3-brane (this final step always seemed rather arbitrary to me – to say the least, I've never believed that a totally independent researcher far from Palo Alto would have decided that this addition of the anti-D3-branes is "the right" step that implies the existence of lots of de Sitter vacua in string theory; it smelled like someone just more or less randomly picked a possible, complicated enough, excuse that sounded plausible). Most of the people who were vocal critics have always been crackpots and they have offered no valid arguments against the construction – and no competing ideas that would impress a real scientist.

Cumrun Vafa who coined the term "Swampland" – when I had the office next to him – has naturally become the most important KKLT skeptic. He's not really "negative". Instead, he offers an alternative that could be much more wonderful, a new deep principle that would ban the de Sitter vacua and that could do much more. This particular principle banning the de Sitter vacua remains unproven so it is a conjecture. On the other hand, the idea is that it could be established in the future and it could become analogous to the uncertainty principle (although probably less fundamental than the uncertainty principle).

So that claim about the non-existence of de Sitter vacua could be a deep principle or a theorem, if you wish.

Vafa's "Swampland" is a term for some ugly theories that look consistent as non-gravitational field theories but there actually exists no way how to extend them to UV-complete theories that are coupled to quantum gravity. Now, the previous sentence ended with the word "quantum gravity". Sometimes, the phrase "string theory" is inserted to the definition. These two definitions could be in principle unequivalent. But Cumrun Vafa, like your humble correspondent, is a full-blown string theory believer so at the end, "string theory" and [consistent] "quantum gravity" are assumed to be synonyms.

The equivalence of "string theory" and "quantum gravity" isn't quite well-established, partly because neither of the two terms is sufficiently rigorously defined ;-), but their equivalence is consistent with everything we know (modulo questions about whether Witten's monstrous AdS3 theory or Vasiliev theory are vacua of string theory). If these two terms were different, one would have to be careful about the two definitions of "Swampland". But I guess that in that case, the term "Swampland" could be less powerful than believed by Cumrun Vafa now, anyway.

The weak gravity conjecture is probably the most widely accepted example of the "Swampland" constraints. In effective field theories, the coupling constant \(g\) and the masses \(m\) of charged particles may be pretty much arbitrary. But if our paper is right, there must exist light enough charged particles, with masses \(m\lt g M_{\rm Planck}\) – so light that the gravity between two copies of such a particle is weaker than the electrostatic or analogous repulsion between them. In field theory, you could think that you may keep masses \(m,M_{\rm Planck}\) fixed and tune \(g\) very close to zero. But in string theory and quantum gravity, it seems forbidden to make \(g\) too small. In our Universe, electrons obey this inequality safely – the gravitational force between two electrons is some 42–43 orders of magnitude weaker than the electromagnetic one – but the point is that the direction of this inequality isn't a coincidence. It holds in general.
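Just to make the electron case quantitative – a back-of-the-envelope check in R, not anything from the paper – one can compare the electrostatic repulsion and the gravitational attraction between two electrons:

k  <- 8.988e9      # Coulomb constant, N m^2 / C^2
e  <- 1.602e-19    # elementary charge, C
G  <- 6.674e-11    # Newton constant, N m^2 / kg^2
me <- 9.109e-31    # electron mass, kg

# ratio of electric repulsion to gravitational attraction between two electrons;
# the separation cancels since both forces fall off as 1/r^2
ratio <- (k * e^2) / (G * me^2)
ratio          # about 4.2e42
log10(ratio)   # about 42.6

So the electron sits very comfortably on the allowed side of the inequality.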

But Cumrun – and later others – have proposed lots of similar general principles, general inequalities and qualitative rules that tell you that quantum gravity or string theory is very constraining, predictive, and implies certain restrictions that could never be derived from the low-energy effective field theories themselves.

The constructions of de Sitter vacua in string theory look disputable and suspect; they always have some really weak point. Not only are the precise quantitative features of the de Sitter vacua impossible to calculate exactly – which could be blamed on the absence of SUSY – but even their very existence seems to be unprovable due to delicate points. And, as Vafa and Van Riet said, not only de Sitter vacua with a tiny cosmological constant require some faith because of some weak points in the construction. Even vacua with a cosmological constant of \(0.01\) in the Planck units are impossible to find reliably – and not only in \(D=4\) but in any other number of large spacetime dimensions.

Cumrun proposes that all such constructions are ultimately wrong due to some delicate but fatal bugs. And the proof why each of these proposed vacua is actually wrong could be analogous to the proofs that you can't measure \(X\) and \(P\) accurately and simultaneously with certain gadgets. Einstein has proposed lots of ways to disprove the uncertainty principle in the 1920s but all of them were ultimately wrong. Since the 1930s, the frantic activity to disprove the uncertainty principle faded away – much like the efforts to build a perpetuum mobile faded a century earlier. It seems possible that Vafa's no-go-conjecture becomes a no-go-theorem and it will be viewed as equally "obvious" an argument against all de Sitter constructions – just like we find it obvious that attempts to construct the perpetuum mobile or to violate the uncertainty principle are doomed.

The heated discussion

OK, go to 1:30:15 in the Van Riet talk. Thomas just completed his presentation and the first question came from Arthur Hebecker, a participant from Heidelberg, Germany (check that the voice is indeed Hebecker's).

Hebecker said that all of the stuff that Van Riet and others (including Vafa etc.) are doing is "very dangerous". Now, this was already a pair of words that made me laugh nervously. How can a pure theorist's work be "very dangerous"? Such a complaint smells like a complaint by the Inquisition in the Middle Ages or by the ideologues in the Nazi or communist totalitarian systems, or by the climate alarmists who don't like when a climatologist does a real, honest scientific work – like complaints by powerful regimes that sometimes found or find a discipline of science "very dangerous" (meaning for their power). Sorry but the search for the truth cannot be "very dangerous" in general. Instead, the people who have a psychological (or existential) problem with the search for the truth are dangerous!

Why is this stuff said to be very dangerous?

Hebecker said that for 15 years, KKLT was presented as a major result of string theory. So if now, after 15 years, KKLT lost its trustworthiness or were claimed to be invalid, German string theorists such as himself would lose the last tiny pieces of credit they have in the German public (or, somewhat more precisely, the German-Turkish-Syrian-Afghani-Sudanese... public) and his job etc. could be threatened. Don't forget that the German (=Aryan) Physics basically dismisses all of theoretical physics as a Jewish conspiracy. So this mustn't be allowed, he pretty much explicitly said, and string theorists are obliged to insist that KKLT is the final word, despite all their uncertainties and counter-arguments.

Regular TRF readers hopefully know that Hebecker's rant is a sequence of assertions I would never accept. Thankfully enough, Hebecker was the only one who took this attitude. Vafa, Van Riet, Bena, and probably others were very clear about the basic point. I am sorry Mr Hebecker but the scientist's job is to search for the truth about Nature. A real scientist simply cannot take possible public reactions into account when he is deciding which theories or answers to questions are right and which of them are wrong!

Hebecker looked really worried. All the people are against him, pushing him to defend KKLT as the holy word. Because of some bizarre causal relationships, a part of the danger was a paper by Lavinia Heisenberg et al. that was published on the same day, August 8th.

Cumrun Vafa was relaxed throughout the discussion – although he had to use the term "defamation" when Hebecker claimed that Vafa's alternative picture is far less well-established than the KKLT. (Well, I would probably agree it's somewhat less well-established. Surely a much smaller number of persuasive and realistic enough formulae for the potential energy has been computed in Vafa's picture than in the KKLT framework.)

But concerning the broader, sociological point, Vafa and others were clear. It's your problem how you deal with the criticisms, when your friends are giving you a hard time. At the end, Hebecker yelled that those were not his friends :-). Like others, I had to laugh. Let me analyze that funny word. First, I know very well that Vafa uses this term often – it's a face of his warm character. But I think that there are actually two reasons why Vafa chose the word "friends":
  1. all other humans are friends, Vafa wants everyone to love each other – OK, I've mentioned that
  2. those people were referred to as Hebecker's friends because they're at least approximately peers – if they're not even Hebecker's peers, it's clear that there's no way Hebecker could incorporate their views into his research because their views probably don't make any sense
A funny aspect of Vafa's words "your friends" is that at least one of the authors of the paper that Hebecker has complained about, Robert Brandenberger, may be said to be Vafa's friend, not Hebecker's, of course. They pioneered the string gas cosmology together – incidentally another thing I tend to be skeptical about but I would never agree with the status of a heresy for this program in cosmology.

Also, Vafa, Van Riet, and Bena gave two reasons to Hebecker why he shouldn't adjust his opinions about the KKLT according to the "wind":
  1. it's a matter of scientific integrity not to be pushed around by the societal pressure. If needed, a scientist needs to be burned at the stake for the truth. If Hebecker has a psychological problem with being burned at the stake for the truth, he should resign because this is a part of his job
  2. the causal relationships between opinions and criticisms that Hebecker believes are probably fantasies, anyway. These friends (Vafa's word; I am even nicer than Cumrun but I would probably use the synonym "scum that should be concentrated in camps") would probably give him a hard time regardless of the technical details about the new results in string theory
The first point is really the fundamental one ethically. The second point may be used if Hebecker doesn't have a sufficient amount of scientific integrity and he's not willing to be burned at the stake by the anti-string crackpots (assuming that these savages have already mastered the secrets of fire). Even if Hebecker were just a spineless opportunist who cares about his chair – and that's exactly the impression that everyone who is not a spineless opportunist had to get after having listened to Hebecker's whining – his strategy would make no sense pragmatically, either.

Find a new KKLT-like result, you will be bashed by the anti-string scum. (As Bena and Van Riet said, the hatred against the big landscape was a primary starting point for the anti-string hysteria since 2006.) Find a result that weakens or even disproves KKLT, you will be bashed by the anti-string scum. Increase the unity in the views among string theorists, you will be bashed by the anti-string scum. Increase the diversity of views and research directions, you will be bashed by the anti-string scum. The point is that the anti-string scum will bash you regardless of any facts and results because they are scum!

Just evolve. Develop a thicker skin. I am still allergic to all these people's demagogy and vitriol but admit it, Mr Hebecker – I am more exposed to this stuff than you and maybe more than the world's string theorists combined.

Just look how the archetype of these scumbags, Peter W*it, has reacted to events in string theory. He has used everything as a reason to bash string theory and string theorists. You simply cannot take this scum seriously. This scum has no knowledge, no integrity, no consistency, no guarantees that anything you do will be beneficial. So even if you were spineless and ready to cooperate with some of the most evil people in the world in order to help yourself personally, it would make absolutely no sense to try to please this scum.

Incidentally, what I find ironic is that Mr Hebecker tries to present himself as some kind of a defender of string theory and its good name in the broader public. But how can he be a defender of string theory if he pretty much teams up with the most vitriolic critics of string theory? He has basically attacked Van Riet, Vafa, and others by the statement that their research is "very dangerous". But has he ever attacked the scum that spreads lies about string theory 24 hours a day, 7 days a week? What about Ms Sabine Hossenfelder who is, along with Mr Unzicker, a leading anti-physics crackpot in Germany?

Where are the videos of talks where we can see Mr Hebecker as challenging these two really nasty lying German crackpots? Heidelberg (Hebecker) is just 80 km away from Frankfurt (Hossenfelder). But when Hebecker wants to attack someone for hurting the public image of string theory, he must take a flight to an island that is 6200 km away from Heidelberg. Why? Because Mr Hebecker has no balls and no integrity. Hebecker is a pußy who can be easily trampled upon by bitches such as Ms Hossenfelder. He can only attack nice people who are right, like Vafa or Van Riet.

That's pretty bad.

Now, Mr Peter W*it – who has previously bashed string theory and string theorists for their uniform opinions – is bashing string theorists and string theory for their diverse opinions. Those who can't figure out that W*it is a despicable pile of šit must be really brain-dead, or they must be as nasty jerks as he is. I won't comment on these self-contradictory principles behind his criticisms.

But the main ethical or broader, sociological question about the exchange between Hebecker and Team Vafa was whether it's right for a scientist to pay attention to these political questions and demand that their colleagues change their behavior because of some theories about P.R. As I have made very clear, I am totally against such an influence, and so was every speaker during the talk except for Mr Hebecker.

It turns out that the only commenter on Peter W*it's blog who has touched this issue, Bai, agrees with me and Vafa and others:
As a complete outsider to HEP (I am a condensed matter physicist though), I find appalling that issues like the damage being made to a community (in certain parts of the World..) are openly discussed and considered at Scientific events. It makes it so explicit that some of these fellows (I suspect the vast majority..) give a lot of importance to avoiding their University chairs being shaken, their funding being cut, etc, than seeking the ultimate truth in their research. It is very sad to see the kind of game (hard) Science is becoming, and it all sets a terrible example for future generations…
Right – except that the corrupt views weren't voiced by a "vast majority" or "some of these fellows" but by a single German researcher and everyone else disagreed with him and agreed with Bai.

At conferences about condensed matter physics but also particle physics or any discipline of hard science, you are simply not supposed to analyze some implications of your work for the opinions of the public about your field. As a scientist, whether string theorist or condensed matter physicist, you are supposed to go wherever the evidence leads you. If the laymen decide not to fund you according to your picture of the truth that you tell them, they have the right to do so. If they burn you at the stake for saying the truth, they have the right to do so – assuming that they first change the constitution and reintroduce this kind of punishment, of course.

But as long as you are a scientist, you simply mustn't adapt your views according to the speculations about the public perception of your results. If you do so, you cease to be a scientist.

Needless to say, Peter W*it didn't find the whining about the P.R. implications of the scientific results troubling:
Bai, this exchange was interesting precisely because these issues, while important to people, are virtually never publicly discussed. This is not something that regularly happens. [...]
It's really what excites him. All these people – Sm*lin, W*it, Hossenfelder etc. – are just fake pieces of šit who don't give a damn about the science and who know nothing about it. Everything that they are doing is politics. They have figured out that even though they know virtually nothing, they may become the go-to experts among the laymen who really have no clue about science whatsoever (hat tip: Gordon).

So they are offended by every scientific conference where this political dirt isn't discussed – and even by every conference where this šit doesn't play the decisive role.

Just to be sure, I agree that this political and P.R. šit has "become important to people", including researchers who deny it, but I think it's very wrong that it has become important.

In the rest of the comment, W*it wrote that Vafa "rightfully" said that it was right for a scientist not to allow the political ramifications to change his opinions. But it is very clear that W*it doesn't really believe it. This part of the comment contradicts absolutely everything he had written before. The whole point of his disgusting newest blog post was to celebrate that a discussion at a conference was affected by the vitriol that scum like himself, W*it, have spread. Finally, they have made a difference and influenced a question-and-answer section after a talk at a workshop. Should we congratulate these nobodies?

Sorry but there is nothing to celebrate. Decent people will only celebrate the work by W*it and stuff like him when this work is properly supervised by guards in a labor camp.

And that's the memo.

by Luboš Motl (noreply@blogger.com) at August 18, 2018 08:14 AM

August 17, 2018

Christian P. Robert - xi'an's og

approximative Laplace

I came across this question on X validated that wondered about one of our examples in Monte Carlo Statistical Methods. We have included a section on Laplace approximations in the Monte Carlo integration chapter, with a bit of reluctance on my side as this type of integral approximation does not directly connect to Monte Carlo methods. Even less so in the case of the example, as we aimed at replacing a coverage probability for a Gamma distribution with a formal Laplace approximation. Formal due to the lack of asymptotics, besides the length of the interval (a,b) whose probability is approximated. Hence, on top of the typos, the point of the example is not crystal clear, in that it does not show much more than that the step-function approximation to the function converges as the interval length goes to zero. For instance, using instead a flat approximation produces an almost as good approximation:

>  xact(5,2,7,9)
[1] 0.1933414
> laplace(5,2,7,9)
[1] 0.1933507
> flat(5,2,7,9)
[1] 0.1953668

What may be more surprising is the resilience of the approximation as the width of the interval increases:

> xact(5,2,5,11)
[1] 0.53366
> lapl(5,2,5,11)
[1] 0.5354954
> plain(5,2,5,11)
[1] 0.5861004
> quad(5,2,5,11)
[1] 0.434131
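
For the record, the function definitions are not reproduced in the post; the following is a minimal R reconstruction (mine, not the book's code) that matches the numbers quoted above, assuming a Gamma(shape = al, scale = be) density and the interval (a, b); the quadratic variant quad is not reconstructed:

xact <- function(al, be, a, b)      # exact coverage probability of (a, b)
  pgamma(b, shape = al, scale = be) - pgamma(a, shape = al, scale = be)

laplace <- function(al, be, a, b) { # Gaussian expansion of the log-density at its mode
  mo <- (al - 1) * be               # mode of the Gamma(al, be) density
  si <- be * sqrt(al - 1)           # 1 / sqrt(-(log f)''(mo))
  dgamma(mo, shape = al, scale = be) * si * sqrt(2 * pi) *
    (pnorm((b - mo) / si) - pnorm((a - mo) / si))
}

flat <- function(al, be, a, b)      # density frozen at the midpoint of (a, b)
  dgamma((a + b) / 2, shape = al, scale = be) * (b - a)

With these definitions the three values of the first block are recovered, and lapl and plain in the second block behave as the same laplace and flat under different names.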

by xi'an at August 17, 2018 10:18 PM

Peter Coles - In the Dark

Cosmology Big Brother

I saw on Twitter today that the new series of Celebrity Big Brother has just started, though looking at the list of inmates, er, housemates, I’m not sure whether the producers of this show understand the meaning of the word `celebrity’. At any rate, I’ve never heard of most of them.

I get the feeling that the Big Brother franchise may be getting a little tired, so I thought I’d pitch a new variant in order to boost the flagging ratings.

In Cosmology Big Brother a group of wannabe cosmologists live together in a specially-constructed house (with lots of whiteboards) isolated from the outside world (i.e. the arXiv). As the series progresses the furniture and rooms are gradually moved further apart, the temperature of the central heating is turned down, and the contents of the house become progressively more disordered.

Housemates are regularly voted out, at which point they have to enter the `real world’ (i.e. get a job in data science). Eventually only one person remains and whoever that is is awarded a research grant. They can then spend the rest of their life combining their study of cosmology with the usual activities of a Big Brother winner, e.g. opening supermarkets.

by telescoper at August 17, 2018 04:02 PM

Jon Butterworth - Life and Physics

The Standard Model at Fifty

At the beginning of June 2018, I gave an (academic) talk on the discovery of the Higgs boson at a meeting at Case Western Reserve University (Cleveland, Ohio, USA) to celebrate fifty years of the Standard Model – the SM@50.

The list of speakers was without doubt the most eminent collection of physicists I have ever been a part of. In alphabetical order, and mostly with links to the reasons for their eminence:



Most of their signatures are on the poster above.

I think any particle physicist, and probably most physicists, will recognise a few of their scientific heroes there. I did, enough to give me a hint of imposter syndrome at the start of my talk before I remembered that I was standing there not as an individual but as a representative of thousands, so I had better get on with it and not let the side down.

I think I did ok. If you want you can judge that for yourself by watching the recording below, or perhaps more interestingly see the rest of the talks, since they are all now available here. They vary in academic level and in presentational quality (being a great physicist is not the same thing as being a great speaker!) but some of them are really wonderful and also quite accessible I think.

Some of us also did a reddit AMA.

Many thanks to Case, especially Glenn and Bryan, for putting on such a memorable meeting (and for inviting me to it).

by Jon Butterworth at August 17, 2018 12:58 PM

ZapperZ - Physics and Physicists

The Quantum Form of General Relativity's Equivalence Principle?
This is an interesting approach to one of the dilemmas being faced in physics, which is trying to reconcile General Relativity, or gravity in particular, with the quantum mechanical picture. We have had String Theory and Loop Quantum Gravity, etc. going through this effort. But in this paper that just got published in Nature[1], the authors tackled it in a different way, by examining Einstein's equivalence principle and formulating the QM version of it, which is different from the classical version.

The ArXiv version of the paper can be found here. However, I have not verified if it is identical to the published version. The ArXiv manuscript was submitted in 2015, while the version in Nature Physics has only been published recently (2018). There don't appear to be any updates to this version since its submission to ArXiv.

The best part about this is that the predictions are testable (gives dirty look at String Theory).

I'll let you explore this and see what you think.

Zz.

[1] Magdalena Zych, Caslav Brukner, Nature Physics, https://www.nature.com/articles/s41567-018-0197-6

by ZapperZ (noreply@blogger.com) at August 17, 2018 12:52 PM

August 16, 2018

Christian P. Robert - xi'an's og

optimal approximations for importance sampling

“…building such a zero variance estimator is most of the times not practical…”

As I was checking [while looking at Tofino inlet from my rental window] on optimal importance functions following a question on X validated, I came across this arXived note by Pantaleoni and Heitz, where they suggest using weighted sums of step functions to reach minimum variance. However, the difficulty with probability densities that are step functions is that they necessarily have a compact support, which thus makes them unsuitable for targeted integrands with non-compact support, and makes the purpose of the note and the derivation of the optimal weights moot. The note points out its connection with the reference paper of Veach and Guibas (1995) as well as He and Owen (2014), a follow-up to the other reference paper by Owen and Zhou (2000).
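As a reminder of what the quoted warning refers to – a toy R illustration of mine, not the note's – the zero-variance importance function for a positive integrand h against a density f is proportional to h times f, in which case every importance weight equals the integral itself; the catch is that its normalising constant is the very quantity one is trying to estimate:

set.seed(123)
n <- 1e4
# target: X ~ Exp(1), integrand h(x) = x, so the true value is E[X] = 1
x0 <- rexp(n)
mean(x0)                                  # vanilla Monte Carlo, noisy

# zero-variance proposal: g(x) proportional to x exp(-x), i.e. a Gamma(2, 1) density
x1 <- rgamma(n, shape = 2, rate = 1)
w  <- x1 * dexp(x1) / dgamma(x1, shape = 2, rate = 1)
mean(w)                                   # exactly 1 for every draw
var(w)                                    # zero up to rounding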

by xi'an at August 16, 2018 10:18 PM

Emily Lakdawalla - The Planetary Society Blog

China's mission to the far side of the Moon will launch in December
The Chang'e-4 spacecraft will make use of a newly launched relay satellite to stay in touch with Earth.

August 16, 2018 07:09 PM

Emily Lakdawalla - The Planetary Society Blog

National Academies: NASA needs a plan for Mars
Though progress is being made on Mars Sample Return, a new report from the National Academies recommends NASA have a long-term plan for robotic Mars exploration, and work to ensure communications infrastructure is maintained at the Red Planet. These recommendations largely align with those made by The Planetary Society in a report released in 2017.

August 16, 2018 06:44 PM

Peter Coles - In the Dark

Being and Blogging

I saw this cartoon in The Oldie (which I buy for the crossword) and it really struck home. I can think of quite a few occasions when I’ve met a complete stranger who, by being a reader of this blog, seemed to know more about me than I do!

There have been a few times when I’ve posted personal things on here that I probably should have kept to myself. On the other hand, a lot of things have happened over the last ten years or so that I haven’t even mentioned on here. Believe me, you don’t want to know!

I know I’ve also written tactless items about things I would have been wiser to have left alone and have sometimes caused offence as a result. Everybody makes mistakes, of course, but if you make them on the internet they’re in the public domain forever. Part of the problem is that it’s so easy to forget that people actually read this stuff!

On the other hand, I do think at least some of the items I’ve posted have had a positive effect and that, in my mind at least, far outweighs the negatives.

Anyway, this rambling post is just a product of the process I go through from time to time, wondering how much longer I want to keep this blog going. I still don’t know the answer..

by telescoper at August 16, 2018 06:23 PM

Christian P. Robert - xi'an's og

a free press needs you [reposted]

“Criticizing the news media — for underplaying or overplaying stories, for getting something wrong — is entirely right. News reporters and editors are human, and make mistakes. Correcting them is core to our job. But insisting that truths you don’t like are “fake news” is dangerous to the lifeblood of democracy. And calling journalists the “enemy of the people” is dangerous, period.”

by xi'an at August 16, 2018 12:18 PM

Peter Coles - In the Dark

The Problem of the Moving Triangle

I found this nice geometric puzzle a few days ago on Twitter. It’s not too hard, but I thought I’d put it in the ‘Cute Problems’ folder.

In the above diagram, the small equilateral triangle moves about inside the larger one in such a way that it keeps the orientation shown. What can you say about the sum a+b+c?

Answers through the comments box please, and please show your working!

by telescoper at August 16, 2018 10:58 AM

August 15, 2018

John Baez - Azimuth

Open Petri Nets (Part 1)

Jade Master and I have just finished a paper on open Petri nets:

• John Baez and Jade Master, Open Petri nets.

Abstract. The reachability semantics for Petri nets can be studied using open Petri nets. For us an ‘open’ Petri net is one with certain places designated as inputs and outputs via a cospan of sets. We can compose open Petri nets by gluing the outputs of one to the inputs of another. Open Petri nets can be treated as morphisms of a category, which becomes symmetric monoidal under disjoint union. However, since the composite of open Petri nets is defined only up to isomorphism, it is better to treat them as morphisms of a symmetric monoidal double category \mathbb{O}\mathbf{pen}(\mathrm{Petri}). Various choices of semantics for open Petri nets can be described using symmetric monoidal double functors out of \mathbb{O}\mathbf{pen}(\mathrm{Petri}). Here we describe the reachability semantics, which assigns to each open Petri net the relation saying which markings of the outputs can be obtained from a given marking of the inputs via a sequence of transitions. We show this semantics gives a symmetric monoidal lax double functor from \mathbb{O}\mathbf{pen}(\mathrm{Petri}) to the double category of relations. A key step in the proof is to treat Petri nets as presentations of symmetric monoidal categories; for this we use the work of Meseguer, Montanari, Sassone and others.

I’m excited about this, especially because our friends at Statebox are planning to use open Petri nets in their software. They’ve recently come out with a paper too:

• Fabrizio Romano Genovese and Jelle Herold, Executions in (semi-)integer Petri nets are compact closed categories.

Petri nets are widely used to model open systems in subjects ranging from computer science to chemistry. There are various kinds of Petri net, and various ways to make them ‘open’, and my paper with Jade only handles the simplest. But our techniques are flexible, so they can be generalized.

What’s an open Petri net? For us, it’s a thing like this:

The yellow circles are called ‘places’ (or in chemistry, ‘species’). The aqua rectangles are called ‘transitions’ (or in chemistry, ‘reactions’). There can in general be lots of places and lots of transitions. The bold arrows from places to transitions and from transitions to places complete the structure of a Petri net. There are also arbitrary functions from sets X and Y into the set of places. This makes our Petri net into an ‘open’ Petri net.

We can think of open Petri nets as morphisms between finite sets. There’s a way to compose them! Suppose we have an open Petri net P from X to Y, where now I’ve given names to the points in these sets:

We write this as P \colon X \nrightarrow Y for short, where the funky arrow reminds us this isn’t a function between sets. Given another open Petri net Q \colon Y \nrightarrow Z, for example this:

the first step in composing P and Q is to put the pictures together:

At this point, if we ignore the sets X,Y,Z, we have a new Petri net whose set of places is the disjoint union of those for P and Q.

The second step is to identify a place of P with a place of Q whenever both are images of the same point in Y. We can then stop drawing everything involving Y, and get an open Petri net QP \colon X \nrightarrow Z, which looks like this:

Formalizing this simple construction leads us into a bit of higher category theory. The process of taking the disjoint union of two sets of places and then quotienting by an equivalence relation is a pushout. Pushouts are defined only up to canonical isomorphism: for example, the place labeled C in the last diagram above could equally well have been labeled D or E. This is why to get a category, with composition strictly associative, we need to use isomorphism classes of open Petri nets as morphisms. But there are advantages to avoiding this and working with open Petri nets themselves. Basically, it’s better to work with things than mere isomorphism classes of things! If we do this, we obtain not a category but a bicategory with open Petri nets as morphisms.
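To make the gluing step concrete, here is a minimal Python sketch of composition by a pushout of the sets of places, computed with union-find. The class, the representation of transitions and all names are my own illustration under those assumptions, not the paper's formalism:

from dataclasses import dataclass

@dataclass
class OpenPetriNet:
    places: set          # names of places
    transitions: list    # entries (name, tuple of input places, tuple of output places)
    inputs: dict         # map from the boundary set X into places
    outputs: dict        # map from the boundary set Y into places

def compose(P, Q):
    """Glue P's outputs to Q's inputs along their common boundary set Y."""
    assert set(P.outputs) == set(Q.inputs), "boundary sets must match"
    # tag the places so the disjoint union really is disjoint
    p = {a: ('P', a) for a in P.places}
    q = {a: ('Q', a) for a in Q.places}
    parent = {v: v for v in list(p.values()) + list(q.values())}

    def find(x):                           # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    # pushout step: identify the two images of each boundary point y
    for y in P.outputs:
        parent[find(p[P.outputs[y]])] = find(q[Q.inputs[y]])

    new_places = {find(v) for v in parent}
    new_transitions = (
        [(n, tuple(find(p[a]) for a in i), tuple(find(p[a]) for a in o))
         for n, i, o in P.transitions]
        + [(n, tuple(find(q[a]) for a in i), tuple(find(q[a]) for a in o))
           for n, i, o in Q.transitions]
    )
    return OpenPetriNet(
        places=new_places,
        transitions=new_transitions,
        inputs={x: find(p[P.inputs[x]]) for x in P.inputs},
        outputs={z: find(q[Q.outputs[z]]) for z in Q.outputs},
    )

# toy usage: a net turning A into B, glued to a net turning B into C
P = OpenPetriNet({'A', 'B'}, [('t1', ('A',), ('B',))], inputs={1: 'A'}, outputs={2: 'B'})
Q = OpenPetriNet({'B', 'C'}, [('t2', ('B',), ('C',))], inputs={2: 'B'}, outputs={3: 'C'})
QP = compose(P, Q)   # three places, two transitions, open from {1} to {3}

The arbitrary choice of representative returned by find is exactly the C-versus-D-or-E arbitrariness just mentioned: the composite is only defined up to isomorphism.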

However, this bicategory is equipped with more structure. Besides composing open Petri nets, we can also ‘tensor’ them via disjoint union: this describes Petri nets being run in parallel rather than in series. The result is a symmetric monoidal bicategory. Unfortunately, the axioms for a symmetric monoidal bicategory are cumbersome to check directly. Double categories turn out to be more convenient.

Double categories were introduced in the 1960s by Charles Ehresmann. More recently they have found their way into applied mathematics. They have been used to study various things, including open dynamical systems:

• Eugene Lerman and David Spivak, An algebra of open continuous time dynamical systems and networks.

open electrical circuits and chemical reaction networks:

• Kenny Courser, A bicategory of decorated cospans, Theory and Applications of Categories 32 (2017), 995–1027.

open discrete-time Markov chains:

• Florence Clerc, Harrison Humphrey and P. Panangaden, Bicategories of Markov processes, in Models, Algorithms, Logics and Tools, Lecture Notes in Computer Science 10460, Springer, Berlin, 2017, pp. 112–124.

and coarse-graining for open continuous-time Markov chains:

• John Baez and Kenny Courser, Coarse-graining open Markov processes. (Blog article here.)

As noted by Shulman, the easiest way to get a symmetric monoidal bicategory is often to first construct a symmetric monoidal double category:

• Mike Shulman, Constructing symmetric monoidal bicategories.

The theory of ‘structured cospans’ gives a systematic way to build symmetric monoidal double categories—Kenny Courser and I are writing a paper on this—and Jade and I use this to construct the symmetric monoidal double category of open Petri nets.

A 2-morphism in a double category can be drawn as a square like this:

We call X_1,X_2,Y_1 and Y_2 ‘objects’, f and g ‘vertical 1-morphisms’, M and N ‘horizontal 1-cells’, and \alpha a ‘2-morphism’. We can compose vertical 1-morphisms to get new vertical 1-morphisms and compose horizontal 1-cells to get new horizontal 1-cells. We can compose the 2-morphisms in two ways: horizontally and vertically. (This is just a quick sketch of the ideas, not the full definition.)

In our paper, Jade and I start by constructing a symmetric monoidal double category \mathbb{O}\mathbf{pen}(\textrm{Petri}) with:

• sets X, Y, Z, \dots as objects,

• functions f \colon X \to Y as vertical 1-morphisms,

• open Petri nets P \colon X \nrightarrow Y as horizontal 1-cells,

• morphisms between open Petri nets as 2-morphisms.

(Since composition of horizontal 1-cells is associative only up to an invertible 2-morphism, this is technically a pseudo double category.)

What are the morphisms between open Petri nets like? A simple example may help give a feel for this. There is a morphism from this open Petri net:

to this one:

mapping both primed and unprimed symbols to unprimed ones. This describes a process of ‘simplifying’ an open Petri net. There are also morphisms that include simple open Petri nets in more complicated ones, etc.

This is just the start. Our real goal is to study the semantics of open Petri nets: that is, how they actually describe processes! More on that soon!


Part 1: the double category of open Petri nets.

Part 2: the reachability semantics for open Petri nets.

Part 3: the free symmetric monoidal category on a Petri net.

by John Baez at August 15, 2018 03:24 PM

Emily Lakdawalla - The Planetary Society Blog

Here are some recent postcards from Jupiter
Let's check in on NASA's Juno spacecraft, which completed its 14th close flyby of Jupiter last month.

August 15, 2018 11:00 AM

August 14, 2018

Jon Butterworth - Life and Physics

Cartographic Errors

Despite my best efforts and those of several others, there are, inevitably, some errors in A Map of the Invisible/Atom Land. Apologies.

When they get spotted and reported, they get fixed in future editions. Where they might cause confusion to readers who have older editions, I am collecting them on this page. I’ll also add any notes or queries which come up and seem like they might be interesting, just as I did with Smashing Physics.

by Jon Butterworth at August 14, 2018 05:04 PM

ZapperZ - Physics and Physicists

MinutePhysics Special Relativity Chapter 8
If you missed Chapter 7 of this series, check it out here.

This time, the topic is on the ever-popular Twin Paradox (which really isn't a paradox since there is a logical explanation for it).



You can compare this explanation with that given by Don Lincoln a while back. I find Don's video clearer, since I can comprehend the math.

Zz.

by ZapperZ (noreply@blogger.com) at August 14, 2018 01:43 PM

CERN Bulletin

CERN Rocked at the Hardronic

As every year in the summer, over one weekend, the Prevessin site becomes a mecca for rock music, welcoming not only CERN staff members but also visitors.

The 2018 version of the Hardronic Festival took place on 4 August in a relaxed atmosphere, despite the overwhelming heat, which was greatly eased by the welcome shade of the trees, an abundance of beverages and the food trucks on hand.

For this 27th edition, 13 rock and pop groups followed one another on two stages, providing more than eight solid hours of music and attracting young and old festival-goers in large numbers, who danced and sang the night away.

The quality of the programming and organization allowed everyone to share a beautiful evening of relaxation and music.

This CERN-made festival has become a summer event not to be missed.

Sponsored by the Staff Association, the Hardronic Festival is organised by the Music Club and allows talented musicians to perform. It also highlights the talent to be found within the CERN MusiClub and in the local French and Swiss area.

Professionally organised, at a pleasant and easily accessible venue, the festival nevertheless required external visitors to register prior to the event in order to gain access.

A big thank you to Arek and Django for the festival, and to all the volunteers and the technical team; they all did an excellent job.

Put it in your diaries for next year!

Hardronic Festival: http://hardronic.web.cern.ch/Hardronic/2018/

August 14, 2018 11:08 AM

Emily Lakdawalla - The Planetary Society Blog

The Venus controversy
A lack of new missions keeps scientists guessing on what shaped the planet’s surface.

August 14, 2018 10:00 AM

CERN Bulletin

CERN Staff Association and CERN Photo Club Summer Photography Competition

“Summer’s lease hath all too short a date” William Shakespeare

Photography is a wonderful medium to preserve our summer memories. If there is a summer photo you have taken and are particularly proud of, why not enter the CERN Staff Association and CERN Photo Club Summer Photography Competition? Take advantage of your holidays to send us your photos!

The competition is open to all Members of Personnel at CERN and Members of CERN Clubs and closes on 30 September 2018.

There are two categories:

  • Photo(s) ‘Summer’ in general
  • Photo(s) ‘Summer at CERN’

Entries should be submitted in JPEG digital format, at a resolution suitable for printing the image in A4 format, along with a title and a short description explaining the image and what it represents for you, to photo.contest@cern.ch with the subject ‘CERN STAFF ASSOCIATION PHOTOGRAPHY COMPETITION 2018’.

All participants of the competition accept to comply with the terms and conditions.

Numerous prizes kindly offered by CERN Clubs and the Staff Association are up for grabs!

Wishing you all a great holiday and a happy summer!

August 14, 2018 08:08 AM

August 13, 2018

Andrew Jaffe - Leaves on the Line

Planck: Demographics and Diversity

Another aspect of Planck’s legacy bears examining.

A couple of months ago, the 2018 Gruber Prize in Cosmology was awarded to the Planck Satellite. This was (I think) a well-deserved honour for all of us who have worked on Planck during the more than 20 years since its conception, for a mission which confirmed a standard model of cosmology and measured the parameters which describe it to accuracies of a few percent. Planck is the latest in a series of telescopes and satellites dating back to the COBE Satellite in the early 90s, through the MAXIMA and Boomerang balloons (among many others) around the turn of the 21st century, and the WMAP Satellite (The Gruber Foundation seems to like CMB satellites: COBE won the Prize in 2006 and WMAP in 2012).

Well, it wasn’t really awarded to the Planck Satellite itself, of course: 50% of the half-million-dollar award went to the Principal Investigators of the two Planck instruments, Jean-Loup Puget and Reno Mandolesi, and the other half to the “Planck Team”. The Gruber site officially mentions 334 members of the Collaboration as recipients of the Prize.

Unfortunately, the Gruber Foundation apparently has some convoluted rules about how it makes such group awards, and the PIs were not allowed to split the monetary portion of the prize among the full 300-plus team. Instead, they decided to share the second half of the funds amongst “43 identified members made up of the Planck Science Team, key members of the Planck editorial board, and Co-Investigators of the two instruments.” Those words were originally on the Gruber site but in fact have since been removed — there is no public recognition of this aspect of the award, which is completely appropriate as it is the whole team who deserves the award. (Full disclosure: as a member of the Planck Editorial Board and a Co-Investigator, I am one of that smaller group of 43, chosen not entirely transparently by the PIs.)

I also understand that the PIs will use a portion of their award to create a fund for all members of the collaboration to draw on for Planck-related travel over the coming years, now that there is little or no governmental funding remaining for Planck work, and those of us who will also receive a financial portion of the award will also be encouraged to do so (after, unfortunately, having to work out the tax implications of both receiving the prize and donating it back).

This seems like a reasonable way to handle a problem with no real fair solution, although, as usual in large collaborations like Planck, the communications about this left many Planck collaborators in the dark. (Planck also won the Royal Society 2018 Group Achievement Award which, because there is no money involved, could be uncontroversially awarded to the ESA Planck Team, without an explicit list. And the situation is much better than for the Nobel Prize.)

However, this seemingly reasonable solution reveals an even bigger, longer-standing, and wider-ranging problem: only about 50 of the 334 names on the full Planck team list (roughly 15%) are women. This is already appallingly low. Worse still, none of the 43 formerly “identified” members officially receiving a monetary prize are women (although we would have expected about 6 given even that terrible fraction). Put more explicitly, there is not a single woman in the upper reaches of Planck scientific management.

This terrible situation was also noted by my colleague Jean-Luc Starck (one of the larger group of 334) and Olivier Berné. As a slight corrective to this, it was refreshing to see Nature’s take on the end of Planck dominated by interviews with young members of the collaboration including several women who will, we hope, be dominating the field over the coming years and decades.

by Andrew at August 13, 2018 10:07 PM

Emily Lakdawalla - The Planetary Society Blog

Space Policy & Advocacy Program Quarterly Report - July 2018
The Planetary Society's Space Policy and Advocacy team publishes quarterly reports on their activities, actions, priorities, and goals in service of their efforts to promote space science and exploration in Washington, D.C.

August 13, 2018 10:03 PM

CERN Bulletin

Exhibition

 

E=mc² and Stars

 

Marizeth Baumgarten

 

From 3 to 14 September
CERN Meyrin, Main Building

A creator of emotions, carried away by inspiration, the artist uses surrealism to express her creativity and to provoke feelings, emotions and reactions, such as pleasure and fantasy, like your dreams, and invites you to use your imagination.

Think, reflect and be part of the emotions that her work conveys.

Her works are full of movement, dynamism, lyrical symbolism and romanticism, real or unreal, sometimes provocative and ambiguous.

In surrealism, as in art, there are no universal concepts.

Every age, every culture has its own.

Each artist expresses them artistically from their own point of view.

In surrealism there are no limitations, barriers or rules; everything is permitted.

The technique, executed with great care, is acrylic and oil.

For more information and access requests: staff.association@cern.ch | +41 22 767 28 19

August 13, 2018 03:08 PM

Axel Maas - Looking Inside the Standard Model

Fostering an idea with experience
In the previous entry I wrote how hard it is to establish a new idea, if the only existing option to get experimental confirmation is to become very, very precise. Fortunately, this is not the only option we have. Besides experimental confirmation, we can also attempt to test an idea theoretically. How is this done?

The best possibility is to set up a situation, in which the new idea creates a most spectacular outcome. In addition, it should be a situation in which older ideas yield a drastically different outcome. This sounds actually easier than it is. There are three issues to be taken care of.

The first two have something to do with a very important distinction: that between a theory and an observation. An observation is something we measure in an experiment or calculate if we play around with models. An observation is always the outcome when we set something up initially and then look at it some time later. The theory should give a description of how the initial and the final stuff are related. This means that for every observation we look for a corresponding theory to give it an explanation. Added to this is the modern idea of physics that there should not be a separate theory for every observation. Rather, we would like to have a unified theory, i.e. one theory which explains all observations. This is not yet the case. But at least we have reduced it to a handful of theories. In fact, for anything going on inside our solar system we need so far just two: the standard model of particle physics and general relativity.

Coming back to our idea, we now have the following problem. Since we do a gedankenexperiment, we are allowed to choose any theory we like. But since we are just a bunch of people with a bunch of computers, we are not able to calculate all the possible observations a theory can describe. Not to mention all possible observations of all theories. And it is here where the problem starts. The older ideas still exist because they are not bad, but rather explain a huge amount of stuff. Hence, for many observations in any theory they will still be more than good enough. Thus, to find spectacular disagreement, we do not only need to find a suitable theory. We also need to find a suitable observation to show disagreement.

And now enters the third problem: we actually have to do the calculation to check whether our suspicion is correct. This is usually not a simple exercise. In fact, the effort needed can make such a calculation a complete master thesis. And sometimes even much more. Only after the calculation is complete do we know whether the observation and theory we have chosen were a good choice. Because only then do we know whether the anticipated disagreement is really there. And it may be that our choice was not good, and we have to restart the process.

Sounds pretty hopeless? Well, this is actually one of the reasons why physicists are famed for their tolerance of frustration. Because such experiences are indeed inevitable. But fortunately it is not as bad as it sounds. And that has something to do with how we choose the observation (and the theory). This I did not specify yet. And just guessing would indeed lead to a lot of frustration.

The thing which helps us to hit the right theory and observation more often than not is insight and, especially, experience. The ideas we have tell us about how theories function. I.e., our insights give us the ability to estimate what will come out of a calculation even without actually doing it. Of course, this will be a qualitative statement, i.e. one without exact numbers. And it will not always be right. But if our ideas are correct, it will usually work out. In fact, if we regularly failed to estimate correctly, this should require us to reevaluate our ideas. And it is our experience which helps us to get from insights to estimates.

This defines our process to test our ideas. And this process can actually be well traced out in our research. E.g. in a paper from last year we collected many such qualitative estimates. They were based on some much older, much cruder estimates published several years back. In fact, the newer paper already included some quite involved semi-quantitative statements. We then used massive computer simulations to test our predictions. They were indeed confirmed as well as possible with the amount of computers we had. This we reported in another paper. This gives us hope to be on the right track.

So, the next step is to enlarge our testbed. For this, we already came up with some new first ideas. However, these will be even more challenging to test. But it is possible. And so we continue the cycle.

by Axel Maas (noreply@blogger.com) at August 13, 2018 02:46 PM

CERN Bulletin

Interfon

Cooperative open to international civil servants. We welcome you to discover the advantages and discounts negotiated with our suppliers either on our website www.interfon.fr or at our information office located at CERN, on the ground floor of bldg. 504, open Monday through Friday from 12.30 to 15.30.

August 13, 2018 02:08 PM

CERN Bulletin

GAC-EPA

The GAC organises drop-in sessions with individual interviews, held on the last Tuesday of each month, except in July and December.

The next session will take place on:

Tuesday 28 August, from 1.30 pm to 4.00 pm
Staff Association meeting room

The following sessions will take place on Tuesdays 25 September, 30 October and 27 November 2018.

The sessions of the Pensioners' Association (Groupement des Anciens) are open to beneficiaries of the Pension Fund (including surviving spouses) and to all those approaching retirement. We warmly invite the latter to join our association by obtaining the necessary documents from the Staff Association.

Information: http://gac-epa.org/
Contact form: http://gac-epa.org/Organization/ContactForm/ContactForm-fr.php

August 13, 2018 02:08 PM

Jon Butterworth - Life and Physics

Life, Physics and Everything

When the Guardian’s science blog network closes, Life & Physics will have been here for eight years. Physics has come a long way in that time, but there is (as always) more to be done…

My sign-off from The Guardian.

by Jon Butterworth at August 13, 2018 11:38 AM

August 12, 2018

Jon Butterworth - Life and Physics

USA Temperature: can I sucker you?

I’m just back from a bit of a busman’s holiday in California, so US weather is on my mind. No anecdotes though – instead, here is an instructive example of the bad kind of data mining.

Open Mind

Suppose I wanted to convince people that temperature in the USA wasn’t going up, it was going down. What would I show? Let’s try yearly average temperature in the conterminous U.S., also known as the “lower 48 states” (I’ll just call it “USA”):

View original post 627 more words

by Jon Butterworth at August 12, 2018 09:05 AM

August 11, 2018

John Baez - Azimuth

The Philosophy and Physics of Noether’s Theorems

 

I’ll be speaking at a conference celebrating the centenary of Emmy Noether’s work connecting symmetries and conservation laws:

The Philosophy and Physics of Noether’s Theorems, 5-6 October 2018, Fischer Hall, 1-4 Suffolk Street, London, UK. Organized by Bryan W. Roberts (LSE) and Nicholas Teh (Notre Dame).

They write:

2018 brings with it the centenary of a major milestone in mathematical physics: the publication of Amalie (“Emmy”) Noether’s theorems relating symmetry and physical quantities, which continue to be a font of inspiration for “symmetry arguments” in physics, and for the interpretation of symmetry within philosophy.

In order to celebrate Noether’s legacy, the University of Notre Dame and the LSE Centre for Philosophy of Natural and Social Sciences are co-organizing a conference that will bring together leading mathematicians, physicists, and philosophers of physics in order to discuss the enduring impact of Noether’s work.

There’s a registration fee, which you can see on the conference website, along with a map showing the conference location, a schedule of the talks, and other useful stuff.

Here are the speakers:

John Baez (UC Riverside)

Jeremy Butterfield (Cambridge)

Anne-Christine Davis (Cambridge)

Sebastian De Haro (Amsterdam and Cambridge)

Ruth Gregory (Durham)

Yvette Kosmann-Schwarzbach (Paris)

Peter Olver (UMN)

Sabrina Pasterski (Harvard)

Oliver Pooley (Oxford)

Tudor Ratiu (Shanghai Jiao Tong and Geneva)

Kasia Rejzner (York)

Robert Spekkens (Perimeter)

I’m looking forward to analyzing the basic assumptions behind various generalizations of Noether’s first theorem, the one that shows symmetries of a Lagrangian give conserved quantities. Having generalized it to Markov processes, I know there’s a lot more to what’s going on here than just the wonders of Lagrangian mechanics:

• John Baez and Brendan Fong, A Noether theorem for Markov processes, J. Math. Phys. 54 (2013), 013301. (Blog article here.)

I’ve been trying to get to the bottom of it ever since.
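To fix notation, here is the textbook form of that first theorem (a standard statement recalled only for orientation, not taken from the conference material): if the Lagrangian L(q,\dot q,t) is invariant under an infinitesimal transformation q_i \mapsto q_i + \epsilon\,\delta q_i, then

\[ Q \;=\; \sum_i \frac{\partial L}{\partial \dot q_i}\,\delta q_i \qquad \text{obeys} \qquad \frac{\mathrm{d}Q}{\mathrm{d}t} \;=\; 0 \]

along solutions of the Euler–Lagrange equations. (If L is only quasi-invariant, changing by a total derivative \epsilon\,\mathrm{d}F/\mathrm{d}t, the conserved quantity becomes Q - F.)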

by John Baez at August 11, 2018 01:00 AM

The n-Category Cafe

The Philosophy and Physics of Noether's Theorems

Nicholas Teh tells me that there is to be a conference held in London, UK, on October 5-6, 2018, celebrating the centenary of Emmy Noether’s work in mathematical physics.

2018 brings with it the centenary of a major milestone in mathematical physics: the publication of Amalie (“Emmy”) Noether’s theorems relating symmetry and physical quantities, which continue to be a font of inspiration for “symmetry arguments” in physics, and for the interpretation of symmetry within philosophy.

In order to celebrate Noether’s legacy, the University of Notre Dame and the LSE Centre for Philosophy of Natural and Social Sciences are co-organizing a conference that will bring together leading mathematicians, physicists, and philosophers of physics in order to discuss the enduring impact of Noether’s work.

Speakers include our very own John Baez.

We have the entry nLab: Noether’s theorem. Since this (the first theorem) concerns group symmetries and conserved quantities, and since we are at the n-Category Café, naturally we’re interested in higher Noetherian constructions, involving actions by higher groups. For an example of this you can turn to Urs Schreiber’s Higher prequantum geometry and its talk of ‘higher Noether currents’ as an L_\infty-algebra extension (p. 21).

Here are all the conference speakers:

by david (d.corfield@kent.ac.uk) at August 11, 2018 12:51 AM

August 09, 2018

ZapperZ - Physics and Physicists

Is Online Education Just As Good And Effective?
Rhett Allain is tackling a topic that I've been dealing with for a while. It isn't about learning things online, but rather whether an online education and degree is just as good and effective as a brick-and-mortar education. Here, he approached this from the point of view that an "education" involves more than just the subject matter. It involves human and social interaction, and learning about things that are not related to your area. He used the analogy of chocolate chips and chocolate chip cookies:

The cookie is the on-campus experience. College is not just about the chocolate chips. It's about all of that stuff that holds the chips together. College is more than a collection of classes. It's the experience of living away from home. It's the cookie dough of relationships with other humans and even faculty. College can be about clubs and other student groups. It's about studying with your peers. College is the whole cookie.
.
.
.
But wait! While we are talking about learning stuff, I have one more point to make. Don't think that you should acquire all of the skills and knowledge you need for your whole career during your time at school. You will always be learning new things, and there will always be new stuff to learn (no one learned about smartphones in the '80s). In fact, a college degree is not about job training. It's not. Really, it's not about that.

Then what is the whole chocolate chip cookie about? It's about exploring who you are and learning things that might not directly relate to a particular field. College is about taking classes that might not have anything to do with work. Art history is a great class—even if you aren't going to work in a museum. Algebra should be taken by all students—even though you probably won't need it (most humans get by just fine without a solid math background). So really, the whole cookie is about becoming more mature as a human. It's about leveling up in the human race—and that is something that is difficult to do online (but surely not impossible).

I have no issue with these points. However, we can even go right for the jugular with this one instead of invoking some esoteric plea for a well-rounded education and social skills. There is compelling evidence that online-only lessons are not as effective and efficient as in-person, in-class lessons, if the latter are done properly.

I will use the example of the effectiveness of peer-instruction method as introduced by Harvard's Eric Mazur. Here, he showed how active learning, instead of passive learning, can be significantly more effective for the students. In such cases, student-to-student interactions are a vital part of learning, with the instructor serving as a "guidance counselor".

This is not the only example where active learning is more favorable than passive learning. There have been other studies that have shown significant improvement in students' understanding and grasp of the material when they are actively engaged in the learning process. Active learning is something that hasn't been done and maybe can't easily be done with online lessons, and certainly not from simply watching or reading the material online.

So forget about honing your social skills or learning about art history. Even the subject matter that you wish to understand may be more difficult to comprehend when you do this by yourself in an online course. There is enough evidence to support this, and it is why you shouldn't be surprised if you struggle to understand the material that you are trying to learn by yourself.

Zz.

by ZapperZ (noreply@blogger.com) at August 09, 2018 01:35 PM

August 08, 2018

ZapperZ - Physics and Physicists

Loop Quantum Gravity
This is one of those still-unverified theories that try to reconcile quantum mechanics with General Relativity. I'm not in this field, so I have no expertise in it. But I know that many people who have read about it are aware of String theory and its competitor, Loop Quantum Gravity.

In this video, Fermilab's Don Lincoln tries to explain LQG to the masses.



Keep in mind that this idea is still lacking in experimental support. The gamma ray burst observation that he mentioned in the video has been highlighted here quite a while back.

Without experimental verification, both String theory and LQG continue to have issues with their credibility as a science.

Zz.

by ZapperZ (noreply@blogger.com) at August 08, 2018 10:34 PM

Clifford V. Johnson - Asymptotia

Science Friday Book Club Q&A

Between 3 and 4 pm Eastern time today (very shortly, as I type!) I’ll be answering questions about Hawking’s “A Brief History of Time” as part of a Live twitter event for Science Friday’s Book Club. See below. Come join in! Hey SciFri Book Clubbers! Do you have had any … Click to continue reading this post

The post Science Friday Book Club Q&A appeared first on Asymptotia.

by Clifford at August 08, 2018 06:52 PM

August 07, 2018

ZapperZ - Physics and Physicists

Ban Cellphone Use In Classrooms?
First of all, let me state my policy on the use of electronic devices (mobile phones, tablets, laptop computers, etc.) in my classrooms. I do not have an outright ban (other than during exams and quizzes) during class, but they can't be used in an intrusive manner that disrupts the running of the class. So no making phone calls, etc. So far, I haven't had any issues that would make me change that policy. Many of my colleagues do have an outright ban on the use of these devices during class.

Now, a few weeks ago, I came across this paper. They studied students who used these devices for non-class related purposes during class. They found that the distraction of these devices, in the end, affects the average class grade that the student receives at the end of the course (the courses studied were psychology courses). The distracted students, on average, scored half a grade lower than those in classes that banned the use of these devices for non-class related purposes.

But what is also surprising is that there was collateral damage done to students who were in the same class as these distracted students but did not themselves use these devices during class.

Furthermore, when the use of electronic devices was allowed in class, performance on the unit exams and final exams was poorer for students who did not use electronic devices during the class as well as for the students who did use an electronic device. This is the first-ever finding in an actual classroom of the social effect of classroom distraction on subsequent exam performance. The effect of classroom distraction on  exam performance confirms the laboratory finding of the social effect of distraction (Sana et al.,2013). 
 So this is like second-hand smoking.

The good thing about this is that I can now tell my students that, while I allow the use of these devices in class during lessons, there is evidence that if they choose to use them, their grades may suffer. I may even upload this paper to the Learning Management System. However, because of the collateral damage that might be done to other students who do not use these devices during class, I am seriously rethinking my policy, and am considering imposing an outright ban on the non-class related use of these devices during my lessons.

If you teach, what is your experience with this?

Zz.

by ZapperZ (noreply@blogger.com) at August 07, 2018 02:38 PM

August 01, 2018

Clifford V. Johnson - Asymptotia

DC Moments…

I'm in Washington DC for a very short time. 16 hours or so. I'd have come for longer, but I've got some parenting to get back to. It feels a bit rude to come to the American Association of Physics Teachers annual meeting for such a short time, especially because the whole mission of teaching physics in all the myriad ways is very dear to my heart, and here is a massive group of people devoted to gathering about it.

It also feels a bit rude because I'm here to pick up an award. (Here's the announcement that I forgot to post some months back.)

I meant what I said in the press release: It certainly is an honour to be recognised with the Klopsteg Memorial Lecture Award (for my work in science outreach/engagemnet), and it'll be a delight to speak to the assembled audience tomorrow and accept the award.

Speaking in an unvarnished way for a moment, I and many others who do a lot of work to engage the public with science have, over the years, had to deal with not being taken seriously by many of our colleagues. Indeed, suffering being dismissed as not being "serious enough" about our other [...] Click to continue reading this post

The post DC Moments… appeared first on Asymptotia.

by Clifford at August 01, 2018 04:54 AM

July 30, 2018

Lubos Motl - string vacua and pheno

An 11-dimensional brain: a bit too exciting jargon
A month ago, lots of media wrote about a truly exciting topic, the eleven-dimensional brain. Some links to the article may be found in
The “Eleven Dimensional” Brain? Topology of Neural Networks
by Neuroskeptic, a blogger at the Discover Magazine. I recommend you that article if you want to demystify the whole thing. It's likely that most of the "regular media" prefer to keep you mystified.

This "higher-dimensional brain" reminds me of some papers that caught my attention in the mid 1990s – papers by (otherwise) string theorist Dimitri Nanopoulos and his collaborators such as Mavromatos. To give you a great example, look at this 1995 hep-ph (!) paper
Theory of Brain Function, Quantum Mechanics and Superstrings
Micropoulos wrote a lot about the NanoTubules – OK, it was the other way around, Nanopoulos wrote about MicroTubules. I was always rather skeptical and that skepticism was sufficient to prevent me from trying to read such papers carefully.




But in the subsequent two decades, I have read a lot of this ambitious, quirky science and my skepticism deepened. These days, I would probably dismiss Nanopoulos' paper right away. In the abstract, Nanopoulos referred to the Penrose-Hameroff "quantum theories of the brain". I think that those claims – partly driven by Penrose's misunderstanding of quantum mechanics and Hameroff's misunderstanding of any physics – were so stupid that the stupidity is enough to reasonably dismiss any paper that just positively mentions Penrose's and Hameroff's ideas.




It was always attractive to imagine some higher-dimensional structures that secretly exist inside the brain. There was something fascinatingly possible – and these speculations gave me goosebumps despite the skepticism. Fortunately, Neuroskeptic has beautifully demystified the newest stuff. Biologists say that the brain is \(N\)-dimensional as soon as you find a group of \((N+1)\) neurons in which every neuron is connected with all other neurons.

It's like the connections between the \((N+1)\) vertices of a simplex in \(N\) dimensions (such as the triangle and tetrahedron for \(N=2,3\), respectively).

OK, you may see that the neuroscientists are rather modest. As soon as they see sufficiently many connections between several neurons, they talk about higher-dimensional space. If you keep on reading, it starts to sound like Radio Yerevan (from the Soviet jokes). OK, instead of truly higher-dimensional structures, you just have many connections between neurons that may be ordered in the usual 3-dimensional space.

On top of that, the maximum dimension they found was not 11, like the spacetime in M-theory, but only 7. And to make the story even less persuasive than what the hype sounds like, this high dimension isn't a feature of a real brain but just a simulation of a brain. And it's not a simulated human brain, it's just a simulated rat brain.

Well, the writers of the simulation may surely decide how much the "cliques" are connected, can't they? When you realize such a thing, it becomes totally puzzling what their claim actually is. The statement that "one may write down a simulation with many connections" surely doesn't sound like an exciting scientific discovery to me. Neuroskeptic says it is very interesting work, anyway, so I may be overlooking something very precious. But I just don't see it and it's not clear to me how this may be a flagship result of a brain center whose funding is $1 billion. The suggestions that they have found a link to M-theory are probably vacuous and they're nothing else than pure hype.

If you tell me something exciting that I misunderstand, it may be appreciated.

by Luboš Motl (noreply@blogger.com) at July 30, 2018 10:26 AM

July 28, 2018

Jon Butterworth - Life and Physics

Doomsday Love affair

Some podcasts about the end of the world. I’m in Episode 3. Not sure of the exact date (of recording, or of the end of the world).

by Jon Butterworth at July 28, 2018 12:09 PM

July 26, 2018

Sean Carroll - Preposterous Universe

Mindscape Podcast

For anyone who hasn’t been following along on other social media, the big news is that I’ve started a podcast, called Mindscape. It’s still young, but early returns are promising!

I won’t be posting each new episode here; the podcast has a “blog” of its own, and episodes and associated show notes will be published there. You can subscribe by RSS as usual, or there is also an email list you can sign up for. For podcast aficionados, Mindscape should be available wherever finer podcasts are served, including iTunes, Google Play, Stitcher, Spotify, and so on.

As explained at the welcome post, the format will be fairly conventional: me talking to smart people about interesting ideas. It won’t be all, or even primarily, about physics; much of my personal motivation is to get the opportunity to talk about all sorts of other interesting things. I’m expecting there will be occasional solo episodes that just have me rambling on about one thing or another.

We’ve already had a bunch of cool guests, check these out:

And there are more exciting episodes on the way. Enjoy, and spread the word!

by Sean Carroll at July 26, 2018 04:15 PM

July 20, 2018

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

Summer days, academics and technological universities

The heatwave in the northern hemisphere may (or may not) be an ominous portent of things to come, but it’s certainly making for an enjoyable summer here in Ireland. I usually find it quite difficult to do any meaningful research when the sun is out, but things are a bit different when the good weather is regular. Most days, I have breakfast in the village, a swim in the sea before work, a swim after work and a game of tennis to round off the evening. Tough life, eh.

 

 

 

Counsellor’s Strand in Dunmore East

So far, I’ve got one conference proceeding written, one historical paper revamped and two articles refereed (I really enjoy the latter process, it’s so easy for academics to become isolated). Next week I hope to get back to that book I never seem to finish.

However, it would be misleading to portray a cosy image of a college full of academics beavering away over the summer. This simply isn’t the case around here – while a few researchers can be found in college this summer, the majority of lecturing staff decamped on June 20th and will not return until September 1st.

And why wouldn’t they? Isn’t that their right under the Institute of Technology contracts, especially given the heavy teaching loads during the semester? Sure – but I think it’s important to acknowledge that this is a very different set-up to the modern university sector, and doesn’t quite square with the move towards technological universities.

This week, the Irish newspapers are full of articles depicting the opening of Ireland’s first technological university, and apparently, the Prime Minister is anxious our own college should get a move on. Hmm. No mention of the prospect of a change in teaching duties, or increased facilities/time for research, as far as I can tell (I’d give a lot for an office that was fit for purpose). So will the new designation just amount to a name change? And this is not to mention the scary business of the merging of different institutes of technology. Those who raise questions about this now tend to get dismissed as resisters of progress. Yet the history of merging large organisations in Ireland hardly inspires confidence, not least because of a tendency for increased layers of bureaucracy to appear out of nowhere – HSE anyone?

by cormac at July 20, 2018 03:32 PM

The n-Category Cafe

Compositionality: the Editorial Board

An editorial board has now been chosen for the journal Compositionality, and they’re waiting for people to submit papers.

We are happy to announce the founding editorial board of Compositionality, featuring established researchers working across logic, computer science, physics, linguistics, coalgebra, and pure category theory (see the full list below). Our steering board considered many strong applications to our initial open call for editors, and it was not easy narrowing down to the final list, but we think that the quality of this editorial board and the general response bodes well for our growing research community.

In the meantime, we hope you will consider submitting something to our first issue. Look out in the coming weeks for the journal’s official open-for-submissions announcement.

The editorial board of Compositionality:

• Corina Cristea, University of Southampton, UK

• Ross Duncan, University of Strathclyde, UK

• Andrée Ehresmann, University of Picardie Jules Verne, France

• Tobias Fritz, Max Planck Institute, Germany

• Neil Ghani, University of Strathclyde, UK

• Dan Ghica, University of Birmingham, UK

• Jeremy Gibbons, University of Oxford, UK

• Nick Gurski, Case Western Reserve University, USA

• Helle Hvid Hansen, Delft University of Technology, Netherlands

• Chris Heunen, University of Edinburgh, UK

• Aleks Kissinger, Radboud University, Netherlands

• Joachim Kock, Universitat Autònoma de Barcelona, Spain

• Martha Lewis, University of Amsterdam, Netherlands

• Samuel Mimram, École Polytechnique, France

• Simona Paoli, University of Leicester, UK

• Dusko Pavlovic, University of Hawaii, USA

• Christian Retoré, Université de Montpellier, France

• Mehrnoosh Sadrzadeh, Queen Mary University, UK

• Peter Selinger, Dalhousie University, Canada

• Pawel Sobocinski, University of Southampton, UK

• David Spivak, MIT, USA

• Jamie Vicary, University of Birmingham, UK

• Simon Willerton, University of Sheffield, UK

Best,
Joshua Tan, Brendan Fong, and Nina Otter
Executive editors, Compositionality

by john (baez@math.ucr.edu) at July 20, 2018 03:07 PM

July 19, 2018

Andrew Jaffe - Leaves on the Line

(Almost) The end of Planck

This week, we released (most of) the final set of papers from the Planck collaboration — the long-awaited Planck 2018 results (which were originally meant to be the “Planck 2016 results”, but everything takes longer than you hope…), available on the ESA website as well as the arXiv. More importantly for many astrophysicists and cosmologists, the final public release of Planck data is also available.

Anyway, we aren’t quite finished: those of you up on your roman numerals will notice that there are only 9 papers but the last one is “XII” — the rest of the papers will come out over the coming months. So it’s not the end, but at least it’s the beginning of the end.

And it’s been a long time coming. I attended my first Planck-related meeting in 2000 or so (and plenty of people had been working on the projects that would become Planck for a half-decade by that point). For the last year or more, the number of people working on Planck has dwindled as grant money has dried up (most of the scientists now analysing the data are doing so without direct funding for the work).

(I won’t rehash the scientific and technical background to the Planck Satellite and the cosmic microwave background (CMB), which I’ve been writing about for most of the lifetime of this blog.)

Planck 2018: the science

So, in the language of the title of the first paper in the series, what is the legacy of Planck? The state of our science is strong. For the first time, we present full results from both the temperature of the CMB and its polarization. Unfortunately, we don’t actually use all the data available to us — on the largest angular scales, Planck’s results remain contaminated by astrophysical foregrounds and unknown “systematic” errors. This is especially true of our measurements of the polarization of the CMB, unfortunately, which is probably Planck’s most significant limitation.

The remaining data are an excellent match for what is becoming the standard model of cosmology: ΛCDM, or “Lambda-Cold Dark Matter”, which is dominated, first, by a component which makes the Universe accelerate in its expansion (Λ, Greek Lambda), usually thought to be Einstein’s cosmological constant; and secondarily by an invisible component that seems to interact only by gravity (CDM, or “cold dark matter”). We have tested for more exotic versions of both of these components, but the simplest model seems to fit the data without needing any such extensions. We also observe the atoms and light which comprise the more prosaic kinds of matter we observe in our day-to-day lives, which make up only a few percent of the Universe.

Altogether, the sum of the densities of these components is just enough to make the curvature of the Universe exactly flat through Einstein’s General Relativity and its famous relationship between the amount of stuff (mass) and the geometry of space-time. Furthermore, we can measure the way the matter in the Universe is distributed as a function of the length scale of the structures involved. All of these are consistent with the predictions of the famous or infamous theory of cosmic inflation, which expanded the Universe when it was much less than one second old by factors of more than 10²⁰. This made the Universe appear flat (think of zooming into a curved surface) and expanded the tiny random fluctuations of quantum mechanics so quickly and so much that they eventually became the galaxies and clusters of galaxies we observe today. (Unfortunately, we still haven’t observed the long-awaited primordial B-mode polarization that would be a somewhat direct signature of inflation, although the combination of data from Planck and BICEP2/Keck gives the strongest constraint to date.)

Most of these results are encoded in a function called the CMB power spectrum, something I’ve shown here on the blog a few times before, but I never tire of the beautiful agreement between theory and experiment, so I’ll do it again: [figure: Planck 2018 power spectra] (The figure is from the Planck “legacy” paper; more details are in others in the 2018 series, especially the Planck “cosmological parameters” paper.) The top panel gives the power spectrum for the Planck temperature data, the second panel the cross-correlation between temperature and the so-called E-mode polarization, the left bottom panel the polarization-only spectrum, and the right bottom the spectrum from the gravitational lensing of CMB photons due to matter along the line of sight. (There are also spectra for the B mode of polarization, but Planck cannot distinguish these from zero.) The points are “one sigma” error bars, and the blue curve gives the best fit model.

As an important aside, these spectra per se are not used to determine the cosmological parameters; rather, we use a Bayesian procedure to calculate the likelihood of the parameters directly from the data. On small scales (corresponding to 𝓁>30 since 𝓁 is related to the inverse of an angular distance), estimates of spectra from individual detectors are used as an approximation to the proper Bayesian formula; on large scales (𝓁<30) we use a more complicated likelihood function, calculated somewhat differently for data from Planck’s High- and Low-frequency instruments, which captures more of the details of the full Bayesian procedure (although, as noted above, we don’t use all possible combinations of polarization and temperature data to avoid contamination by foregrounds and unaccounted-for sources of noise).

Of course, not all cosmological data, from Planck and elsewhere, seem to agree completely with the theory. Perhaps most famously, local measurements of how fast the Universe is expanding today — the Hubble constant — give a value of H0 = (73.52 ± 1.62) km/s/Mpc (the units give how much faster something is moving away from us, in km/s, as it gets further away, measured in megaparsecs, Mpc); whereas Planck (which infers the value within a constrained model) gives (67.27 ± 0.60) km/s/Mpc. This is a pretty significant discrepancy and, unfortunately, it seems difficult to find an interesting cosmological effect that could be responsible for these differences. Rather, we are forced to expect that it is due to one or more of the experiments having some unaccounted-for source of error.
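For a rough sense of how significant, here is a back-of-the-envelope Python calculation, assuming the two measurements are independent with Gaussian errors (a simplification of the real comparison, added only for illustration):

# quick estimate of the Hubble "tension" from the two values quoted above
local, sig_local = 73.52, 1.62       # km/s/Mpc, local distance-ladder value
planck, sig_planck = 67.27, 0.60     # km/s/Mpc, Planck's model-dependent value

diff = local - planck                              # 6.25 km/s/Mpc
sigma = (sig_local**2 + sig_planck**2) ** 0.5      # combined uncertainty
print(f"{diff:.2f} km/s/Mpc apart, roughly {diff / sigma:.1f} sigma")   # ~3.6 sigma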

The term of art for these discrepancies is “tension” and indeed there are a few other “tensions” between Planck and other datasets, as well as within the Planck data itself: weak gravitational lensing measurements of the distortion of light rays due to the clustering of matter in the relatively nearby Universe show evidence for slightly weaker clustering than that inferred from Planck data. There are tensions even within Planck, when we measure the same quantities by different means (including things related to similar gravitational lensing effects). But, just as “half of all three-sigma results are wrong”, we expect that we’ve mis- or under-estimated (or to quote the no-longer-in-the-running-for-the-worst president ever, “misunderestimated”) our errors much or all of the time and should really learn to expect this sort of thing. Some may turn out to be real, but many will be statistical flukes or systematic experimental errors.

(If you were looking for a briefer but more technical fly-through of the Planck results — from someone not on the Planck team — check out Renee Hlozek’s tweetstorm.)

Planck 2018: lessons learned

So, Planck has more or less lived up to its advanced billing as providing definitive measurements of the cosmological parameters, while still leaving enough “tensions” and other open questions to keep us cosmologists working for decades to come (we are already planning the next generation of ground-based telescopes and satellites for measuring the CMB).

But did we do things in the best possible way? Almost certainly not. My colleague (and former grad student!) Joe Zuntz has pointed out that we don’t use any explicit “blinding” in our statistical analysis. The point is to avoid our own biases when doing an analysis: you don’t want to stop looking for sources of error when you agree with the model you thought would be true. This works really well when you can enumerate all of your sources of error and then simulate them. In practice, most collaborations (such as the Polarbear team with whom I also work) choose to un-blind some results exactly to be able to find such sources of error, and indeed this is the motivation behind the scores of “null tests” that we run on different combinations of Planck data. We discuss this a little in an appendix of the “legacy” paper — null tests are important, but we have often found that a fully blind procedure isn’t powerful enough to find all sources of error, and in many cases (including some motivated by external scientists looking at Planck data) it was exactly low-level discrepancies within the processed results that have led us to new systematic effects. A more fully-blind procedure would be preferable, of course, but I hope this is a case of the great being the enemy of the good (or good enough). I suspect that those next-generation CMB experiments will incorporate blinding from the beginning.

Further, although we have released a lot of software and data to the community, it would be very difficult to reproduce all of our results. Nowadays, experiments are moving toward a fully open-source model, where all the software is publicly available (in Planck, not all of our analysis software was available to other members of the collaboration, much less to the community at large). This does impose an extra burden on the scientists, but it is probably worth the effort, and again, needs to be built into the collaboration’s policies from the start.

That’s the science and methodology. But Planck is also important as having been one of the first of what is now pretty standard in astrophysics: a collaboration of many hundreds of scientists (and many hundreds more engineers, administrators, and others without whom Planck would not have been possible). In the end, we persisted, and persevered, and did some great science. But I learned that scientists need to learn to be better at communicating, both from the top of the organisation down, and from the “bottom” (I hesitate to use that word, since that is where much of the real work is done) up, especially when those lines of hoped-for communication are usually between different labs or Universities, very often between different countries. Physicists, I have learned, can be pretty bad at managing — and at being managed. This isn’t a great combination, and I say this as a middle-manager in the Planck organisation, very much guilty on both fronts.

by Andrew at July 19, 2018 06:51 PM

Andrew Jaffe - Leaves on the Line

Loncon 3

Briefly (but not brief enough for a single tweet): I’ll be speaking at Loncon 3, the 72nd World Science Fiction Convention, this weekend (doesn’t that website have a 90s retro feel?).

At 1:30 on Saturday afternoon, I’ll be part of a panel trying to answer the question “What Is Science?” As Justice Potter Stewart once said in a somewhat more NSFW context, the best answer is probably “I know it when I see it” but we’ll see if we can do a little better than that tomorrow. My fellow panelists seem to be writers, curators, philosophers and theologians (one of whom purports to believe that the “the laws of thermodynamics prove the existence of God” — a claim about which I admit some skepticism…) so we’ll see what a proper physicist can add to the discussion.

At 8pm in the evening, for participants without anything better to do on a Saturday night, I’ll be alone on stage discussing “The Random Universe”, giving an overview of how we can somehow learn about the Universe despite incomplete information and inherently random physical processes.

There is plenty of other good stuff throughout the convention, which runs from 14 to 18 August. Imperial Astrophysics will be part of “The Great Cosmic Show”, with scientists talking about some of the exciting astrophysical research going on here in London. And Imperial’s own Dave Clements is running the whole (not fictional) science programme for the convention. If you’re around, come and say hi to any or all of us.

by Andrew at July 19, 2018 12:02 PM

The n-Category Cafe

The Duties of a Mathematician

What are the ethical responsibilities of a mathematician? I can think of many, some of which I even try to fulfill, but this document raises one that I have mixed feelings about:

Namely:

The ethical responsibility of mathematicians includes a certain duty, never precisely stated in any formal way, but of course felt by and known to serious researchers: to dedicate an appropriate amount of time to study each new groundbreaking theory or proof in one’s general area. Truly groundbreaking theories are rare, and this duty is not too cumbersome. This duty is especially applicable to researchers who are in the most active research period of their mathematical life and have already senior academic positions. In real life this informal duty can be taken to mean that a reasonable number of mathematicians in each major mathematical country studies such groundbreaking theories.

My first reaction to this claimed duty was quite personal: namely, that I couldn’t possibly meet it. My research is too thinly spread over too many fields to “study each new groundbreaking theory or proof” in my general area. While Fesenko says that “truly groundbreaking theories are rare, and this duty is not too cumbersome”, I feel the opposite. I’d really love to learn more about the Langlands program, and the amplituhedron, and Connes’ work on the Riemann Hypothesis, and Lurie’s work on $(\infty,1)$-topoi, and homotopy type theory, and Monstrous Moonshine, and new developments in machine learning, and … many other things. But there’s not enough time!

More importantly, while it’s undeniably good to know what’s going on, that doesn’t make it a “duty”. I believe mathematicians should be free to study what they’re interested in.

But perhaps Fesenko has a specific kind of mathematician in mind, without mentioning it: not the larks who fly free, but the solid, established “gatekeepers” and “empire-builders”. These are the people who master a specific field, gain academic power, and strongly influence the field’s development, often by making pronouncements about what’s important and what’s not.

For such people to ignore promising developments in their self-proclaimed realm of expertise can indeed be damaging. Perhaps these people have a duty to spend a certain amount of time studying each new ground-breaking theory in their ambit. But I’m fundamentally suspicious of these people in the first place! So, I’m not eager to figure out their duties.

What do you think about “the duties of a mathematician”?

Of course I would be remiss not to mention the obvious, namely that Fesenko is complaining about the reception of Mochizuki’s work on inter-universal Teichmüller theory. If you read his whole article, that will be completely clear. But this is a controversial subject, and “hard cases make bad law”—so while it makes a fascinating read, I’d rather talk about the duties of a mathematician more generally. If you want to discuss what Fesenko has to say about inter-universal Teichmüller theory, Peter Woit’s blog might be a better place, since he’s jumped right into the middle of that conversation:

As for me, my joy is to learn new mathematics, figure things out, explain things, and talk to people about math. My duties include helping students who are having trouble, trying to make mathematics open-access, and coaxing mathematicians to turn their skills toward saving the planet. The difference is that joy makes me do things spontaneously, while duty taps me on the shoulder and says “don’t forget….”

by john (baez@math.ucr.edu) at July 19, 2018 03:04 AM

July 18, 2018

Clifford V. Johnson - Asymptotia

Muskovites Vs Anti-Muskovites…

Saw this split over Elon Musk coming over a year ago. This is a panel from my graphic short story “Resolution” that appears in the 2018 SF anthology Twelve Tomorrows, edited by Wade Roush (There’s even an e-version now if you want fast access!) -cvj

The post Muskovites Vs Anti-Muskovites… appeared first on Asymptotia.

by Clifford at July 18, 2018 02:34 AM

July 17, 2018

John Baez - Azimuth

Compositionality: the Editorial Board

The editors of this journal have an announcement:

We are happy to announce the founding editorial board of Compositionality, featuring established researchers working across logic, computer science, physics, linguistics, coalgebra, and pure category theory (see the full list below). Our steering board considered many strong applications to our initial open call for editors, and it was not easy narrowing down to the final list, but we think that the quality of this editorial board and the general response bodes well for our growing research community.

In the meantime, we hope you will consider submitting something to our first issue. Look out in the coming weeks for the journal’s official open-for-submissions announcement.

The editorial board of Compositionality:

• Corina Cîrstea, University of Southampton, UK
• Ross Duncan, University of Strathclyde, UK
• Andrée Ehresmann, University of Picardie Jules Verne, France
• Tobias Fritz, Max Planck Institute, Germany
• Neil Ghani, University of Strathclyde, UK
• Dan Ghica, University of Birmingham, UK
• Jeremy Gibbons, University of Oxford, UK
• Nick Gurski, Case Western Reserve University, USA
• Helle Hvid Hansen, Delft University of Technology, Netherlands
• Chris Heunen, University of Edinburgh, UK
• Aleks Kissinger, Radboud University, Netherlands
• Joachim Kock, Universitat Autònoma de Barcelona, Spain
• Martha Lewis, University of Amsterdam, Netherlands
• Samuel Mimram, École Polytechnique, France
• Simona Paoli, University of Leicester, UK
• Dusko Pavlovic, University of Hawaii, USA
• Christian Retoré, Université de Montpellier, France
• Mehrnoosh Sadrzadeh, Queen Mary University, UK
• Peter Selinger, Dalhousie University, Canada
• Pawel Sobocinski, University of Southampton, UK
• David Spivak, MIT, USA
• Jamie Vicary, University of Birmingham, UK
• Simon Willerton, University of Sheffield, UK

Best,
Josh, Brendan, and Nina
Executive editors, Compositionality

by John Baez at July 17, 2018 04:45 PM

July 16, 2018

Tommaso Dorigo - Scientificblogging

A Beautiful New Spectroscopy Measurement
What is spectroscopy?
(A) the observation of ghosts by infrared visors or other optical devices
(B) the study of excited states of matter through observation of energy emissions

If you answered (A), you are probably using a lousy internet search engine; and btw, you are rather dumb. Ghosts do not exist. 

Otherwise you are welcome to read on. We are, in fact, about to discuss a cutting-edge spectroscopy measurement, performed by the CMS experiment using lots of proton-proton collisions by the CERN Large Hadron Collider (LHC). 

read more

by Tommaso Dorigo at July 16, 2018 09:13 AM

July 13, 2018

Clifford V. Johnson - Asymptotia

Radio Radio Summer Reading!

Friday will see me busy in the Radio world! Two things: (1) On the WNPR Connecticut morning show “Where We Live” they’ll be doing Summer reading recommendations. I’ll be on there live talking about my graphic non-fiction book The Dialogues: Conversations about the Nature of the Universe. Tune in either … Click to continue reading this post

The post Radio Radio Summer Reading! appeared first on Asymptotia.

by Clifford at July 13, 2018 05:23 AM

July 12, 2018

Clifford V. Johnson - Asymptotia

Splashes

In case you’re wondering, after yesterday’s post… Yes I did find some time to do a bit of sketching. Here’s one that did not get finished but was fun for working the rust off… The caption from instagram says: Quick Sunday watercolour pencil dabbling … been a long time. This … Click to continue reading this post

The post Splashes appeared first on Asymptotia.

by Clifford at July 12, 2018 08:32 PM

Matt Strassler - Of Particular Significance

“Seeing” Double: Neutrinos and Photons Observed from the Same Cosmic Source

There has long been a question as to what types of events and processes are responsible for the highest-energy neutrinos coming from space and observed by scientists.  Another question, probably related, is what creates the majority of high-energy cosmic rays — the particles, mostly protons, that are constantly raining down upon the Earth.

As scientists’ ability to detect high-energy neutrinos (particles that are hugely abundant, electrically neutral, very light-weight, and very difficult to observe) and high-energy photons (particles of light, though not necessarily of visible light) has become more powerful and precise, there’s been considerable hope of getting an answer to these questions.  One of the things we’ve been awaiting (and been disappointed a couple of times) is a violent explosion out in the universe that produces both high-energy photons and neutrinos at the same time, at a high enough rate that both types of particles can be observed at the same time coming from the same direction.

In recent years, there has been some indirect evidence that blazars — narrow jets of particles, pointed in our general direction like the barrel of a gun, and created as material swirls near and almost into giant black holes in the centers of very distant galaxies — may be responsible for the high-energy neutrinos.  Strong direct evidence in favor of this hypothesis has just been presented today.   Last year, one of these blazars flared brightly, and the flare created both high-energy neutrinos and high-energy photons that were observed within the same period, coming from the same place in the sky.

I have written about the IceCube neutrino observatory before; it’s a cubic kilometer of ice under the South Pole, instrumented with light detectors, and it’s ideal for observing neutrinos whose motion-energy far exceeds that of the protons in the Large Hadron Collider, where the Higgs particle was discovered.  These neutrinos mostly pass through IceCube undetected, but one in 100,000 hits something, and debris from the collision produces visible light that IceCube’s detectors can record.   IceCube has already made important discoveries, detecting a new class of high-energy neutrinos.

On Sept 22 of last year, one of these very high-energy neutrinos was observed at IceCube. More precisely, a muon created underground by the collision of this neutrino with an atomic nucleus was observed in IceCube.  To create the observed muon, the neutrino must have had a motion-energy tens of thousands of times larger than the motion-energy of each proton at the Large Hadron Collider (LHC).  And the direction of the neutrino’s motion is known too; it’s essentially the same as that of the observed muon.  So IceCube’s scientists knew where, on the sky, this neutrino had come from.

(This doesn’t work for typical cosmic rays; protons, for instance, travel in curved paths because they are deflected by cosmic magnetic fields, so even if you measure their travel direction at their arrival to Earth, you don’t then know where they came from. Neutrinos, being electrically neutral, aren’t affected by magnetic fields and travel in a straight line, just as photons do.)

Very close to that direction is a well-known blazar (TXS-0506), four billion light years away (a good fraction of the distance across the visible universe).

The IceCube scientists immediately reported their neutrino observation to scientists with high-energy photon detectors.  (I’ve also written about some of the detectors used to study the very high-energy photons that we find in the sky: in particular, the Fermi/LAT satellite played a role in this latest discovery.) Fermi/LAT, which continuously monitors the sky, was already detecting high-energy photons coming from the same direction.   Within a few days the Fermi scientists had confirmed that TXS-0506 was indeed flaring at the time — already starting in April 2017 in fact, six times as bright as normal.  With this news from IceCube and Fermi/LAT, many other telescopes (including the MAGIC cosmic ray detector telescopes among others) then followed suit and studied the blazar, learning more about the properties of its flare.

Now, just a single neutrino on its own isn’t entirely convincing; is it possible that this was all just a coincidence?  So the IceCube folks went back to their older data to snoop around.  There they discovered, in their 2014-2015 data, a dramatic flare in neutrinos — more than a dozen neutrinos, seen over 150 days, had come from the same direction in the sky where TXS-0506 is sitting.  (More precisely, nearly 20 from this direction were seen, in a time period where normally there’d just be 6 or 7 by random chance.)  This confirms that this blazar is indeed a source of neutrinos.  And from the energies of the neutrinos in this flare, yet more can be learned about this blazar, and how it makes  high-energy photons and neutrinos at the same time.  Interestingly, so far at least, there’s no strong evidence for this 2014 flare in photons, except perhaps an increase in the number of the highest-energy photons… but not in the total brightness of the source.
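
As a back-of-the-envelope illustration with made-up round numbers — this is not IceCube’s actual likelihood analysis — the chance of such an excess arising from a steady background would be tiny:

# Illustrative Poisson calculation, NOT IceCube's analysis: the probability of
# seeing 20 or more events when about 6.5 are expected by chance.
from math import exp, factorial

expected = 6.5   # assumed background expectation over the ~150-day window
observed = 20    # assumed count from the direction of the blazar

p_below = sum(exp(-expected) * expected**k / factorial(k) for k in range(observed))
print(f"P(N >= {observed} | mean {expected}) = {1 - p_below:.1e}")
# -> about 2e-5, i.e. a several-sigma excess (before any corrections for
#    searching many directions and time windows)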

The full picture, still emerging, tends to support the idea that the blazar arises from a supermassive black hole, acting as a natural particle accelerator, making a narrow spray of particles, including protons, at extremely high energy.  These protons, millions of times more energetic than those at the Large Hadron Collider, then collide with more ordinary particles that are just wandering around, such as visible-light photons from starlight or infrared photons from the ambient heat of the universe.  The collisions produce particles called pions, made from quarks and anti-quarks and gluons (just as protons are), which in turn decay either to photons or to (among other things) neutrinos.  And it’s those resulting photons and neutrinos which have now been jointly observed.

Since cosmic rays, the mysterious high energy particles from outer space that are constantly raining down on our planet, are mostly protons, this is evidence that many, perhaps most, of the highest energy cosmic rays are created in the natural particle accelerators associated with blazars. Many scientists have suspected that the most extreme cosmic rays are associated with the most active black holes at the centers of galaxies, and now we have evidence and more details in favor of this idea.  It now appears likely that this question will be answerable over time, as more blazar flares are observed and studied.

The announcement of this important discovery was made at the National Science Foundation by Francis Halzen, the IceCube principal investigator, Olga Botner, former IceCube spokesperson, Regina Caputo, the Fermi-LAT analysis coordinator, and Razmik Mirzoyan, MAGIC spokesperson.

The fact that both photons and neutrinos have been observed from the same source is an example of what people are now calling “multi-messenger astronomy”; a previous example was the observation in gravitational waves, and in photons of many different energies, of two merging neutron stars.  Of course, something like this already happened in 1987, when a supernova was seen by eye, and also observed in neutrinos.  But in this case, the neutrinos and photons have energies millions and billions of times larger!

 

by Matt Strassler at July 12, 2018 04:59 PM

July 09, 2018

The n-Category Cafe

Beyond Classical Bayesian Networks

guest post by Pablo Andres-Martinez and Sophie Raynor

In the final installment of the Applied Category Theory seminar, we discussed the 2014 paper “Theory-independent limits on correlations from generalized Bayesian networks” by Henson, Lal and Pusey.

In this post, we’ll give a short introduction to Bayesian networks, explain why quantum mechanics means that one may want to generalise them, and present the main results of the paper. That’s a lot to cover, and there won’t be a huge amount of category theory, but we hope to give the reader some intuition about the issues involved, and another example of monoidal categories used in causal theory.

Introduction

Bayesian networks are a graphical modelling tool used to show how random variables interact. A Bayesian network consists of a pair $(G,P)$ of a directed acyclic graph (DAG) $G$ together with a joint probability distribution $P$ on its nodes, satisfying the Markov condition. Intuitively the graph describes a flow of information.

The Markov condition says that the system doesn’t have memory. That is, the distribution on a given node $Y$ is only dependent on the distributions on the nodes $X$ for which there is an edge $X \to Y$. Consider the following chain of binary events. In spring, the pollen in the air may cause someone to have an allergic reaction that may make them sneeze.

[Figure: the chain DAG Spring → Allergic reaction → Sneezing]

In this case the Markov condition says that given that you know that someone is having an allergic reaction, whether or not it is spring is not going to influence your belief about the likelihood of them sneezing. Which seems sensible.

Bayesian networks are useful

  • as an inference tool, thanks to belief propagation algorithms,

  • and because, given a Bayesian network $(G,P)$, we can describe d-separation properties on $G$ which enable us to discover conditional independences in $P$.

It is this second point that we’ll be interested in here.

Before getting into the details of the paper, let’s try to motivate this discussion by explaining its title: “Theory-independent limits on correlations from generalized Bayesian networks" and giving a little more background to the problem it aims to solve.

Crudely put, the paper aims to generalise a method that assumes classical mechanics to one that holds in quantum and more general theories.

Classical mechanics rests on two intuitively reasonable and desirable assumptions, together called local causality,

  • Causality:

    Causality is usually treated as a physical primitive. Simply put it is the principle that there is a (partial) ordering of events in space time. In order to have information flow from event $A$ to event $B$, $A$ must be in the past of $B$.

    Physicists often define causality in terms of a discarding principle: If we ignore the outcome of a physical process, it doesn’t matter what process has occurred. Or, put another way, the outcome of a physical process doesn’t change the initial conditions.

  • Locality:

    Locality is the assumption that, at any given instant, the values of any particle’s properties are independent of any other particle. Intuitively, it says that particles are individual entities that can be understood in isolation of any other particle.

    Physicists usually picture particles as having a private list of numbers determining their properties. The principle of locality would be violated if any of the entries of such a list were a function whose domain is another particle’s property values.

In 1935 Einstein, Podolsky and Rosen showed that quantum mechanics (which was a recently born theory) predicted that a pair of particles could be prepared so that applying an action on one of them would instantaneously affect the other, no matter how distant in space they were, thus contradicting local causality. This seemed so unreasonable that the authors presented it as evidence that quantum mechanics was wrong.

But Einstein was wrong. In 1964, John S. Bell laid the foundations for an experimental test that would demonstrate that Einstein’s “spooky action at a distance” (Einstein’s own words), now known as entanglement, was indeed real. Bell’s experiment has been replicated countless times and has plenty of variations. This video gives a detailed explanation of one of these experiments, for a non-physicist audience.

But then, if acting on a particle has an instantaneous effect on a distant point in space, one of the two principles above is violated: On one hand, if we acted on both particles at the same time, each action being a distinct event, both would be affecting each other’s result, so it would not be possible to decide on an ordering; causality would be broken. The other option would be to reject locality: a property’s value may be given by a function, so the resulting value may instantaneously change when the distant ‘domain’ particle is altered. In that case, the particles’ information was never separated in space, as they were never truly isolated, so causality is preserved.

Since causality is integral to our understanding of the world and forms the basis of scientific reasoning, the standard interpretation of quantum mechanics is to accept non-locality.

The definition of Bayesian networks implies a discarding principle and hence there is a formal sense in which they are causal (even if, as we shall see, the correlations they model do not always reflect the temporal order). Under this interpretation, the causal theory Bayesian networks describe is classical. Precisely, they can only model probability distributions that satisfy local causality. Hence, in particular, they are not sufficient to model all physical correlations.

The goal of the paper is to develop a framework that generalises Bayesian networks and d-separation results, so that we can still use graph properties to reason about conditional dependence under any given causal theory, be it classical, quantum, or even more general. In particular, this theory will be able to handle all physically observed correlations, and all theoretically postulated correlations.

Though category theory is not mentioned explicitly, the authors achieve their goal by using the categorical framework of operational probabilistic theories (OPTs).

Bayesian networks and d-separation

Consider the situation in which we have three Boolean random variables. Alice is either sneezing or she is not, she either has a fever or she does not, and she may or may not have flu.

Now, flu can cause both sneezing and fever, that is

$$P(\text{sneezing} \mid \text{flu}) \neq P(\text{sneezing}) \quad \text{and likewise} \quad P(\text{fever} \mid \text{flu}) \neq P(\text{fever})$$

so we could represent this graphically as

[Figure: the fork DAG with Flu pointing to both Sneezing and Fever]

Moreover, intuitively we wouldn’t expect there to be any other edges in the above graph. Sneezing and fever, though correlated - each is more likely if Alice has flu - are not direct causes of each other. That is,

$$P(\text{sneezing} \mid \text{fever}) \neq P(\text{sneezing}) \quad \text{but} \quad P(\text{sneezing} \mid \text{fever}, \text{flu}) = P(\text{sneezing} \mid \text{flu}).$$

Bayesian networks

Let $G$ be a directed acyclic graph, or DAG. (Here a directed graph is a presheaf on $(\bullet \rightrightarrows \bullet)$.)

The set $Pa(Y)$ of parents of a node $Y$ of $G$ contains those nodes $X$ of $G$ such that there is a directed edge $X \to Y$.

So, in the example above $Pa(flu) = \emptyset$ while $Pa(fever) = Pa(sneezing) = \{flu\}$.

To each node $X$ of a directed graph $G$, we may associate a random variable, also denoted $X$. If $V$ is the set of nodes of $G$ and $(x_X)_{X \in V}$ is a choice of value $x_X$ for each node $X$, such that $y$ is the chosen value for $Y$, then $pa(y)$ will denote the $Pa(Y)$-tuple of values $(x_X)_{X \in Pa(Y)}$.

To define Bayesian networks, and establish the notation, let’s revise some probability basics.

Let $P(x, y \mid z)$ mean $P(X = x \text{ and } Y = y \mid Z = z)$, the probability that $X$ has the value $x$ and $Y$ has the value $y$, given that $Z$ has the value $z$. Recall that this is given by

$$P(x, y \mid z) = \frac{P(x, y, z)}{P(z)}.$$

The chain rule says that, given a value $x$ of $X$ and sets of values $\Omega, \Lambda$ of other random variables,

$$P(x, \Omega \mid \Lambda) = P(x \mid \Lambda)\, P(\Omega \mid x, \Lambda).$$

Random variables $X$ and $Y$ are said to be conditionally independent given $Z$, written $X \perp\!\!\!\perp Y \mid Z$, if for all values $x$ of $X$, $y$ of $Y$ and $z$ of $Z$

$$P(x, y \mid z) = P(x \mid z)\, P(y \mid z).$$

By the chain rule this is equivalent to

$$P(x \mid y, z) = P(x \mid z), \quad \forall x, y, z.$$

More generally, we may replace $X$, $Y$ and $Z$ with sets of random variables. So, in the special case that $Z$ is empty, $X$ and $Y$ are independent if and only if $P(x, y) = P(x)P(y)$ for all $x, y$.

Markov condition

A joint probability distribution $P$ on the nodes of a DAG $G$ is said to satisfy the Markov condition if for any set of random variables $\{X_i\}_{i=1}^n$ on the nodes of $G$, with choice of values $\{x_i\}_{i=1}^n$,

$$P(x_1, \dots, x_n) = \prod_{i=1}^n P(x_i \mid pa(x_i)).$$

So, for the flu, fever and sneezing example above, a distribution $P$ satisfies the Markov condition if

$$P(\text{flu}, \text{fever}, \text{sneezing}) = P(\text{fever} \mid \text{flu})\, P(\text{sneezing} \mid \text{flu})\, P(\text{flu}).$$

A Bayesian network is defined as a pair $(G,P)$ of a DAG $G$ and a joint probability distribution $P$ on the nodes of $G$ that satisfies the Markov condition with respect to $G$. This means that each node in a Bayesian network is conditionally independent, given its parents, of any of its non-descendants.

In particular, given a Bayesian network $(G,P)$ such that there is a directed edge $X \to Y$, the Markov condition implies that

$$\sum_{y} P(x, y) = \sum_y P(x)\, P(y \mid x) = P(x) \sum_y P(y \mid x) = P(x)$$

which may be interpreted as a discard condition. (The ordering is reflected by the fact that we can’t derive $P(y)$ from $\sum_{x} P(x, y) = \sum_x P(x)\, P(y \mid x)$.)

Let’s consider some simple examples.

Fork

In the example of flu, sneezing and fever above, the graph has a fork shape. For a probability distribution $P$ to satisfy the Markov condition for this graph we must have

$$P(x, y, z) = P(x \mid z)\, P(y \mid z)\, P(z), \quad \forall x, y, z.$$

However, in general $P(x, y) \neq P(x)\, P(y)$.

In other words, $X \perp\!\!\!\perp Y \mid Z$, though $X$ and $Y$ are not independent. This makes sense: we wouldn’t expect sneezing and fever to be uncorrelated, but given that we know whether or not Alice has flu, telling us that she has fever isn’t going to tell us anything about her sneezing.
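
To make this concrete, here is a minimal sketch (with invented numbers, and helper names that are mine) that builds the joint distribution from the Markov factorisation and checks both claims numerically:

# Fork sketch: flu -> {fever, sneezing}, with invented conditional probabilities.
# We build P(flu, fever, sneezing) = P(flu) P(fever|flu) P(sneezing|flu) and then
# verify that sneezing and fever are independent given flu, but correlated marginally.
from itertools import product

P_flu = {True: 0.1, False: 0.9}
P_fever_given_flu  = {True: {True: 0.7, False: 0.3}, False: {True: 0.05, False: 0.95}}
P_sneeze_given_flu = {True: {True: 0.8, False: 0.2}, False: {True: 0.2, False: 0.8}}

joint = {(f, fe, sn): P_flu[f] * P_fever_given_flu[f][fe] * P_sneeze_given_flu[f][sn]
         for f, fe, sn in product([True, False], repeat=3)}

def marginal(**fixed):
    names = ('flu', 'fever', 'sneeze')
    return sum(p for key, p in joint.items()
               if all(key[names.index(k)] == v for k, v in fixed.items()))

# Conditional independence given flu: P(sneeze | fever, flu) = P(sneeze | flu) = 0.8
print(marginal(sneeze=True, fever=True, flu=True) / marginal(fever=True, flu=True),
      marginal(sneeze=True, flu=True) / marginal(flu=True))

# Marginal correlation: P(sneeze | fever) ~ 0.57, while P(sneeze) = 0.26
print(marginal(sneeze=True, fever=True) / marginal(fever=True), marginal(sneeze=True))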

Collider

Reversing the arrows in the fork graph above gives a collider as in the following example.

[Figure: the collider DAG with Spring and Other allergies both pointing to Allergic reaction]

Clearly whether or not Alice has allergies other than hayfever is independent of what season it is. So we’d expect a distribution on this graph to satisfy $X \perp\!\!\!\perp Y \mid \emptyset$. However, if we know that Alice is having an allergic reaction, and it happens to be spring, we will likely assume that she has some allergy, i.e. $X$ and $Y$ are not conditionally independent given $Z$.

Indeed, the Markov condition and chain rule for this graph give us $X \perp\!\!\!\perp Y \mid \emptyset$:

$$P(x, y, z) = P(x)\, P(y)\, P(z \mid x, y) = P(z \mid x, y)\, P(x \mid y)\, P(y), \quad \forall x, y, z,$$

from which we cannot derive $P(x \mid z)\, P(y \mid z) = P(x, y \mid z)$. (However, it could still be true for some particular choice of probability distribution.)
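
Again as a small sketch (invented numbers, names mine), one can check the collider behaviour numerically: the two causes are independent marginally, but conditioning on the common effect makes them dependent:

# Collider sketch: spring -> reaction <- other_allergy, with invented numbers.
# Marginally, spring and other_allergy are independent; given that a reaction
# occurred, they become dependent (knowing one changes our belief in the other).
from itertools import product

P_spring  = {True: 0.25, False: 0.75}
P_allergy = {True: 0.1,  False: 0.9}
P_react   = {(True, True): 0.95, (True, False): 0.6,
             (False, True): 0.9, (False, False): 0.01}   # P(reaction | spring, allergy)

joint = {(s, a, r): P_spring[s] * P_allergy[a] * (P_react[(s, a)] if r else 1 - P_react[(s, a)])
         for s, a, r in product([True, False], repeat=3)}

def prob(pred):
    return sum(p for key, p in joint.items() if pred(*key))

# Marginal independence: P(allergy | spring) = P(allergy) = 0.1
print(prob(lambda s, a, r: a and s) / prob(lambda s, a, r: s), prob(lambda s, a, r: a))

# Dependence given the reaction: P(allergy | spring, reaction) ~ 0.15,
# while P(allergy | reaction) ~ 0.39 -- knowing the season changes our belief.
print(prob(lambda s, a, r: a and s and r) / prob(lambda s, a, r: s and r),
      prob(lambda s, a, r: a and r) / prob(lambda s, a, r: r))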

Chain

Finally, let us return to the chain of correlations presented in the introduction.

Clearly the probabilities that it is spring and that Alice is sneezing are not independent, and indeed, we cannot derive $P(x, y) = P(x)\, P(y)$. However observe that, by the chain rule, a Markov distribution on the chain graph must satisfy $X \perp\!\!\!\perp Y \mid Z$. If we know Alice is having an allergic reaction that is not hayfever, whether or not she is sneezing is not going to affect our guess as to what season it is.

Crucially, in this case, knowing the season is also not going to affect whether we think Alice is sneezing. By definition, conditional independence of $X$ and $Y$ given $Z$ is symmetric in $X$ and $Y$. In other words, a joint distribution $P$ on the variables $X, Y, Z$ satisfies the Markov condition with respect to the chain graph

$$X \longrightarrow Z \longrightarrow Y$$

if and only if $P$ satisfies the Markov condition on

$$Y \longrightarrow Z \longrightarrow X.$$
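
Explicitly, both factorisations reduce to the same expression, using only $P(x)\,P(z \mid x) = P(x, z) = P(z)\,P(x \mid z)$ and the analogous identity for $y$ and $z$:

$$P(x)\,P(z \mid x)\,P(y \mid z) \;=\; P(z)\,P(x \mid z)\,P(y \mid z) \;=\; P(y)\,P(z \mid y)\,P(x \mid z).$$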

d-separation

The above observations can be generalised to statements about conditional independences in any Bayesian network. That is, if $(G,P)$ is a Bayesian network then the structure of $G$ is enough to derive all the conditional independences in $P$ that are implied by the graph $G$ (in reality there may be more that have not been included in the network!).

Given a DAG $G$ and a set of vertices $U$ of $G$, let $m(U)$ denote the union of $U$ with all the vertices $v$ of $G$ such that there is a directed edge from $U$ to $v$. The set $W(U)$ will denote the non-inclusive future of $U$, that is, the set of vertices $v$ of $G$ for which there is no directed (possibly trivial) path from $v$ to $U$.

For a graph $G$, let $X, Y, Z$ now denote disjoint subsets of the vertices of $G$ (and their corresponding random variables). Set $W := W(X \cup Y \cup Z)$.

Then $X$ and $Y$ are said to be d-separated by $Z$, written $X \perp Y \mid Z$, if there is a partition $\{U, V, W, Z\}$ of the nodes of $G$ such that

  • $X \subseteq U$ and $Y \subseteq V$, and

  • $m(U) \cap m(V) \subseteq W$, in other words $U$ and $V$ have no direct influence on each other.

(This is lemma 19 in the paper.)
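
To make this testable on small examples, here is a self-contained sketch of a d-separation check (the encoding and function names are mine). Rather than searching over partitions, it uses the ancestral-moralisation characterisation, a standard equivalent formulation of d-separation:

# d-separation via the ancestral moral graph: X and Y are d-separated by Z iff
# Z blocks every path between them in the moralised ancestral subgraph.
# The graph is encoded as a dict  node -> set of parents;  X, Y, Z are disjoint sets.

def ancestors(parents, nodes):
    seen, stack = set(nodes), list(nodes)
    while stack:
        for p in parents.get(stack.pop(), set()):
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def d_separated(parents, X, Y, Z):
    keep = ancestors(parents, X | Y | Z)              # ancestral subgraph
    undirected = {v: set() for v in keep}             # moralise: marry parents, drop arrows
    for v in keep:
        pa = parents.get(v, set()) & keep
        for p in pa:
            undirected[v].add(p); undirected[p].add(v)
            for q in pa:
                if p != q:
                    undirected[p].add(q)
    frontier, reached = list(X), set(X)               # search for a path avoiding Z
    while frontier:
        for w in undirected[frontier.pop()] - reached:
            if w in Y:
                return False
            if w not in Z:
                reached.add(w)
                frontier.append(w)
    return True

# The combined flu / allergy graph discussed just below:
parents = {'Allergic reaction': {'Spring'},
           'Sneezing': {'Allergic reaction', 'Flu'},
           'Fever': {'Flu'}}
print(d_separated(parents, {'Allergic reaction'}, {'Fever'}, set()))                 # True
print(d_separated(parents, {'Allergic reaction'}, {'Fever'}, {'Sneezing'}))          # False
print(d_separated(parents, {'Allergic reaction'}, {'Fever'}, {'Sneezing', 'Flu'}))   # True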

Now d-separation is really useful since it tells us everything there is to know about the conditional dependences on Bayesian networks with underlying graph $G$. Indeed,

Theorem 5

  • Soundness of d-separation (Verma and Pearl, 1988): If $P$ is a Markov distribution with respect to a graph $G$ then for all disjoint subsets $X, Y, Z$ of nodes of $G$, $X \perp Y \mid Z$ implies that $X \perp\!\!\!\perp Y \mid Z$.

  • Completeness of d-separation (Meek, 1995): If $X \perp\!\!\!\perp Y \mid Z$ for all $P$ Markov with respect to $G$, then $X \perp Y \mid Z$.

We can combine the previous examples of fork, collider and chain graphs to get the following

[Figure: the combined DAG Spring → Allergic reaction → Sneezing ← Flu → Fever]

A priori, Allergic reaction is conditionally independent of Fever. Indeed, we have the partition

[Figure: a partition of the nodes witnessing the d-separation of Allergic reaction and Fever]

which clearly satisfies d-separation. However, if Sneezing is known then $W = \emptyset$, so Allergic reaction and Fever are not independent. Indeed, if we use the same sets $U$ and $V$ as before, then $m(U) \cap m(V) = \{\text{Sneezing}\}$, so the condition for d-separation fails; and it does for any possible choice of $U$ and $V$. Interestingly, if Flu is also known, we again obtain conditional independence between Allergic reaction and Fever, as shown below.

[Figure: a partition witnessing the d-separation of Allergic reaction and Fever given Sneezing and Flu]

Before describing the limitations of this setup and why we may want to generalise it, it is worth observing that Theorem 5 is genuinely useful computationally. Theorem 5 says that given a Bayesian network $(G,P)$, the structure of $G$ gives us a recipe to factor $P$, thereby greatly increasing computation efficiency for Bayesian inference.
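
A crude way to see the gain (my own toy count, not from the paper): the full joint over $n$ binary variables has $2^n$ entries, while the factorised form only needs one conditional table of size $2^{1 + |Pa(X)|}$ per node.

# Toy comparison of table sizes for a chain X1 -> X2 -> ... -> Xn of binary variables.
n = 30
full_joint = 2 ** n                 # entries in the full joint distribution
factored   = 2 + (n - 1) * 4        # P(X1) plus one 2x2 table P(X_{i+1} | X_i) per edge
print(full_joint, factored)         # 1073741824 vs 118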

Latent variables, hidden variables, and unobservables

In the context of Bayesian networks, there are two reasons that we may wish to add variables to a probabilistic model, even if we are not entirely sure what the variables signify or how they are distributed. The first reason is statistical and the second is physical.

Consider the example of flu, fever and sneezing discussed earlier. Although our analysis told us $\text{Fever} \perp\!\!\!\perp \text{Sneezing} \mid \text{Flu}$, if we conduct an experiment we are likely to find:

$$P(\text{fever} \mid \text{sneezing}, \text{flu}) \neq P(\text{fever} \mid \text{flu}).$$

The problem is caused by the graph not modelling reality properly, but only a simplification of it. After all, there are a whole bunch of things that can cause sneezing and fever. We just don’t know what they all are or how to measure them. So, to make the network work, we may add a hypothetical latent variable that bunches together all the unknown joint causes, and equip it with a distribution that makes the whole network Bayesian, so that we are still able to perform inference methods like belief propagation.

[Figure: the fork DAG with an additional latent common cause of Sneezing and Fever]

On the other hand, we may want to add variables to a Bayesian network if we have evidence that doing so will provide a better model of reality.

For example, consider the network with just two connected nodes

[Figure: a DAG with two connected nodes, Road wet and Grass wet]

Every distribution on this graph is Markov, and we would expect there to be a correlation between a road being wet and the grass next to it being wet as well, but most people would claim that there’s something missing from the picture. After all, rain could be a ‘common cause’ of the road and the grass being wet. So, it makes sense to add a third variable.

But maybe we can’t observe whether it has rained or not, only whether the grass and/or road are wet. Nonetheless, the correlation we observe suggests that they have a common cause. To deal with such cases, we could make the third variable hidden. We may not know what information is included in a hidden variable, nor its probability distribution.

All that matters is that the hidden variable helps to explain the observed correlations.

[Figure: Road wet and Grass wet with a hidden common cause, Rain]

So, latent variables are a statistical tool that ensure the Markov condition holds. Hence they are inherently classical, and can, in theory, be known. But the universe is not classical, so, even if we lump whatever we want into as many classical hidden variables as we want and put them wherever we need, in some cases, there will still be empirically observed correlations that do not satisfy the Markov condition.

Most famously, Bell’s experiment shows that it is possible to have distinct variables $A$ and $B$ that exhibit correlations that cannot be explained by any classical hidden variable, since classical variables are restricted by the principle of locality.

In other words, though $A \perp B \mid \Lambda$,

$$P(a \mid b, \lambda) \neq P(a \mid \lambda).$$

Implicitly, this means that a classical $\Lambda$ is not enough. If we want $P(a \mid b, \lambda) \neq P(a \mid \lambda)$ to hold, $\Lambda$ must be a non-local (non-classical) variable. Quantum mechanics implies that we can’t possibly empirically find the value of a non-local variable (for reasons similar to Heisenberg’s uncertainty principle), so non-classical variables are often called unobservables. In particular, it is irrelevant to question whether $A \perp\!\!\!\perp B \mid \Lambda$, as we would need to know the value of $\Lambda$ in order to condition over it.

Indeed, this is the key idea behind what follows. We declare certain variables to be unobservable and then insist that conditional (in)dependence only makes sense between observable variables conditioned over observable variables.

Generalising classical causality

The correlations observed in the Bell experiment can be explained by quantum mechanics. But thought experiments such as the one described here suggest that theoretically, correlations may exist that violate even quantum causality.

So, given that graphical models and d-separation provide such a powerful tool for causal reasoning in the classical context, how can we generalise the Markov condition and Theorem 5 to quantum, and even more general causal theories? And, if we have a theory-independent Markov condition, are there d-separation results that don’t correspond to any given causal theory?

Clearly the first step in answering these questions is to fix a definition of a causal theory.

Operational probabilistic theories

An operational theory is a symmetric monoidal category $(\mathsf{C}, \otimes, I)$ whose objects are known as systems or resources. Morphisms are finite sets $f = \{\mathcal{C}_i\}_{i \in I}$ called tests, whose elements are called outcomes. Tests with a single element are called deterministic, and for each system $A \in ob(\mathsf{C})$, the identity $id_A \in \mathsf{C}(A, A)$ is a deterministic test.

In this discussion, we’ll identify tests $\{\mathcal{C}_i\}_i, \{\mathcal{D}_j\}_j$ in $\mathsf{C}$ if we may always replace one with the other without affecting the distributions in $\mathsf{C}(I, I)$.

Given $\{\mathcal{C}_i\}_i \in \mathsf{C}(B, C)$ and $\{\mathcal{D}_j\}_j \in \mathsf{C}(A, B)$, their composition $f \circ g$ is given by

$$\{\mathcal{C}_i \circ \mathcal{D}_j\}_{i,j} \in \mathsf{C}(A, C).$$

First apply $\mathcal{D}$ with output $B$, then apply $\mathcal{C}$ with outcome $C$.

The monoidal composition $\{\mathcal{C}_i \otimes \mathcal{D}_j\}_{i,j} \in \mathsf{C}(A \otimes C, B \otimes D)$ corresponds to applying $\{\mathcal{C}_i\}_i \in \mathsf{C}(A, B)$ and $\{\mathcal{D}_j\}_j$ separately on $A$ and $C$.

An operational probabilistic theory or OPT is an operational theory such that every test $I \to I$ is a probability distribution.

A morphism $\{\mathcal{C}_i\}_i \in \mathsf{C}(A, I)$ is called an effect on $A$. An OPT $\mathsf{C}$ is called causal or a causal theory if, for each system $A \in ob(\mathsf{C})$, there is a unique deterministic effect $\top_A \in \mathsf{C}(A, I)$ which we call the discard of $A$.

In particular, for a causal OPT $\mathsf{C}$, uniqueness of the discard implies that, for all systems $A, B \in ob(\mathsf{C})$,

$$\top_A \otimes \top_B = \top_{A \otimes B},$$

and, given any deterministic test $\mathcal{C} \in \mathsf{C}(A, B)$,

$$\top_B \circ \mathcal{C} = \top_A.$$

The existence of a discard map allows a definition of causal morphisms in a causal theory. For example, as we saw in January when we discussed Kissinger and Uijlen’s paper, a test $\{\mathcal{C}_i\}_i \in \mathsf{C}(A, B)$ is causal if

<semantics> B{𝒞 i} i= AC(A,I).<annotation encoding="application/x-tex">\top_B \circ \{ \mathcal {C}_i \}_i = \top_A \in \mathsf {C}( A, I).</annotation></semantics>

In other words, for a causal test, discarding the outcome is the same as not performing the test. Intuitively it is not obvious why such morphisms should be called causal. But this definition enables the formulation of a non-signalling condition describing when cause-effect correlations are excluded; in particular, it implies the impossibility of time travel.

Examples

The category $\mathrm{Mat}(\mathbb{R}_+)$, whose objects are natural numbers and whose hom-set $\mathrm{Mat}(\mathbb{R}_+)(m, n)$ is the set of $n \times m$ matrices with entries in $\mathbb{R}_+$, has the structure of a causal OPT. The causal morphisms in $\mathrm{Mat}(\mathbb{R}_+)$ are the stochastic maps (the matrices whose columns sum to 1). This category describes classical probability theory.
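A quick sketch of how the causality condition looks here (again my own illustration): the unique deterministic effect on an $n$-dimensional classical system is the all-ones row, and $\top_B \circ \mathcal{C} = \top_A$ is exactly the statement that the columns of $\mathcal{C}$ sum to 1.

```python
# Sketch: in Mat(R_+) the discard on an n-dimensional system is the all-ones row,
# and a deterministic map C is causal exactly when it is column-stochastic.
import numpy as np

def discard(n):
    return np.ones((1, n))

def is_causal(C):
    """Check top_B . C == top_A, i.e. the columns of C sum to 1."""
    n_B, n_A = C.shape
    return np.allclose(discard(n_B) @ C, discard(n_A))

print(is_causal(np.array([[0.9, 0.3], [0.1, 0.7]])))   # True: column-stochastic
print(is_causal(np.array([[0.9, 0.3], [0.2, 0.7]])))   # False: first column sums to 1.1
```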

The category $\mathsf{CPM}$ of sets of linear operators on Hilbert spaces and completely positive maps between them is an OPT and describes quantum theory. The causal morphisms are the trace-preserving completely positive maps.

Finally, Boxworld is the theory that allows one to describe any correlation between two variables as being caused by some resource of the theory in the past.

Generalised Bayesian networks

So, we’re finally ready to give the main construction and results of the paper. As mentioned before, to get a generalised d-separation result, the idea is that we will distinguish observable and unobservable variables, and simply insist that conditional independence is only defined relative to observable variables.

To this end, a generalised DAG or GDAG is a DAG <semantics>G<annotation encoding="application/x-tex">G</annotation></semantics> together with a partition on the nodes of <semantics>G<annotation encoding="application/x-tex">G</annotation></semantics> into two subsets called observed and unobserved. We’ll represent observed nodes by triangles, and unobserved nodes by circles. An edge out of an (un)observed node will be called (un)observed and represented by a (solid) dashed arrow.

In order to get a generalisation of Theorem 5, we still need to come up with a sensible generalisation of the Markov property which will essentially say that at an observed node that has only observed parents, the distribution must be Markov. However, if an observed node has an unobserved parent, the latter’s whole history is needed to describe the distribution.

To state this precisely, we will associate a causal theory $(\mathsf{C}, \otimes, I)$ to a GDAG $G$ via an assignment of systems to edges of $G$ and tests to nodes of $G$, such that the observed edges of $G$ will 'carry' only the outcomes of classical tests (so will say something about conditional probability), whereas unobserved edges will carry only the output system.

Precisely, such an assignment $P$ satisfies the generalised Markov condition (GMC) and is called a generalised Markov distribution if

  • Each unobserved edge corresponds to a distinct system in the theory.

  • If we can’t observe what is happening at a node, we can’t condition over it: To each unobserved node and each value of its observed parents, we assign a deterministic test from the system defined by the product of its incoming (unobserved) edges to the system defined by the product of its outgoing (unobserved) edges.

  • Each observed node $X$ is an observation test, i.e. a morphism in $\mathsf{C}(A, I)$ for the system $A \in \mathrm{ob}(\mathsf{C})$ corresponding to the product of the systems assigned to the unobserved input edges of $X$. Since $\mathsf{C}$ is a causal theory, this says that $X$ is assigned a classical random variable, also denoted $X$, and that if $Y$ is an observed node with observed parent $X$, the distribution at $Y$ is conditionally dependent on the distribution at $X$ (see here for details).

  • It therefore follows that each observed edge is assigned the trivial system $I$.

  • The joint probability distribution on the observed nodes of $G$ is given by the morphism in $\mathsf{C}(I, I)$ that results from these assignments.

A generalised Bayesian network consists of a GDAG $G$ together with a generalised Markov distribution $P$ on $G$.

Example

Consider the following GDAG

[diagram of the GDAG]

Let's build its OPT morphism as indicated by the generalised Markov condition.

The observed node $X$ has no incoming edges, so it corresponds to a $\mathsf{C}(I, I)$ morphism, and thus we assign a probability distribution to it.

The unobserved node $A$ depends on $X$, and has no unobserved inputs, so we assign a deterministic test $A(x): I \to A$ for each value $x$ of $X$.

[diagram: the OPT assignments so far]

The observed node $Y$ has one incoming unobserved edge and no incoming observed edges, so we assign to it a test $Y: A \to I$ such that, for each value $x$ of $X$, $Y \circ A(x)$ is a probability distribution.
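As a toy numerical check of this fragment of the construction (all numbers and dimensions are hypothetical, chosen only to show how the pieces compose in $\mathrm{Mat}(\mathbb{R}_+)$):

```python
# Toy sketch in Mat(R_+) for the fragment X -> A -> Y of the example GDAG.
import numpy as np

p_X = np.array([0.4, 0.6])            # the C(I, I) morphism at X: a distribution over x

# For each value x of X, a deterministic preparation A(x): I -> A (a normalised state of A).
A = {0: np.array([0.8, 0.2]),
     1: np.array([0.1, 0.9])}

# The observation test at Y: one effect per outcome y; coarse-graining gives the discard.
Y = np.array([[0.7, 0.3],             # effect for y = 0
              [0.3, 0.7]])            # effect for y = 1
assert np.allclose(Y.sum(axis=0), 1.0)

# The resulting C(I, I) morphism: the joint distribution P(x, y).
P = np.array([[p_X[x] * (Y[y] @ A[x]) for y in range(2)] for x in range(2)])
print(P, P.sum())                     # nonnegative entries summing to 1
```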

Building up the rest of the picture gives an OPT diagram of the form

[the full OPT diagram]

which is a $\mathsf{C}(I, I)$ morphism that defines the joint probability distribution $P(x, y, z, w)$. We now have all the ingredients to state Theorem 22, the generalised d-separation theorem. This is the analogue of Theorem 5 for generalised Markov distributions.

Theorem 22

Given a GDAG $G$ and subsets $X, Y, Z$ of observed nodes (with $\perp$ denoting d-separation and $\perp\!\!\!\perp$ conditional independence):

  • if a probability distribution $P$ is generalised Markov relative to $G$, then $X \perp Y \,|\, Z \;\Rightarrow\; X \perp\!\!\!\perp Y \,|\, Z$;

  • if $X \perp\!\!\!\perp Y \,|\, Z$ holds for all generalised Markov probability distributions on $G$, then $X \perp Y \,|\, Z$.

Note in particular that there is no change in the definition of d-separation: d-separation in a GDAG $G$ is simply d-separation with respect to its underlying DAG. There is also no change in the definition of conditional independence. Now, however, we restrict to statements of conditional independence with respect to observed nodes only. This enables the generalised soundness and completeness statements of the theorem.
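Since d-separation is just the usual notion on the underlying DAG, it can be checked in the standard way. Here is a small self-contained sketch (my own, using the classical ancestral-moralization criterion; networkx is assumed available):

```python
# Sketch: d-separation on the underlying DAG via the ancestral-moralization criterion.
# X, Y, Z are disjoint sets of nodes of the DAG G.
import networkx as nx

def d_separated(G, X, Y, Z):
    relevant = set(X) | set(Y) | set(Z)
    # 1. Keep only the ancestral subgraph of X, Y, Z.
    ancestral = set(relevant)
    for node in relevant:
        ancestral |= nx.ancestors(G, node)
    H = G.subgraph(ancestral)
    # 2. Moralize: connect every pair of parents, then forget edge directions.
    M = nx.Graph(H.to_undirected())
    for node in H.nodes:
        parents = list(H.predecessors(node))
        for i in range(len(parents)):
            for j in range(i + 1, len(parents)):
                M.add_edge(parents[i], parents[j])
    # 3. Delete Z; X and Y are d-separated given Z iff no path connects them.
    M.remove_nodes_from(Z)
    return not any(nx.has_path(M, x, y) for x in X for y in Y)

# The collider X -> W <- Y: X and Y are d-separated by the empty set but not by {W}.
G = nx.DiGraph([("X", "W"), ("Y", "W")])
print(d_separated(G, {"X"}, {"Y"}, set()))   # True
print(d_separated(G, {"X"}, {"Y"}, {"W"}))   # False
```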

The proof of soundness uses uniqueness of discarding, and completeness follows since generalised Markov is a stronger condition on a distribution than classically Markov.

Classical distributions on GDAGs

Theorem 22 is all well and good. But does it really generalise the classical case? That is, can we recover Theorem 5 for all classical Bayesian networks from Theorem 22?

As a first step, Proposition 17 states that if all the nodes of a generalised Bayesian network are observed, then it is a classical Bayesian network. In fact, this follows pretty immediately from the definitions.

Moreover, it is easily checked that, given a classical Bayesian network, even if it has hidden or latent variables, it can still be expressed directly as a generalised Bayesian network with no unobserved nodes.

In fact, Theorem 22 generalises Theorem 5 in a stricter sense. That is, the generalised Bayesian network setup together with classical causality adds nothing extra to the theory of classical Bayesian networks. If a generalised Markov distribution is classical (where hidden and latent variables may be represented by unobserved nodes), it can be viewed as a classical Bayesian network. More precisely, Lemma 18 says that, given any generalised Bayesian network $(G, P)$ with underlying DAG $G'$ and distribution $P \in \mathcal{C}$, we can construct a classical Bayesian network $(G', P')$ such that $P'$ agrees with $P$ on the observed nodes.

It is worth voicing a note of caution. The authors themselves mention in the conclusion that the construction based on GDAGs with two types of nodes is not entirely satisfactory. The problem is that, although the setups and results presented here do give a generalisation of Theorem 5, they do not, as such, provide a way of generalising Bayesian networks as they are used for probabilistic inference to non-classical settings. For example, belief propagation works through observed nodes, but there is no apparent way of generalising it for unobserved nodes.

Theory independence

More generally, given a GDAG $G$, we can look at the set of distributions on $G$ that are generalised Markov with respect to a given causal theory. Of particular importance are the following.

  • The set $\mathcal{C}$ of generalised Markov distributions in $\mathrm{Mat}(\mathbb{R}_+)$ on $G$.

  • The set $\mathcal{Q}$ of generalised Markov distributions in $\mathsf{CPM}$ on $G$.

  • The set $\mathcal{G}$ of all generalised Markov distributions on $G$. (This is the set of generalised Markov distributions in Boxworld.)

Moreover, we can distinguish another class of distributions on $G$: rather than imposing the generalised Markov condition, we simply take the distributions that satisfy the observable conditional independences given by the d-separation properties of the graph. Call this set $\mathcal{I}$. Theorem 22 implies, in particular, that $\mathcal{G} \subseteq \mathcal{I}$.

And so, since $\mathrm{Mat}(\mathbb{R}_+)$ embeds into $\mathsf{CPM}$, we have $\mathcal{C} \subseteq \mathcal{Q} \subseteq \mathcal{G} \subseteq \mathcal{I}$.

This means that one can ask for which graphs (some or all of) these inclusions are strict, and the last part of the paper explores these questions. In the original paper, a sufficient condition is given for graphs to satisfy $\mathcal{C} \neq \mathcal{I}$, i.e. for these graphs it is guaranteed that the causal structure admits correlations that are non-local. Moreover, the authors show that their condition is necessary for small enough graphs.

Another interesting result is that there exist graphs for which $\mathcal{G} \neq \mathcal{I}$. This means that using a theory of resources, whatever theory it may be, to explain correlations imposes constraints that are stronger than those imposed by the observable conditional independence relations themselves.

What next?

This setup represents one direction for using category theory to generalise Bayesian networks. In our group work at the ACT workshop, we considered another generalisation of Bayesian networks, this time staying within the classical realm. Namely, building on the work of Bonchi, Gadducci, Kissinger, Sobocinski, and Zanasi, we gave a functorial Markov condition on directed graphs admitting cycles. Hopefully we’ll present this work here soon.

by john (baez@math.ucr.edu) at July 09, 2018 03:58 PM

July 08, 2018

Marco Frasca - The Gauge Connection

ICHEP 2018

The great high-energy physics conference ICHEP 2018 is over and, as usual, I spend some words about it. The big collaborations at CERN presented their latest results. I think the most relevant of these is the evidence (3\sigma) that the Standard Model is at odds with the measurement of spin correlations in top-antitop quark pairs. More is given in the ATLAS communication. As expected, increasing precision proves to be rewarding.

About the Higgs particle, after the important announcement of the observation of the ttH process, both ATLAS and CMS are further improving their precision. For the signal strength they give the following results. For ATLAS (see here)

\mu=1.13\pm 0.05({\rm stat.})\pm 0.05({\rm exp.})^{+0.05}_{-0.04}({\rm sig. th.})\pm 0.03({\rm bkg. th})

and CMS (see here)

\mu=1.17\pm 0.06({\rm stat.})^{+0.06}_{-0.05}({\rm sig. th.})\pm 0.06({\rm other syst.}).

The news is that the errors have shrunk and the two measurements agree. They show a small tension, 13% and 17% respectively, but the overall result is consistent with the Standard Model.
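To put rough numbers on this 'small tension' (a naive estimate of mine, adding the quoted error components in quadrature and symmetrizing the asymmetric ones):

```python
# Naive estimate: distance of the quoted signal strengths from the SM value mu = 1,
# with error components added in quadrature and asymmetric errors crudely symmetrized.
import math

def pull(mu, errors):
    total = math.sqrt(sum(e ** 2 for e in errors))
    return (mu - 1.0) / total, total

atlas = pull(1.13, [0.05, 0.05, 0.045, 0.03])  # stat, exp, sig. th. (symmetrized), bkg. th.
cms = pull(1.17, [0.06, 0.055, 0.06])          # stat, sig. th. (symmetrized), other syst.

print("ATLAS: %.1f sigma away from mu = 1 (total error %.2f)" % atlas)  # ~1.5 sigma
print("CMS:   %.1f sigma away from mu = 1 (total error %.2f)" % cms)    # ~1.7 sigma
```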

When the signal strength is unpacked into the respective contributions due to different processes, CMS claims some tensions in the WW decay that should be kept under scrutiny in the future (see here). They presented the results from 35.9{\rm fb}^{-1} of data and so, for the moment, there is no significant improvement with respect to the Moriond conference earlier this year. The situation is rather better for the ZZ decay, where no tension appears and the agreement with the Standard Model is there in all its glory (see here). Things are quite different, but not too much, for ATLAS, as in this case they observe some tensions, but these are all below 2\sigma (see here). For the WW decay, ATLAS does not see anything above 1\sigma (see here).

So, although there is something to keep an eye on as the data increase (they will reach 100 {\rm fb}^{-1} this year), the Standard Model is in good health with respect to the Higgs sector, even if there is a lot still to be answered and precision measurements are the main tool. The correlation in the tt pair is absolutely promising and we should hope this will be confirmed as a discovery.

 

by mfrasca at July 08, 2018 10:58 AM

July 04, 2018

The n-Category Cafe

Symposium on Compositional Structures

There’s a new conference series, whose acronym is pronounced “psycho”. It’s part of the new trend toward the study of “compositionality” in many branches of thought, often but not always using category theory:

  • First Symposium on Compositional Structures (SYCO1), School of Computer Science, University of Birmingham, 20-21 September, 2018. Organized by Ross Duncan, Chris Heunen, Aleks Kissinger, Samuel Mimram, Simona Paoli, Mehrnoosh Sadrzadeh, Pawel Sobocinski and Jamie Vicary.

The Symposium on Compositional Structures is a new interdisciplinary series of meetings aiming to support the growing community of researchers interested in the phenomenon of compositionality, from both applied and abstract perspectives, and in particular where category theory serves as a unifying common language. We welcome submissions from researchers across computer science, mathematics, physics, philosophy, and beyond, with the aim of fostering friendly discussion, disseminating new ideas, and spreading knowledge between fields. Submission is encouraged for both mature research and work in progress, and by both established academics and junior researchers, including students.

More details below! Our very own David Corfield is one of the invited speakers.


Submission is easy, with no format requirements or page restrictions. The meeting does not have proceedings, so work can be submitted even if it has been submitted or published elsewhere.

While no list of topics could be exhaustive, SYCO welcomes submissions with a compositional focus related to any of the following areas, in particular from the perspective of category theory:

  • logical methods in computer science, including classical and quantum programming, type theory, concurrency, natural language processing and machine learning;
  • graphical calculi, including string diagrams, Petri nets and reaction networks;
  • languages and frameworks, including process algebras, proof nets, type theory and game semantics;
  • abstract algebra and pure category theory, including monoidal category theory, higher category theory, operads, polygraphs, and relationships to homotopy theory;
  • quantum algebra, including quantum computation and representation theory;
  • tools and techniques, including rewriting, formal proofs and proof assistants, and game theory;
  • industrial applications, including case studies and real-world problem descriptions.

This new series aims to bring together the communities behind many previous successful events which have taken place over the last decade, including “Categories, Logic and Physics”, “Categories, Logic and Physics (Scotland)”, “Higher-Dimensional Rewriting and Applications”, “String Diagrams in Computation, Logic and Physics”, “Applied Category Theory”, “Simons Workshop on Compositionality”, and the “Peripatetic Seminar in Sheaves and Logic”.

The steering committee hopes that SYCO will become a regular fixture in the academic calendar, running regularly throughout the year, and becoming over time a recognized venue for presentation and discussion of results in an informal and friendly atmosphere. To help create this community, in the event that more good-quality submissions are received than can be accommodated in the timetable, we may choose to defer some submissions to a future meeting, rather than reject them. This would be done based on submission order, giving an incentive for early submission, and avoiding any need to make difficult choices between strong submissions. Deferred submissions would be accepted for presentation at any future SYCO meeting without the need for peer review. This will allow us to ensure that speakers have enough time to present their ideas, without creating an unnecessarily competitive atmosphere. Meetings would be held sufficiently frequently to avoid a backlog of deferred papers.

Invited Speakers

  • David Corfield, Department of Philosophy, University of Kent: “The ubiquity of modal type theory”.

  • Jules Hedges, Department of Computer Science, University of Oxford: “Compositional game theory”

Important Dates

All times are anywhere-on-earth.

  • Submission deadline: Sunday 5 August 2018
  • Author notification: Monday 13 August 2018
  • Travel support application deadline: Monday 20 August 2018
  • Symposium dates: Thursday 20 September and Friday 21 September 2018

Submissions

Submission is by EasyChair, via the following link:

Submissions should present research results in sufficient detail to allow them to be properly considered by members of the programme committee, who will assess papers with regards to significance, clarity, correctness, and scope. We encourage the submission of work in progress, as well as mature results. There are no proceedings, so work can be submitted even if it has been previously published, or has been submitted for consideration elsewhere. There is no specific formatting requirement, and no page limit, although for long submissions authors should understand that reviewers may not be able to read the entire document in detail.

Funding

Some funding is available to cover travel and subsistence costs, with a priority for PhD students and junior researchers. To apply for this funding, please contact the local organizer Jamie Vicary at j.o.vicary@bham.ac.uk by the deadline given above, with a short statement of your travel costs and funding required.

Programme Committee

The symposium is managed by the following people, who also serve as the programme committee.

  • Ross Duncan, University of Strathclyde
  • Chris Heunen, University of Edinburgh
  • Aleks Kissinger, Radboud University Nijmegen
  • Samuel Mimram, École Polytechnique
  • Simona Paoli, University of Leicester
  • Mehrnoosh Sadrzadeh, Queen Mary, University of London
  • Pawel Sobocinski, University of Southampton
  • Jamie Vicary, University of Birmingham and University of Oxford (local organizer)

by john (baez@math.ucr.edu) at July 04, 2018 05:57 PM

Tommaso Dorigo - Scientificblogging

Chasing The Higgs Self Coupling: New CMS Results
Happy Birthday Higgs boson! The discovery of the last fundamental particle of the Standard Model was announced exactly 6 years ago at CERN (well, plus one day, since I decided to postpone to July 5 the publication of this post...).

In the Standard Model, the theory of fundamental interactions among elementary particles which enshrines our current understanding of the subnuclear world, particles that constitute matter are fermionic: they have a half-integer value of a quantity we call spin; and particles that mediate interactions between those fermions, keeping them together and governing their behaviour, are bosonic: they have an integer value of spin.


by Tommaso Dorigo at July 04, 2018 12:57 PM

June 25, 2018

Sean Carroll - Preposterous Universe

On Civility


White House Press Secretary Sarah Sanders went to have dinner at a local restaurant the other day. The owner, who is adamantly opposed to the policies of the Trump administration, politely asked her to leave, and she did. Now (who says human behavior is hard to predict?) an intense discussion has broken out concerning the role of civility in public discourse and our daily life. The Washington Post editorial board, in particular, called for public officials to be allowed to eat in peace, and people have responded in volume.

I don’t have a tweet-length response to this, as I think the issue is more complex than people want to make it out to be. I am pretty far out to one extreme when it comes to the importance of engaging constructively with people with whom we disagree. We live in a liberal democracy, and we should value the importance of getting along even in the face of fundamentally different values, much less specific political stances. Not everyone is worth talking to, but I prefer to err on the side of trying to listen to and speak with as wide a spectrum of people as I can. Hell, maybe I am even wrong and could learn something.

On the other hand, there is a limit. At some point, people become so odious and morally reprehensible that they are just monsters, not respected opponents. It’s important to keep in our list of available actions the ability to simply oppose those who are irredeemably dangerous/evil/wrong. You don’t have to let Hitler eat in your restaurant.

This raises two issues that are not so easy to adjudicate. First, where do we draw the line? What are the criteria by which we can judge someone to have crossed over from “disagreed with” to “shunned”? I honestly don’t know. I tend to err on the side of not shunning people (in public spaces) until it becomes absolutely necessary, but I’m willing to have my mind changed about this. I also think the worry that this particular administration exhibits authoritarian tendencies that could lead to a catastrophe is not a completely silly one, and is at least worth considering seriously.

More importantly, if the argument is “moral monsters should just be shunned, not reasoned with or dealt with constructively,” we have to be prepared to be shunned ourselves by those who think that we’re moral monsters (and those people are out there).  There are those who think, for what they take to be good moral reasons, that abortion and homosexuality are unforgivable sins. If we think it’s okay for restaurant owners who oppose Trump to refuse service to members of his administration, we have to allow staunch opponents of e.g. abortion rights to refuse service to politicians or judges who protect those rights.

The issue becomes especially tricky when the category of “people who are considered to be morally reprehensible” coincides with an entire class of humans who have long been discriminated against, e.g. gays or transgender people. In my view it is bigoted and wrong to discriminate against those groups, but there exist people who find it a moral imperative to do so. A sensible distinction can probably be made between groups that we as a society have decided are worthy of protection and equal treatment regardless of an individual’s moral code, so it’s at least consistent to allow restaurant owners to refuse to serve specific people they think are moral monsters because of some policy they advocate, while still requiring that they serve members of groups whose behaviors they find objectionable.

The only alternative, as I see it, is to give up on the values of liberal toleration, and to simply declare that our personal moral views are unquestionably the right ones, and everyone should be judged by them. That sounds wrong, although we do in fact enshrine certain moral judgments in our legal codes (murder is bad) while leaving others up to individual conscience (whether you want to eat meat is up to you). But it’s probably best to keep that moral core that we codify into law as minimal and widely-agreed-upon as possible, if we want to live in a diverse society.

This would all be simpler if we didn’t have an administration in power that actively works to demonize immigrants and non-straight-white-Americans more generally. Tolerating the intolerant is one of the hardest tasks in a democracy.

 

 

by Sean Carroll at June 25, 2018 06:00 PM

June 24, 2018

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

7th Robert Boyle Summer School

This weekend saw the 7th Robert Boyle Summer School, an annual 3-day science festival in Lismore, Co. Waterford in Ireland. It’s one of my favourite conferences – a select number of talks on the history and philosophy of science, aimed at curious academics and the public alike, with lots of time for questions and discussion after each presentation.


The Irish-born scientist and aristocrat Robert Boyle   


Lismore Castle in Co. Waterford, the birthplace of Robert Boyle

Born in Lismore into a wealthy landowning family, Robert Boyle became one of the most important figures in the Scientific Revolution. A contemporary of Isaac Newton and Robert Hooke, he is recognized the world over for his scientific discoveries, his role in the rise of the Royal Society and his influence in promoting the new ‘experimental philosophy’ in science.

This year, the theme of the conference was ‘What do we know – and how do we know it?’. There were many interesting talks, such as Boyle’s Theory of Knowledge by Dr William Eaton, Associate Professor of Early Modern Philosophy at Georgia Southern University; The How, Who & What of Scientific Discovery by Paul Strathern, author of a great many books on scientists and philosophers such as the well-known Philosophers in 90 Minutes series; Scientific Enquiry and Brain State: Understanding the Nature of Knowledge by Professor William T. O’Connor, Head of Teaching and Research in Physiology at the University of Limerick Graduate Entry Medical School; and The Promise and Peril of Big Data by Timandra Harkness, well-known media presenter, comedian and writer. For physicists, there was a welcome opportunity to hear the well-known American philosopher of physics Robert P. Crease present the talk Science Denial: will any knowledge do? The full programme for the conference can be found here.

All in all, a hugely enjoyable summer school, culminating in a garden party in the grounds of Lismore castle, Boyle’s ancestral home. My own contribution was to provide the music for the garden party – a flute, violin and cello trio, playing the music of Boyle’s contemporaries, from Johann Sebastian Bach to Turlough O’ Carolan. In my view, the latter was a baroque composer of great importance whose music should be much better known outside Ireland.


Images from the garden party in the grounds of Lismore Castle

by cormac at June 24, 2018 08:19 PM

June 22, 2018

Jester - Resonaances

Both g-2 anomalies
Two months ago an experiment in Berkeley announced a new ultra-precise measurement of the fine structure constant α using interferometry techniques. This wasn't much noticed because the paper is not on arXiv, and moreover this kind of research is filed under metrology, which is easily confused with meteorology. So it's worth commenting on why precision measurements of α could be interesting for particle physics. What the Berkeley group really did was to measure the mass of the cesium-133 atom, achieving a relative accuracy of 4*10^-10, that is 0.4 parts per billion (ppb). With that result in hand, α can be determined after a cavalier rewriting of the high-school formula for the Rydberg constant:
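Explicitly, the relation being rewritten is

\alpha^2 = \frac{2 R_\infty}{c}\,\frac{m_{\rm Cs}}{m_e}\,\frac{h}{m_{\rm Cs}},

with R_\infty the Rydberg constant (hcR_\infty ≈ 13.6 eV): the interferometric measurement supplies the last factor, h/m_Cs.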
Everybody knows the first 3 digits of the Rydberg constant, Ry≈13.6 eV, but actually it is experimentally known with the fantastic accuracy of 0.006 ppb, and the electron-to-atom mass ratio has also been determined precisely. Thus the measurement of the cesium mass can be translated into a 0.2 ppb measurement of the fine structure constant: 1/α=137.035999046(27).

You may think that this kind of result could appeal only to a Pythonesque chartered accountant. But you would be wrong. First of all, the new result excludes  α = 1/137 at 1 million sigma, dealing a mortal blow to the field of epistemological numerology. Perhaps more importantly, the result is relevant for testing the Standard Model. One place where precise knowledge of α is essential is in calculation of the magnetic moment of the electron. Recall that the g-factor is defined as the proportionality constant between the magnetic moment and the angular momentum. For the electron we have
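(keeping only the classical value and Schwinger's famous one-loop term explicit)

g_e = 2\left[1 + \frac{\alpha}{2\pi} + \dots\right].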
Experimentally, g_e is one of the most precisely determined quantities in physics, with the most recent measurement quoting a_e = 0.00115965218073(28), that is 0.0001 ppb accuracy on g_e, or 0.2 ppb accuracy on a_e. In the Standard Model, g_e is calculable as a function of α and other parameters. In the classical approximation g_e = 2, while the one-loop correction proportional to the first power of α was already known in prehistoric times thanks to Schwinger. The dots above summarize decades of subsequent calculations, which now include O(α^5) terms, that is 5-loop QED contributions! Thanks to these heroic efforts (depicted in the film For a Few Diagrams More - a sequel to Kurosawa's Seven Samurai), the main theoretical uncertainty for the Standard Model prediction of g_e is due to the experimental error on the value of α. The Berkeley measurement allows one to reduce the relative theoretical error on a_e down to 0.2 ppb: a_e = 0.00115965218161(23), which matches in magnitude the experimental error and improves by a factor of 3 the previous prediction based on the α measurement with rubidium atoms.

At the spiritual level, the comparison between the theory and experiment provides an impressive validation of quantum field theory techniques up to the 13th significant digit - an unimaginable theoretical accuracy in other branches of science. More practically, it also provides a powerful test of the Standard Model. New particles coupled to the electron may contribute to the same loop diagrams from which g_e is calculated, and could shift the observed value of a_e away from the Standard Model predictions. In many models, corrections to the electron and muon magnetic moments are correlated. The latter famously deviates from the Standard Model prediction by 3.5 to 4 sigma, depending on who counts the uncertainties. Actually, if you bother to eye carefully the experimental and theoretical values of a_e beyond the 10th significant digit you can see that they are also discrepant, this time at the 2.5 sigma level. So now we have two g-2 anomalies! In a picture, the situation can be summarized as follows:

If you're a member of the Holy Church of Five Sigma you can almost preach an unambiguous discovery of physics beyond the Standard Model. However, for most of us this is not the case yet. First, there is still some debate about the theoretical uncertainties entering the muon g-2 prediction. Second, while it is quite easy to fit each of the two anomalies separately, there seems to be no appealing model to fit both of them at the same time. Take for example the very popular toy model with a new massive spin-1 Z' boson (aka the dark photon) kinetically mixed with the ordinary photon. In this case Z' has, much like the ordinary photon, vector-like and universal couplings to electrons and muons. But this leads to a positive contribution to g-2, and it does not fit well the a_e measurement which favors a new negative contribution. In fact, the a_e measurement provides the most stringent constraint in part of the parameter space of the dark photon model. Conversely, a Z' boson with purely axial couplings to matter does not fit the data as it gives a negative contribution to g-2, thus making the muon g-2 anomaly worse. What might work is a hybrid model with a light Z' boson having lepton-flavor violating interactions: a vector coupling to muons and a somewhat smaller axial coupling to electrons. But constructing a consistent and realistic model along these lines is a challenge because of other experimental constraints (e.g. from the lack of observation of μ→eγ decays). Some food for thought can be found in this paper, but I'm not sure if a sensible model exists at the moment. If you know one you are welcome to drop a comment here or a paper on arXiv.

More excitement on this front is in store. The muon g-2 experiment in Fermilab should soon deliver first results which may confirm or disprove the muon anomaly. Further progress with the electron g-2 and fine-structure constant measurements is also expected in the near future. The biggest worry is that, if the accuracy improves by another two orders of magnitude, we will need to calculate six loop QED corrections...

by Mad Hatter (noreply@blogger.com) at June 22, 2018 11:04 PM

June 16, 2018

Tommaso Dorigo - Scientificblogging

On The Residual Brightness Of Eclipsed Jovian Moons
While preparing for another evening of observation of Jupiter's atmosphere with my faithful 16" dobsonian scope, I found out that the satellite Io will disappear behind the Jovian shadow tonight. This is a quite common phenomenon and not a very spectacular one, but still quite interesting to look forward to during a visual observation - the moon takes some time to fully disappear, so it is fun to follow the event.
This however got me thinking. A fully eclipsed Jovian moon should still be able to reflect back some light picked up from the other, still-lit satellites - so it should not, after all, appear completely dark. Can a calculation be made of the effect? Of course - and it's not that difficult.
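Here is a rough back-of-envelope version (my own numbers and simplifications: only Europa illuminating Io, taken at its minimum distance, and order-one geometry factors dropped):

```python
# Rough sketch (my own simplifications): brightness of an eclipsed Io lit only by
# sunlight reflected off Europa, compared with Io in direct sunlight.
import math

R_europa = 1.56e6        # Europa radius [m]
d_io_europa = 2.5e8      # minimum Io-Europa distance [m] (orbital radii 4.22e8 and 6.71e8)
albedo_europa = 0.67     # Europa's albedo (approximate)

# Europa intercepts sunlight over pi R^2 and scatters it back over roughly a hemisphere,
# so the flux reaching Io is diluted by ~ albedo * (R/d)^2 / 2 relative to direct sunlight.
flux_ratio = albedo_europa * (R_europa / d_io_europa) ** 2 / 2.0
delta_mag = -2.5 * math.log10(flux_ratio)

print("flux ratio ~ %.1e" % flux_ratio)                    # ~ 1e-5
print("fainter by ~ %.1f magnitudes" % delta_mag)          # ~ 12 magnitudes
print("eclipsed Io ~ magnitude %.1f" % (5.0 + delta_mag))  # Io is normally ~ mag 5
```

So, under these crude assumptions, the eclipsed moon drops to around 17th magnitude: effectively dark to the eye even in a large dobsonian, though perhaps not to a sensitive camera.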


by Tommaso Dorigo at June 16, 2018 04:47 PM

June 12, 2018

Axel Maas - Looking Inside the Standard Model

How to test an idea
As you may have guessed from reading through the blog, our work is centered around a change of paradigm: That there is a very intriguing structure of the Higgs and the W/Z bosons. And that what we observe in the experiments are actually more complicated than what we usually assume. That they are not just essentially point-like objects.

This is a very bold claim, as it touches upon very basic things in the standard model of particle physics. And the interpretation of experiments. However, it is at the same time a necessary consequence if one takes the underlying more formal theoretical foundation seriously. The reason that there is not a huge clash is that the standard model is very special. Because of this both pictures give almost the same prediction for experiments. This can also be understood quantitatively. That is what I have written a review about. It can be imagined in this way:

Thus, the actual particle, which we observe, and call the Higgs is actually a complicated object made from two Higgs particles. However, one of those is so much eclipsed by the other that it looks like just a single one. And a very tiny correction to it.

So far, this does not seem to be something where it is necessary to worry about.

However, there are many and good reasons to believe that the standard model is not the end of particle physics. There are many, many blogs out there, which explain the reasons for this much better than I do. However, our research provides hints that what works so nicely in the standard model, may work much less so in some extensions of the standard model. That there the composite nature makes huge differences for experiments. This was what came out of our numerical simulations. Of course, these are not perfect. And, after all, unfortunately we did not yet discover anything beyond the standard model in experiments. So we cannot test our ideas against actual experiments, which would be the best thing to do. And without experimental support such an enormous shift in paradigm seems to be a bit far fetched. Even if our numerical simulations, which are far from perfect, support the idea. Formal ideas supported by numerical simulations is just not as convincing as experimental confirmation.

So, is this hopeless? Do we have to wait for new physics to make its appearance?

Well, not yet. In the figure above, there was 'something'. So, the ideas make also a statement that even within the standard model there should be a difference. The only question is, what is really the value of a 'little bit'? So far, experiments did not show any deviations from the usual picture. So 'little bit' needs indeed to be really rather small. But we have a calculation prescription for this 'little bit' for the standard model. So, at the very least what we can do is to make a calculation for this 'little bit' in the standard model. We should then see if the value of 'little bit' may already be so large that the basic idea is ruled out, because we are in conflict with experiment. If this is the case, this would raise a lot of question on the basic theory, but well, experiment rules. And thus, we would need to go back to the drawing board, and get a better understanding of the theory.

Or, we get something which is in agreement with current experiment, because it is smaller than the current experimental precision. But then we can make a statement of how much better experimental precision needs to become to see the difference. Hopefully the answer will not be so much that it will not be possible within the next couple of decades. But this we will see at the end of the calculation. And then we can decide, whether we will get an experimental test.

Doing the calculations is actually not so simple. On the one hand, they are technically challenging, even though our method for it is rather well under control. But it will also not yield perfect results, but hopefully good enough. Also, it depends strongly on the type of experiment how simple the calculations are. We did a first few steps, though for a type of experiment not (yet) available, but hopefully in about twenty years. There we saw that not only the type of experiment, but also the type of measurement matters. For some measurements the effect will be much smaller than for others. But we are not yet able to predict this before doing the calculation. There, we need still much better understanding of the underlying mathematics. That we will hopefully gain by doing more of these calculations. This is a project I am currently pursuing with a number of master students for various measurements and at various levels. Hopefully, in the end we get a clear set of predictions. And then we can ask our colleagues at experiments to please check these predictions. So, stay tuned.

By the way: This is the standard cycle for testing new ideas and theories. Have an idea. Check that it fits with all existing experiments. And yes, this may be very, very many. If your idea passes this test: Great! There is actually a chance that it can be right. If not, you have to understand why it does not fit. If it can be fixed, fix it, and start again. Or have a new idea. And, at any rate, if it cannot be fixed, have a new idea. When you got an idea which works with everything we know, use it to make a prediction where you get a difference to our current theories. By this you provide an experimental test, which can decide whether your idea is the better one. If yes: Great! You just rewritten our understanding of nature. If not: Well, go back to fix it or have a new idea. Of course, it is best if we have already an experiment which does not fit with our current theories. But there we are at this stage a little short off. May change again. If your theory has no predictions which can be testable in any foreseeable future experimentally. Well, that is a good question how to deal with this, and there is not yet a consensus how to proceed.

by Axel Maas (noreply@blogger.com) at June 12, 2018 10:49 AM

June 10, 2018

Tommaso Dorigo - Scientificblogging

Modeling Issues Or New Physics ? Surprises From Top Quark Kinematics Study
Simulation, noun:
1. Imitation or enactment
2. The act or process of pretending; feigning.
3. An assumption or imitation of a particular appearance or form; counterfeit; sham.

Well, high-energy physics is all about simulations. 

We have a theoretical model that predicts the outcome of the very energetic particle collisions we create in the core of our giant detectors, but we only have approximate descriptions of the inputs to the theoretical model, so we need simulations. 


by Tommaso Dorigo at June 10, 2018 11:18 AM

June 09, 2018

Jester - Resonaances

Dark Matter goes sub-GeV
It must have been great to be a particle physicist in the 1990s. Everything was simple and clear then. They knew that, at the most fundamental level, nature was described by one of the five superstring theories which, at low energies, reduced to the Minimal Supersymmetric Standard Model. Dark matter also had a firm place in this narrative, being identified with the lightest neutralino of the MSSM. This simple-minded picture strongly influenced the experimental program of dark matter detection, which was almost entirely focused on the so-called WIMPs in the 1 GeV - 1 TeV mass range. Most of the detectors, including the current leaders XENON and LUX, are blind to sub-GeV dark matter, as slow and light incoming particles are unable to transfer a detectable amount of energy to the target nuclei.
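A quick back-of-envelope sketch (round numbers of my own) shows why:

```python
# Sketch: maximum nuclear recoil energy in elastic dark matter-nucleus scattering,
# E_R^max = 2 mu^2 v^2 / m_N, with mu the reduced mass (natural units, c = 1).
m_Xe = 122.0     # xenon nucleus mass [GeV]
v = 1.0e-3       # typical halo velocity in units of c

def max_recoil_keV(m_dm):
    mu = m_dm * m_Xe / (m_dm + m_Xe)
    return 2.0 * mu ** 2 * v ** 2 / m_Xe * 1.0e6   # GeV -> keV

for m_dm in (100.0, 10.0, 1.0, 0.1):                # dark matter mass [GeV]
    print("m_DM = %5.1f GeV  ->  E_R^max ~ %.4f keV" % (m_dm, max_recoil_keV(m_dm)))

# A 100 GeV WIMP can deposit tens of keV, comfortably above the ~keV nuclear-recoil
# thresholds; a 0.1 GeV particle deposits a fraction of an eV and is simply invisible.
```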

Sometimes progress consists in realizing that you know nothing, Jon Snow. The lack of new physics at the LHC invalidates most of the historical motivations for WIMPs. Theoretically, the mass of the dark matter particle could be anywhere between 10^-30 GeV and 10^19 GeV. There are myriads of models positioned anywhere in that range, and it's hard to argue with a straight face that any particular one is favored. We now know that we don't know what dark matter is, and that we should better search in many places. If anything, the small-scale problems of the 𝞚CDM cosmological model can be interpreted as a hint against the boring WIMPs and in favor of light dark matter. For example, if it turns out that dark matter has significant (nuclear size) self-interactions, that can only be realized with sub-GeV particles.
                       
It takes some time for experiment to catch up with theory, but the process is already well in motion. There is some fascinating progress on the front of ultra-light axion dark matter, which deserves a separate post. Here I want to highlight the ongoing  developments in direct detection of dark matter particles with masses between MeV and GeV. Until recently, the only available constraint in that regime was obtained by recasting data from the XENON10 experiment - the grandfather of the currently operating XENON1T.  In XENON detectors there are two ingredients of the signal generated when a target nucleus is struck:  ionization electrons and scintillation photons. WIMP searches require both to discriminate signal from background. But MeV dark matter interacting with electrons could eject electrons from xenon atoms without producing scintillation. In the standard analysis, such events would be discarded as background. However,  this paper showed that, recycling the available XENON10 data on ionization-only events, one can exclude dark matter in the 100 MeV ballpark with the cross section for scattering on electrons larger than ~0.01 picobarn (10^-38 cm^2). This already has non-trivial consequences for concrete models; for example, a part of the parameter space of milli-charged dark matter is currently best constrained by XENON10.   

It is remarkable that so much useful information can be extracted by basically misusing data collected for another purpose (earlier this year the DarkSide-50 recast their own data in the same manner, excluding another chunk of the parameter space).  Nevertheless, dedicated experiments will soon  be taking over. Recently, two collaborations published first results from their prototype detectors:  one is SENSEI, which uses 0.1 gram of silicon CCDs, and the other is SuperCDMS, which uses 1 gram of silicon semiconductor.  Both are sensitive to eV energy depositions, thanks to which they can extend the search region to lower dark matter mass regions, and set novel limits in the virgin territory between 0.5 and 5 MeV.  A compilation of the existing direct detection limits is shown in the plot. As you can see, above 5 MeV the tiny prototypes cannot yet beat the XENON10 recast. But that will certainly change as soon as full-blown detectors are constructed, after which the XENON10 sensitivity should be improved by several orders of magnitude.
     
Should we be restless waiting for these results? Well, for any single experiment the chance of finding nothing is immensely larger than that of finding something. Nevertheless, the technical progress and the widening scope of searches offer some hope that the dark matter puzzle may be solved soon.

by Mad Hatter (noreply@blogger.com) at June 09, 2018 05:39 PM

June 08, 2018

Jester - Resonaances

Massive Gravity, or You Only Live Twice
Proving Einstein wrong is the ultimate ambition of every crackpot and physicist alike. In particular, Einstein's theory of gravitation -  the general relativity -  has been a victim of constant harassment. That is to say, it is trivial to modify gravity at large energies (short distances), for example by embedding it in string theory, but it is notoriously difficult to change its long distance behavior. At the same time, motivations to keep trying go beyond intellectual gymnastics. For example, the accelerated expansion of the universe may be a manifestation of modified gravity (rather than of a small cosmological constant).   

In Einstein's general relativity, gravitational interactions are mediated by a massless spin-2 particle - the so-called graviton. This is what gives it its hallmark properties: the long range and the universality. One obvious way to screw with Einstein is to add mass to the graviton, as entertained already in 1939 by Fierz and Pauli. The Particle Data Group quotes the constraint m ≤ 6*10^−32 eV, so we are talking about the De Broglie wavelength comparable to the size of the observable universe. Yet even that teeny mass may cause massive troubles. In 1970 the Fierz-Pauli theory was killed by the van Dam-Veltman-Zakharov (vDVZ) discontinuity. The problem stems from the fact that a massive spin-2 particle has 5 polarization states (0,±1,±2) unlike a massless one which has only two (±2). It turns out that the polarization-0 state couples to matter with the similar strength as the usual polarization ±2 modes, even in the limit where the mass goes to zero, and thus mediates an additional force which differs from the usual gravity. One finds that, in massive gravity, light bending would be 25% smaller, in conflict with the very precise observations of stars' deflection around the Sun. vDV concluded that "the graviton has rigorously zero mass". Dead for the first time...           

The second coming was heralded soon after by Vainshtein, who noticed that the troublesome polarization-0 mode can be shut off in the proximity of stars and planets. This can happen in the presence of graviton self-interactions of a certain type. Technically, what happens is that the polarization-0 mode develops a background value around massive sources which, through the derivative self-interactions, renormalizes its kinetic term and effectively diminishes its interaction strength with matter. See here for a nice review and more technical details. Thanks to the Vainshtein mechanism, the usual predictions of general relativity are recovered around large massive source, which is exactly where we can best measure gravitational effects. The possible self-interactions leading a healthy theory without ghosts have been classified, and go under the name of the dRGT massive gravity.

There is however one inevitable consequence of the Vainshtein mechanism. The graviton self-interaction strength grows with energy, and at some point becomes inconsistent with the unitarity limits that every quantum theory should obey. This means that massive gravity is necessarily an effective theory with a limited validity range and has to be replaced by a more fundamental theory at some cutoff scale 𝞚. This is of course nothing new for gravity: the usual Einstein gravity is also an effective theory valid at most up to the Planck scale MPl~10^19 GeV.  But for massive gravity the cutoff depends on the graviton mass and is much smaller for realistic theories. At best,
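𝞚max = (m^2 MPl)^1/3, the so-called 𝞚3 scale, which for m ~ 10^-32 eV comes out around 10^-12 eV.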
So the massive gravity theory in its usual form cannot be used at distance scales shorter than ~300 km. For particle physicists that would be a disaster, but for cosmologists this is fine, as one can still predict the behavior of galaxies, stars, and planets. While the theory certainly cannot be used to describe the results of table top experiments,  it is relevant for the  movement of celestial bodies in the Solar System. Indeed, lunar laser ranging experiments or precision studies of Jupiter's orbit are interesting probes of the graviton mass.

Now comes the latest twist in the story. Some time ago this paper showed that not everything is allowed in effective theories. Assuming the full theory is unitary, causal and local implies non-trivial constraints on the possible interactions in the low-energy effective theory. These techniques are suitable to constrain, via dispersion relations, derivative interactions of the kind required by the Vainshtein mechanism. Applying them to the dRGT gravity one finds that it is inconsistent to assume the theory is valid all the way up to 𝞚max. Instead, it must be replaced by a more fundamental theory already at a much lower cutoff scale, parameterized as 𝞚 = g*^1/3 𝞚max (the parameter g* is interpreted as the coupling strength of the more fundamental theory). The allowed parameter space in the g*-m plane is shown in this plot:

Massive gravity must live in the lower left corner, outside the gray area  excluded theoretically  and where the graviton mass satisfies the experimental upper limit m~10^−32 eV. This implies g* ≼ 10^-10, and thus the validity range of the theory is some 3 order of magnitude lower than 𝞚max. In other words, massive gravity is not a consistent effective theory at distance scales below ~1 million km, and thus cannot be used to describe the motion of falling apples, GPS satellites or even the Moon. In this sense, it's not much of a competition to, say, Newton. Dead for the second time.   

Is this the end of the story? For the third coming we would need a more general theory with additional light particles beyond the massive graviton, which is consistent theoretically in a larger energy range, realizes the Vainshtein mechanism, and is in agreement with the current experimental observations. This is hard but not impossible to imagine. Whatever the outcome, what I like in this story is the role of theory in driving the progress, which is rarely seen these days. In the process, we have understood a lot of interesting physics whose relevance goes well beyond one specific theory. So the trip was certainly worth it, even if we find ourselves back at the departure point.

by Mad Hatter (noreply@blogger.com) at June 08, 2018 08:35 AM

June 07, 2018

Jester - Resonaances

Can MiniBooNE be right?
The experimental situation in neutrino physics is confusing. On one hand, a host of neutrino experiments has established a consistent picture where the neutrino mass eigenstates are mixtures of the 3 Standard Model neutrino flavors νe, νμ, ντ. The measured mass differences between the eigenstates are Δm12^2 ≈ 7.5*10^-5 eV^2 and Δm13^2 ≈ 2.5*10^-3 eV^2, suggesting that all Standard Model neutrinos have masses below 0.1 eV. That is well in line with cosmological observations, which find that the radiation budget of the early universe is consistent with the existence of exactly 3 neutrinos with the sum of the masses less than 0.2 eV. On the other hand, several rogue experiments refuse to conform to the standard 3-flavor picture. The most severe anomaly is the appearance of electron neutrinos in a muon neutrino beam observed by the LSND and MiniBooNE experiments.
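
As a quick sanity check of those numbers, the measured splittings set a lower bound on the heaviest neutrino mass. Here is a tiny illustrative sketch (normal mass ordering assumed; not from the post):

    import math
    # Mass splittings quoted above, in eV^2
    dm12_sq, dm13_sq = 7.5e-5, 2.5e-3
    # In the normal ordering the lightest state can be ~massless, so minimally:
    m1 = 0.0
    m2 = math.sqrt(dm12_sq)           # ~ 0.009 eV
    m3 = math.sqrt(dm13_sq)           # ~ 0.05 eV
    print(f"minimal sum of masses ~ {m1 + m2 + m3:.3f} eV  (well below the 0.2 eV cosmology bound)")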


This story begins in the previous century with the LSND experiment in Los Alamos, which claimed to observe νμ→νe antineutrino oscillations with 3.8σ significance. This result was considered controversial from the very beginning due to limitations of the experimental set-up. Moreover, it was inconsistent with the standard 3-flavor picture which, given the masses and mixing angles measured by other experiments, predicted that νμ→νe oscillations should be unobservable in short-baseline (L ≼ km) experiments. The MiniBooNE experiment in Fermilab was conceived to conclusively prove or disprove the LSND anomaly. To this end, a beam of mostly muon neutrinos or antineutrinos with energies E~1 GeV is sent to a detector at a distance of L~500 meters. In general, neutrinos can change their flavor with the probability oscillating as P ~ sin^2(Δm^2 L/4E). If the LSND excess is really due to neutrino oscillations, one expects to observe electron neutrino appearance in the MiniBooNE detector, given that L/E is similar in the two experiments. Originally, MiniBooNE was hoping to see a smoking gun in the form of an electron neutrino excess oscillating as a function of L/E, that is, peaking at intermediate energies and then decreasing towards lower energies (possibly with several wiggles). That didn't happen. Instead, MiniBooNE finds an excess increasing towards low energies with a similar shape as the backgrounds. Thus the confusion lingers on: the LSND anomaly has neither been killed nor robustly confirmed.
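
To see why the two experiments probe the same physics, here is a small illustrative Python sketch using the standard two-flavor formula in practical units, P = sin^2(2θ) sin^2(1.27 Δm^2[eV^2] L[km]/E[GeV]). The baselines, energies and oscillation parameters below are rough ballpark values chosen for illustration, not numbers taken from this post.

    import math

    def p_osc(sin2_2theta, dm2_eV2, L_km, E_GeV):
        # Two-flavor appearance probability in practical units.
        return sin2_2theta * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

    # Illustrative sterile-neutrino parameters of the kind discussed below.
    sin2_2theta, dm2 = 0.01, 0.5   # effective mixing and Δm^2 in eV^2

    for name, L_km, E_GeV in [("LSND", 0.030, 0.040), ("MiniBooNE", 0.5, 0.8)]:
        P = p_osc(sin2_2theta, dm2, L_km, E_GeV)
        print(f"{name:10s}  L/E = {L_km/E_GeV:4.2f} km/GeV   P ~ {P:.1e}")

    # Both experiments sit at L/E of order 1 km/GeV, which is why MiniBooNE
    # was designed to test the LSND signal.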

In spite of these doubts, the LSND and MiniBooNE anomalies continue to arouse interest. This is understandable: as the results do not fit the 3-flavor framework, if confirmed they would prove the existence of new physics beyond the Standard Model. The simplest fix would be to introduce a sterile neutrino νs with a mass in the eV ballpark, in which case MiniBooNE would be observing the νμ→νs→νe oscillation chain. With the recent MiniBooNE update the evidence for electron neutrino appearance has increased to 4.8σ, which has stirred some commotion on Twitter and in the blogosphere. However, I find the excitement a bit misplaced. The anomaly is not really new: similar results showing a 3.8σ excess of νe-like events were already published in 2012. The increase of the significance is hardly relevant: at this point we know anyway that the excess is not a statistical fluke, while a systematic effect due to underestimated backgrounds would also lead to a growing anomaly. If anything, there are now fewer reasons than in 2012 to believe in the sterile neutrino origin of the MiniBooNE anomaly, as I will argue in the following.

What has changed since 2012? First, there are new constraints on νe appearance from the OPERA experiment (yes, this OPERA), which did not see any excess of νe in the CERN-to-Gran-Sasso νμ beam. This excludes a large chunk of the relevant parameter space corresponding to large mixing angles between the active and sterile neutrinos. From this point of view, the MiniBooNE update actually puts more stress on the sterile neutrino interpretation by slightly shifting the preferred region towards larger mixing angles... Nevertheless, a not-too-horrible fit to all appearance experiments can still be achieved in the region with Δm^2~0.5 eV^2 and the mixing angle sin^2(2θ) of order 0.01.

Next, the cosmological constraints have become more stringent. The CMB observations by the Planck satellite do not leave room for an additional neutrino species in the early universe. But for the parameters preferred by LSND and MiniBooNE, the sterile neutrino would be abundantly produced in the hot primordial plasma, thus violating the Planck constraints. To avoid it, theorists need to deploy a battery of  tricks (for example, large sterile-neutrino self-interactions), which makes realistic models rather baroque.

But the killer punch is delivered by disappearance analyses. Benjamin Franklin famously said that only two things in this world were certain: death and probability conservation. Thus whenever an electron neutrino appears in a νμ beam, a muon neutrino must disappear. However, the latter process is severely constrained by long-baseline neutrino experiments, and recently the limits have been further strengthened thanks to the MINOS and IceCube collaborations. A recent combination of the existing disappearance results is available in this paper. In the 3+1 flavor scheme, the probability of a muon neutrino transforming into an electron one in a short-baseline experiment is

  P(νμ→νe) ≈ 4 |Ue4|^2 |Uμ4|^2 sin^2(Δm^2 L/4E),

where U is the 4x4 neutrino mixing matrix. The Uμ4 matrix element also controls the νμ survival probability,

  P(νμ→νμ) ≈ 1 − 4 |Uμ4|^2 (1 − |Uμ4|^2) sin^2(Δm^2 L/4E).

The νμ disappearance data from MINOS and IceCube imply |Uμ4| ≼ 0.1, while |Ue4| ≼ 0.25 from solar neutrino observations. All in all, the disappearance results imply that the effective mixing angle sin^2(2θ) controlling the νμ→νs→νe oscillation must be much smaller than the 0.01 required to fit the MiniBooNE anomaly. The disagreement between the appearance and disappearance data already existed before, but has actually been made worse by the MiniBooNE update.
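
As a quick numerical illustration of that tension, one can plug the quoted bounds into the relation sin^2(2θ_eff) = 4 |Ue4|^2 |Uμ4|^2 that follows from the appearance formula above. This is only a rough consistency check, not an analysis of the actual data.

    # Quick check of the appearance/disappearance tension in the 3+1 scheme.
    U_mu4_max = 0.1     # |Uμ4| bound from MINOS/IceCube disappearance data quoted above
    U_e4_max = 0.25     # |Ue4| bound from solar neutrino observations quoted above

    sin2_2theta_max = 4 * U_e4_max**2 * U_mu4_max**2
    print(f"max allowed sin^2(2θ_eff) ~ {sin2_2theta_max:.1e}")   # ~ 2.5e-3
    # The MiniBooNE fit prefers sin^2(2θ) of order 0.01, a factor of a few above this.
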
So the hypothesis of a 4th sterile neutrino does not stand up to scrutiny as an explanation of the MiniBooNE anomaly. It does not mean that there is no other possible explanation (more sterile neutrinos? non-standard interactions? neutrino decays?). However, any realistic model will have to delve deep into the crazy side in order to satisfy the constraints from other neutrino experiments, flavor physics, and cosmology. Fortunately, the current confusing situation should not last forever. The MiniBooNE photon background from π0 decays may be clarified by the ongoing MicroBooNE experiment. On the timescale of a few years the controversy should be closed by the SBN program in Fermilab, which will add one near and one far detector to the MicroBooNE beamline. Until then... years of painful experience have taught us to assign a high prior to the Standard Model hypothesis. Currently, by far the most plausible explanation of the existing data is an experimental error on the part of the MiniBooNE collaboration.

by Mad Hatter (noreply@blogger.com) at June 07, 2018 01:20 PM

June 01, 2018

Jester - Resonaances

WIMPs after XENON1T
After today's update from the XENON1T experiment, the situation on the WIMP dark matter direct detection front is as follows

A WIMP can be loosely defined as a dark matter particle with mass in the 1 GeV - 10 TeV range and significant interactions with ordinary matter. Historically, WIMP searches have stimulated enormous interest because this type of dark matter can be easily realized in models with low scale supersymmetry. Now that we are older and wiser, many physicists would rather put their money on other realizations, such as axions, MeV dark matter, or primordial black holes. Nevertheless, WIMPs remain a viable possibility that should be further explored.
 
To detect WIMPs heavier than a few GeV, currently the most successful strategy is to use huge detectors filled with xenon atoms, hoping one of them is hit by a passing dark matter particle. XENON1T beats the competition from the LUX and Panda-X experiments because it has a bigger tank. Technologically speaking, we have come a long way in the last 30 years. XENON1T is now sensitive to 40 GeV WIMPs interacting with nucleons with a cross section of 40 yoctobarn (1 yb = 10^-12 pb = 10^-48 cm^2). This is 6 orders of magnitude better than what the first direct detection experiment in the Homestake mine could achieve back in the 80s. Compared to last year, the limit is better by a factor of two at the most sensitive mass point. At high mass the improvement is somewhat smaller than expected due to a small excess of events observed by XENON1T, which is probably just a 1 sigma upward fluctuation of the background.

What we are learning about WIMPs is how they can (or cannot) interact with us. Of course, at this point in the game we don't see qualitative progress, but rather incremental quantitative improvements. One possible scenario is that WIMPs experience one of the Standard Model forces, such as the weak or the Higgs force. The former option is strongly constrained by now. If WIMPs had interacted in the same way as neutrinos do, that is by exchanging a Z boson, they would have been found already in the Homestake experiment. XENON1T is probing models where the dark matter coupling to the Z boson is suppressed by a factor cχ ~ 10^-3 - 10^-4 compared to that of an active neutrino. On the other hand, dark matter could be participating in weak interactions only by exchanging W bosons, which can happen for example when it is part of an SU(2) triplet. In the plot you can see that XENON1T is approaching but not yet excluding this interesting possibility. As for models using the Higgs force, XENON1T is probing the (subjectively) most natural parameter space where WIMPs couple with order one strength to the Higgs field.

And the arms race continues. The search in XENON1T will go on until the end of this year, although at this point a discovery is extremely unlikely. Further progress is expected on a timescale of a few years thanks to the next generation xenon detectors XENONnT and LUX-ZEPLIN, which should achieve yoctobarn sensitivity. DARWIN may be the ultimate experiment along these lines: not only is there no prefix smaller than yocto, but it will also reach the irreducible background from atmospheric neutrinos, after which new detection techniques will be needed. For dark matter masses closer to 1 GeV, several orders of magnitude of pristine parameter space will be covered by the SuperCDMS experiment. Until then we are kept in suspense. Is dark matter made of WIMPs? And if yes, does it stick out above the neutrino sea?

by Mad Hatter (noreply@blogger.com) at June 01, 2018 05:30 PM

Tommaso Dorigo - Scientificblogging

MiniBoone Confirms Neutrino Anomaly
Neutrinos, the most mysterious and fascinating of all elementary particles, continue to puzzle physicists. 20 years after the experimental verification of a long-debated effect whereby the three neutrino species can "oscillate", changing their nature by turning one into the other as they propagate in vacuum and in matter, the jury is still out to decide what really is the matter with them. And a new result by the MiniBoone collaboration is stirring waters once more.

by Tommaso Dorigo at June 01, 2018 12:49 PM

May 26, 2018

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

A festschrift at UCC

One of my favourite academic traditions is the festschrift, a conference convened to honour the contribution of a senior academic. In a sense, it’s academia’s version of an Oscar for lifetime achievement, as scholars from all around the world gather to pay tribute to their former mentor, colleague or collaborator.

Festschrifts tend to be very stimulating meetings, as the diverging careers of former students and colleagues typically make for a diverse set of talks. At the same time, there is usually a unifying theme based around the specialism of the professor being honoured.

And so it was at NIALLFEST this week, as many of the great and the good from the world of Einstein’s relativity gathered at University College Cork to pay tribute to Professor Niall O’Murchadha, a theoretical physicist in UCC’s Department of Physics noted internationally for seminal contributions to general relativity. Some measure of Niall’s influence can be seen from the number of well-known theorists at the conference, including major figures such as Bob Wald, Bill Unruh, Edward Malec and Kip Thorne (the latter was recently awarded the Nobel Prize in Physics for his contribution to the detection of gravitational waves). The conference website can be found here and the programme is here.

University College Cork: probably the nicest college campus in Ireland

As expected, we were treated to a series of high-level talks on diverse topics, from black hole collapse to analysis of high-energy jets from active galactic nuclei, from the initial value problem in relativity to the search for dark matter (slides for my own talk can be found here). To pick one highlight, Kip Thorne’s reminiscences of the forty-year search for gravitational waves made for a fascinating presentation, from his description of early designs of the LIGO interferometer to the challenge of getting funding for early prototypes – not to mention his prescient prediction that the most likely chance of success was the detection of a signal from the merger of two black holes.

All in all, a very stimulating conference. Most entertaining of all were the speakers’ recollections of Niall’s working methods and his interaction with students and colleagues over the years. Like a great piano teacher of old, one great professor leaves a legacy of critical thinkers dispersed around the world, and their students in turn inspire the next generation!

 

by cormac at May 26, 2018 12:16 AM

May 21, 2018

Andrew Jaffe - Leaves on the Line

Leon Lucy, R.I.P.

I have the unfortunate duty of using this blog to announce the death a couple of weeks ago of Professor Leon B Lucy, who had been a Visiting Professor working here at Imperial College from 1998.

Leon got his PhD in the early 1960s at the University of Manchester, and after postdoctoral positions in Europe and the US, worked at Columbia University and the European Southern Observatory over the years, before coming to Imperial. He made significant contributions to the study of the evolution of stars, understanding in particular how they lose mass over the course of their evolution, and how very close binary stars interact and evolve inside their common envelope of hot gas.

Perhaps most importantly, early in his career Leon realised how useful computers could be in astrophysics. He made two major methodological contributions to astrophysical simulations. First, he realised that by simulating randomised trajectories of single particles, he could take into account more physical processes that occur inside stars. This is now called “Monte Carlo Radiative Transfer” (scientists often use the term “Monte Carlo” — after the European gambling capital — for techniques using random numbers). He also invented the technique now called smoothed-particle hydrodynamics which models gases and fluids as aggregates of pseudo-particles, now applied to models of stars, galaxies, and the large scale structure of the Universe, as well as many uses outside of astrophysics.

Leon’s other major numerical contributions comprise advanced techniques for interpreting the complicated astronomical data we get from our telescopes. In this realm, he was most famous for developing the methods, now known as Lucy-Richardson deconvolution, that were used for correcting the distorted images from the Hubble Space Telescope, before NASA was able to send a team of astronauts to install correcting lenses in the early 1990s.

For all of this work Leon was awarded the Gold Medal of the Royal Astronomical Society in 2000. Since then, Leon kept working on data analysis and stellar astrophysics — even during his illness, he asked me to help organise the submission and editing of what turned out to be his final papers, on extracting information on binary-star orbits and (a subject dear to my heart) the statistics of testing scientific models.

Until the end of last year, Leon was a regular presence here at Imperial, always ready to contribute an occasionally curmudgeonly but always insightful comment on the science (and sociology) of nearly any topic in astrophysics. We hope that we will be able to appropriately memorialise his life and work here at Imperial and elsewhere. He is survived by his wife and daughter. He will be missed.

by Andrew at May 21, 2018 09:27 AM

May 14, 2018

Sean Carroll - Preposterous Universe

Intro to Cosmology Videos

In completely separate video news, here are videos of lectures I gave at CERN several years ago: “Cosmology for Particle Physicists” (May 2005). These are slightly technical — at the very least they presume you know calculus and basic physics — but are still basically accurate despite their age.

  1. Introduction to Cosmology
  2. Dark Matter
  3. Dark Energy
  4. Thermodynamics and the Early Universe
  5. Inflation and Beyond

Update: I originally linked these from YouTube, but apparently they were swiped from this page at CERN, and have been taken down from YouTube. So now I’m linking directly to the CERN copies. Thanks to commenters Bill Schempp and Matt Wright.

by Sean Carroll at May 14, 2018 07:09 PM

May 10, 2018

Sean Carroll - Preposterous Universe

User-Friendly Naturalism Videos

Some of you might be familiar with the Moving Naturalism Forward workshop I organized way back in 2012. For two and a half days, an interdisciplinary group of naturalists (in the sense of “not believing in the supernatural”) sat around to hash out the following basic question: “So we don’t believe in God, what next?” How do we describe reality, how can we be moral, what are free will and consciousness, those kinds of things. Participants included Jerry Coyne, Richard Dawkins, Terrence Deacon, Simon DeDeo, Daniel Dennett, Owen Flanagan, Rebecca Newberger Goldstein, Janna Levin, Massimo Pigliucci, David Poeppel, Nicholas Pritzker, Alex Rosenberg, Don Ross, and Steven Weinberg.

Happily we recorded all of the sessions to video, and put them on YouTube. Unhappily, those were just unedited proceedings of each session — so ten videos, at least an hour and a half each, full of gems but without any very clear way to find them if you weren’t patient enough to sift through the entire thing.

No more! Thanks to the heroic efforts of Gia Mora, the proceedings have been edited down to a number of much more accessible and content-centered highlights. There are over 80 videos (!), with a median length of maybe 5 minutes, though they range up to about 20 minutes and down to less than one. Each video centers on a particular idea, theme, or point of discussion, so you can dive right into whatever particular issues you may be interested in. Here, for example, is a conversation on “Mattering and Secular Communities,” featuring Rebecca Goldstein, Dan Dennett, and Owen Flanagan.

The videos can be seen on the workshop web page, or on my YouTube channel. They’re divided into categories:

A lot of good stuff in there. Enjoy!

by Sean Carroll at May 10, 2018 02:48 PM