Particle Physics Planet

October 23, 2016

John Baez - Azimuth

Open and Interconnected Systems

Brendan Fong finished his thesis a while ago, and here it is!

• Brendan Fong, The Algebra of Open and Interconnected Systems, Ph.D. thesis, Department of Computer Science, University of Oxford, 2016.

This material is close to my heart, since I’ve informally served as Brendan’s advisor since 2011, when he came to Singapore to work with me on chemical reaction networks. We’ve been collaborating intensely ever since. I just looked at our correspondence, and I see it consists of 880 emails!

At some point I gave him a project: describe the category whose morphisms are electrical circuits. He took up the challenge much more ambitiously than I’d ever expected, developing powerful general frameworks to solve not only this problem but also many others. He did this in a number of papers, most of which I’ve already discussed:

• Brendan Fong, Decorated cospans, Th. Appl. Cat. 30 (2015), 1096–1120. (Blog article here.)

• Brendan Fong and John Baez, A compositional framework for passive linear circuits. (Blog article here.)

• Brendan Fong, John Baez and Blake Pollard, A compositional framework for Markov processes. (Blog article here.)

• Brendan Fong and Brandon Coya, Corelations are the prop for extraspecial commutative Frobenius monoids. (Blog article here.)

• Brendan Fong, Paolo Rapisarda and Paweł Sobociński, A categorical approach to open and interconnected dynamical systems.

But Brendan’s thesis is the best place to see a lot of this material in one place, integrated and clearly explained.

I wanted to write a summary of his thesis. But since he did that himself very nicely in the preface, I’m going to be lazy and just quote that! (I’ll leave out the references, which are crucial in scholarly prose but a bit off-putting in a blog.)


This is a thesis in the mathematical sciences, with emphasis on the mathematics. But before we get to the category theory, I want to say a few words about the scientific tradition in which this thesis is situated.

Mathematics is the language of science. Twinned so intimately with physics, over the past centuries mathematics has become a superb—indeed, unreasonably effective—language for understanding planets moving in space, particles in a vacuum, the structure of spacetime, and so on. Yet, while Wigner speaks of the unreasonable effectiveness of mathematics in the natural sciences, equally eminent mathematicians, not least Gelfand, speak of the unreasonable ineffectiveness of mathematics in biology and related fields. Why such a difference?

A contrast between physics and biology is that while physical systems can often be studied in isolation—the proverbial particle in a vacuum—biological systems are necessarily situated in their environment. A heart belongs in a body, an ant in a colony. One of the first to draw attention to this contrast was Ludwig von Bertalanffy, biologist and founder of general systems theory, who articulated the difference as one between closed and open systems:

Conventional physics deals only with closed systems, i.e. systems which are considered to be isolated from their environment. […] However, we find systems which by their very nature and definition are not closed systems. Every living organism is essentially an open system. It maintains itself in a continuous inflow and outflow, a building up and breaking down of components, never being, so long as it is alive, in a state of chemical and thermodynamic equilibrium but maintained in a so-called ‘steady state’ which is distinct from the latter.

While the ambitious generality of general systems theory has proved difficult, von Bertalanffy’s philosophy has had great impact in his home field of biology, leading to the modern field of systems biology. Half a century later, Denis Noble, another great pioneer of systems biology and the originator of the first mathematical model of a working heart, describes the shift as one from reduction to integration.

Systems biology […] is about putting together rather than taking apart, integration rather than reduction. It requires that we develop ways of thinking about integration that are as rigorous as our reductionist programmes, but different. It means changing our philosophy, in the full sense of the term.

In this thesis we develop rigorous ways of thinking about integration or, as we refer to it, interconnection.

Interconnection and openness are tightly related. Indeed, openness implies that a system may be interconnected with its environment. But what is an environment, if not comprised of other systems? Thus the study of open systems becomes the study of how a system changes under interconnection with other systems.

To model this, we must begin by creating language to describe the interconnection of systems. While reductionism hopes that phenomena can be explained by reducing them to “elementary units investigable independently of each other” (in the words of von Bertalanffy), this philosophy of integration introduces as an additional and equal priority the investigation of the way these units are interconnected. As such, this thesis is predicated on the hope that the meaning of an expression in our new language is determined by the meanings of its constituent expressions together with the syntactic rules combining them. This is known as the principle of compositionality.

Also commonly known as Frege’s principle, the principle of compositionality both dates back to Ancient Greek and Vedic philosophy, and is still the subject of active research today. More recently, through the work of Montague in natural language semantics and Strachey and Scott in programming language semantics, the principle of compositionality has found formal expression as the dictum that the interpretation of a language should be given by a homomorphism from an algebra of syntactic representations to an algebra of semantic objects. We too shall follow this route.

The question then arises: what do we mean by algebra? This mathematical question leads us back to our scientific objectives: what do we mean by system? Here we must narrow, or at least define, our scope. We give some examples. The investigations of this thesis began with electrical circuits and their diagrams, and we will devote significant time to exploring their compositional formulation. We discussed biological systems above, and our notion of system includes these, modelled say in the form of chemical reaction networks or Markov processes, or the compartmental models of epidemiology, population biology, and ecology. From computer science, we consider Petri nets, automata, logic circuits, and the like. More abstractly, our notion of system encompasses matrices and systems of differential equations.

Drawing together these notions of system are well-developed diagrammatic representations based on network diagrams—that is, topological graphs. We call these network-style diagrammatic languages. In the abstract, by ‘system’ we shall simply mean that which can be represented by a box with a collection of terminals, perhaps of different types, through which it interfaces with the surroundings. Concretely, one might envision a circuit diagram with terminals, such as


The algebraic structure of interconnection is then simply the structure that results from the ability to connect terminals of one system with terminals of another. This graphical approach motivates our language of interconnection: indeed, these diagrams will be the expressions of our language.

We claim that the existence of a network-style diagrammatic language to represent a system implies that interconnection is inherently important in understanding the system. Yet, while each of these example notions of system is well-studied in and of itself, their compositional, or algebraic, structure has received scant attention. In this thesis, we study an algebraic structure called a ‘hypergraph category’, and argue that this is the relevant algebraic structure for modelling interconnection of open systems.

Given these pre-existing diagrammatic formalisms and our visual intuition, constructing algebras of syntactic representations is thus rather straightforward. The semantics and their algebraic structure are more subtle.

In some sense our semantics is already given to us too: in studying these systems as closed systems, scientists have already formalised the meaning of these diagrams. But we have shifted from a closed perspective to an open one, and we need our semantics to also account for points of interconnection.

Taking inspiration from Willems’ behavioural approach and Deutsch’s constructor theory, in this thesis I advocate the following position. First, at each terminal of an open system we may make measurements appropriate to the type of terminal. Given a collection of terminals, the universum is then the set of all possible measurement outcomes. Each open system has a collection of terminals, and hence a universum. The semantics of an open system is the subset of measurement outcomes on the terminals that are permitted by the system. This is known as the behaviour of the system.

For example, consider a resistor of resistance r. This has two terminals—the two ends of the resistor—and at each terminal, we may measure the potential and the current. Thus the universum of this system is the set \mathbb{R}\oplus\mathbb{R}\oplus\mathbb{R}\oplus\mathbb{R}, where the summands represent respectively the potentials and currents at each of the two terminals. The resistor is governed by Kirchhoff’s current law, or conservation of charge, and Ohm’s law. Conservation of charge states that the current flowing into one terminal must equal the current flowing out of the other terminal, while Ohm’s law states that this current will be proportional to the potential difference, with constant of proportionality 1/r. Thus the behaviour of the resistor is the set

\displaystyle{   \big\{\big(\phi_1,\phi_2,     -\tfrac1r(\phi_2-\phi_1),\tfrac1r(\phi_2-\phi_1)\big)\,\big\vert\,     \phi_1,\phi_2 \in \mathbb{R}\big\} }

Note that in this perspective a law such as Ohm’s law is a mechanism for partitioning behaviours into possible and impossible behaviours.
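
To make this concrete, here is a minimal Python sketch (an illustration, not code from the thesis) that evaluates the resistor’s behaviour: given the two boundary potentials, Ohm’s law and conservation of charge fix both currents, so every admissible measurement outcome is determined by (\phi_1, \phi_2).

```python
def resistor_behaviour(r, phi1, phi2):
    """Admissible measurement outcome (phi1, phi2, i1, i2) for an ideal
    resistor of resistance r: Ohm's law fixes the current from the potential
    difference, and conservation of charge forces i1 = -i2."""
    i = (phi2 - phi1) / r
    return (phi1, phi2, -i, i)  # matches the behaviour set displayed above

# Example: a 2-ohm resistor with a 1 V potential difference across it.
print(resistor_behaviour(2.0, 0.0, 1.0))  # (0.0, 1.0, -0.5, 0.5)
```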

Interconnection of terminals then asserts the identification of the variables at the identified terminals. Fixing some notion of open system and subsequently an algebra of syntactic representations for these systems, our approach, based on the principle of compositionality, requires this to define an algebra of semantic objects and a homomorphism from syntax to semantics. The first part of this thesis develops the mathematical tools necessary to pursue this vision for modelling open systems and their interconnection.

The next goal is to demonstrate the efficacy of this philosophy in applications. At core, this work is done in the faith that the right language allows deeper insight into the underlying structure. Indeed, after setting up such a language for open systems there are many questions to be asked: Can we find a sound and complete logic for determining when two syntactic expressions have the same semantics? Suppose we have systems that have some property, for example controllability. In what ways can we interconnect controllable systems so that the combined system is also controllable? Can we compute the semantics of a large system quicker by computing the semantics of subsystems and then composing them? If I want a given system to achieve a specified trajectory, can we interconnect another system to make it do so? How do two different notions of system, such as circuit diagrams and signal flow graphs, relate to each other? Can we find homomorphisms between their syntactic and semantic algebras? In the second part of this thesis we explore some applications in depth, providing answers to questions of the above sort.

Outline of the thesis

The thesis is divided into two parts. Part I, comprising Chapters 1 to 4, focuses on mathematical foundations. In it we develop the theory of hypergraph categories and a powerful tool for constructing and manipulating them: decorated corelations. Part II, comprising Chapters 5 to 7, then discusses applications of this theory to examples of open systems.

The central refrain of this thesis is that the syntax and semantics of network-style diagrammatic languages can be modelled by hypergraph categories. These are introduced in Chapter 1. Hypergraph categories are symmetric monoidal categories in which every object is equipped with the structure of a special commutative Frobenius monoid in a way compatible with the monoidal product. As we will rely heavily on properties of monoidal categories, their functors, and their graphical calculus, we begin with a whirlwind review of these ideas. We then provide a definition of hypergraph categories and their functors, a strictification theorem, and an important example: the category of cospans in a category with finite colimits.

A cospan is a pair of morphisms

X \to N \leftarrow Y

with a common codomain. In Chapter 2 we introduce the idea of a ‘decorated cospan’, which equips the apex N with extra structure. Our motivating example is cospans of finite sets decorated by graphs, as in this picture:

Here graphs are a proxy for expressions in a network-style diagrammatic language. To give a bit more formal detail, let \mathcal C be a category with finite colimits, writing its coproduct as +, and let (\mathcal D, \otimes) be a braided monoidal category. Decorated cospans provide a method of producing a hypergraph category from a lax braided monoidal functor

F\colon (\mathcal C,+) \to (\mathcal D, \otimes)

The objects of these categories are simply the objects of \mathcal C, while the morphisms are pairs comprising a cospan X \rightarrow N \leftarrow Y in \mathcal C together with an element I \to FN in \mathcal D—the so-called decoration. We will also describe how to construct hypergraph functors between decorated cospan categories. In particular, this provides a useful tool for constructing a hypergraph category that captures the syntax of a network-style diagrammatic language.
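
As a concrete aside (an illustrative sketch, not code from the thesis), composition of plain cospans of finite sets can be computed by gluing the two apexes along the shared foot, i.e. by a pushout; a union-find structure does the identification. In Python:

```python
class Cospan:
    """A cospan of finite sets {0..m-1} -> {0..apex-1} <- {0..n-1},
    recorded as the apex size plus the two leg functions (as lists)."""
    def __init__(self, apex, left, right):
        self.apex, self.left, self.right = apex, left, right

def compose(f, g):
    """Compose f: X -> N1 <- Y with g: Y -> N2 <- Z by pushout: glue the
    disjoint union N1 + N2 along Y with a union-find, then relabel classes."""
    assert len(f.right) == len(g.left), "middle feet must match"
    n = f.apex + g.apex
    parent = list(range(n))  # union-find over N1 + N2 (g's apex offset by f.apex)

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for y in range(len(f.right)):            # identify the two images of each y in Y
        parent[find(f.right[y])] = find(f.apex + g.left[y])

    reps = sorted({find(i) for i in range(n)})
    relabel = {r: k for k, r in enumerate(reps)}
    return Cospan(len(reps),
                  [relabel[find(x)] for x in f.left],            # X -> pushout
                  [relabel[find(f.apex + z)] for z in g.right])  # Z -> pushout

# Composing the single-wire cospan {0} -> {0} <- {0} with itself returns it:
wire = Cospan(1, [0], [0])
c = compose(wire, wire)
print(c.apex, c.left, c.right)  # 1 [0] [0]
```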

Having developed a method to construct a category where the morphisms are expressions in a diagrammatic language, we turn our attention to categories of semantics. This leads us to the notion of a corelation, to which we devote Chapter 3. Given a factorisation system (\mathcal{E},\mathcal{M}) on a category \mathcal{C}, we define a corelation to be a cospan X \to N \leftarrow Y such that the copairing of the two maps, a map X+Y \to N, is a morphism in \mathcal{E}. Factorising maps X+Y \to N using the factorisation system leads to a notion of equivalence on cospans, and this helps us describe when two diagrams are equivalent. Like cospans, corelations form hypergraph categories.

In Chapter 4 we decorate corelations. Like decorated cospans, decorated corelations are corelations together with some additional structure on the apex. We again use a lax braided monoidal functor to specify the sorts of extra structure allowed. Moreover, decorated corelations too form the morphisms of a hypergraph category. The culmination of our theoretical work is to show that every hypergraph category and every hypergraph functor can be constructed using decorated corelations. This implies that we can use decorated corelations to construct a semantic hypergraph category for any network-style diagrammatic language, as well as a hypergraph functor from its syntactic category that interprets each diagram. We also discuss how the intuitions behind decorated corelations guide construction of these categories and functors.

Having developed these theoretical tools, in the second part we turn to demonstrating that they have useful applications. Chapter 5 uses corelations to formalise signal flow diagrams representing linear time-invariant discrete dynamical systems as morphisms in a category. Our main result gives an intuitive sound and fully complete equational theory for reasoning about these linear time-invariant systems. Using this framework, we derive a novel structural characterisation of controllability, and consequently provide a methodology for analysing controllability of networked and interconnected systems.

Chapter 6 studies passive linear networks. Passive linear networks are used in a wide variety of engineering applications, but the best studied are electrical circuits made of resistors, inductors and capacitors. The goal is to construct what we call the ‘black box functor’, a hypergraph functor from a category of open circuit diagrams to a category of behaviours of circuits. We construct the former as a decorated cospan category, with each morphism a cospan of finite sets decorated by a circuit diagram on the apex. In this category, composition describes the process of attaching the outputs of one circuit to the inputs of another. The behaviour of a circuit is the relation it imposes between currents and potentials at its terminals. The space of these currents and potentials naturally has the structure of a symplectic vector space, and the relation imposed by a circuit is a Lagrangian linear relation. Thus, the black box functor goes from our category of circuits to the category of symplectic vector spaces and Lagrangian linear relations. Decorated corelations provide a critical tool for constructing these hypergraph categories and the black box functor.

Finally, in Chapter 7 we mention two further research directions. The first is the idea of a ‘bound colimit’, which aims to describe why epi-mono factorisation systems are useful for constructing corelation categories of semantics for open systems. The second research direction pertains to applications of the black box functor for passive linear networks, discussing the work of Jekel on the inverse problem for electric circuits and the work of Baez, Fong, and Pollard on open Markov processes.

by John Baez at October 23, 2016 11:50 PM

Christian P. Robert - xi'an's og


One approach to random number generation that had always intrigued me is Kinderman and Monahan’s (1977) ratio-of-uniforms method. The method is based on the result that the uniform distribution on the set A of (u,v)’s in ℝ⁺×X such that

0 < u ≤ √ƒ(v/u)
induces the distribution with density proportional to ƒ on V/U. Hence the name. The proof is straightforward and the result can be seen as a consequence of the fundamental lemma of simulation, namely that simulating from the uniform distribution on the set B of (w,x)’s in ℝ⁺×X such that

0 < w ≤ ƒ(x)
induces the marginal distribution with density proportional to ƒ on X. There is no mathematical issue with this result, but I have difficulty picturing the construction of efficient random number generators based on this principle.

I thus took the opportunity of the second season of [the Warwick reading group on] Non-uniform random variate generation to look anew at this approach. (Note that the book is freely available on Luc Devroye’s website.) The first thing I considered is the shape of the set A. Which has nothing intuitive about it! Luc then mentions (p.195) that the boundary of A is given by

u(x) = √ƒ(x),   v(x) = x√ƒ(x)
which then leads to bounding both ƒ and x→x²ƒ(x) to create a box around A and an accept-reject strategy, but I have trouble with this result without making further assumptions about ƒ… Using a two-component normal mixture as a benchmark, I found bounds on u(.) and v(.) and simulated a large number of points within the box, ending up with the graph above, which confirms that the accepted (u,v)’s do fall within this boundary. And the same holds with a more ambitious mixture:


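For readers who want to experiment, here is a rough Python sketch (not xi'an's R code) of the boxed accept-reject version of the ratio-of-uniforms method for a two-component normal mixture; the box is obtained numerically by bounding √ƒ(x) and x√ƒ(x) on a grid, in the spirit of the bounds on ƒ and x²ƒ(x) mentioned above.

```python
import numpy as np

def mixture_density(x, w=0.3, mu=(0.0, 4.0), sigma=(1.0, 0.5)):
    """Unnormalised two-component normal mixture (an arbitrary benchmark)."""
    comp = lambda x, m, s: np.exp(-0.5 * ((x - m) / s) ** 2) / s
    return w * comp(x, mu[0], sigma[0]) + (1 - w) * comp(x, mu[1], sigma[1])

def ratio_of_uniforms(f, n, grid, seed=0):
    """Sample from the density proportional to f: draw (u, v) uniformly in a
    box containing A = {(u, v): 0 < u <= sqrt(f(v/u))}, keep the pairs that
    satisfy the inequality, and return the accepted ratios v/u."""
    fx = f(grid)
    u_max = np.sqrt(fx).max()              # numerical bound on u(x) = sqrt(f(x))
    v_min = (grid * np.sqrt(fx)).min()     # numerical bounds on v(x) = x*sqrt(f(x))
    v_max = (grid * np.sqrt(fx)).max()
    rng = np.random.default_rng(seed)
    u = rng.uniform(1e-12, u_max, n)       # avoid division by exactly zero
    v = rng.uniform(v_min, v_max, n)
    accept = u <= np.sqrt(f(v / u))
    return (v / u)[accept]

samples = ratio_of_uniforms(mixture_density, 100_000, np.linspace(-6, 8, 4001))
print(len(samples), samples.mean())
```
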
Filed under: Books, pictures, R, Statistics Tagged: Luc Devroye, Non-Uniform Random Variate Generation, random number generation, ratio of uniform algorithm, University of Warwick

by xi'an at October 23, 2016 10:16 PM

October 22, 2016

Christian P. Robert - xi'an's og

Fool’s quest [book review]

Although I bought this second volume in the Fitz and the Fool trilogy quite a while ago, I only came to read it very recently. And enjoyed it unreservedly! While the novel builds upon the universe Hobb created in the liveship traders trilogy (forget the second trilogy!) and the Assassin and Fool trilogies, the story is compelling enough to bring out excitement and longing for further adventures of Fitz and the Fool. Many characters that were introduced in the earlier volume suddenly take on substance and meaning, while the main characters are no longer heroes of past eras, but also acquire further depth and subtlety. Even long-lasting ones like Chade. I cannot tell whether this new dimension of the plights affecting the Six Duchies and its ruler, King Verity, was conceived from the start or came later to the author, but it really fits seamlessly and increases by several orders of magnitude the epic feeling of the creation. Although it is hard to rank this book against the very first ones, like Royal Assassin, I feel this is truly one of the best of Hobb’s books, with the right mixture of action, plotting, missed opportunities and ambiguous angles about the main characters. So many characters truly come to life in this volume that I bemoan the sluggish pace of the first one even more now. While one could see Fool’s Quest as the fourteenth book in the Realm of the Elderlings series, and hence hint at senseless exploitation of the same saga, there are just too many new threads and perspectives there to maintain this posture. A wonderful book, and a rarity for a middle volume to be so. I am clearly looking forward to the third instalment!

Filed under: Books, Kids Tagged: book review, Elderlings, Fool's Assassin, Fool's Quest, heroic fantasy, Robin Hobb, Royal Assassin, Six Duchies

by xi'an at October 22, 2016 10:16 PM

Peter Coles - In the Dark

The Case of Bode versus Mundell

Getting ready to come in and help with today’s Undergraduate Open Day at Cardiff University, I checked Twitter this morning and found a number of tweets about a shocking news story that I feel obliged to comment on. The astronomy community in the United Kingdom is fairly small and relatively close-knit, which makes this case especially troubling, but it does have far wider ramifications in the University sector and beyond.

I don’t usually link to stories in the Daily Mail, but you can find the item here. The report relates to a libel action taken by Professor Mike Bode of Liverpool John Moores University against Professor Carole Mundell, a former employee of that institution who is now Head of the Astrophysics group at the University of Bath.  Carole Mundell is a highly regarded extragalactic observational astronomer who works primarily on gamma-ray bursters and their implications for cosmology.

The case revolves around allegations of sexual harassment (and, according to the Daily Mail, sexual assault) made against another former employee of Liverpool John Moores, Dr Chris Simpson, by a female student, and the allegation by Professor Mundell that Professor Bode wrote a misleading reference on behalf of Dr Simpson that omitted mention of the pending allegations and allowed him to move to a post in South Africa before the investigations into them could be concluded. Professor Bode claimed that this allegation was defamatory and sued Professor Mundell for libel and slander.

The Daily Mail story is not very illuminating as to the substance of the litigation but a full account of the case of Bode versus Mundell can be found here. In fact the case did not go to a full trial hearing, but summarily concluded before getting that far on the grounds that it would certainly fail; you can find more information about the judgment at the above link.

I had no idea any of this was going on until this morning. The story left me shocked, angry and dismayed but also full of admiration for Carole Mundell’s courage and determination in fighting this case. I think I’ll refrain from commenting further on the conduct of Mike Bode and Chris Simpson, or indeed that of Liverpool John Moores University, except to say that I hope this affair does not end with this failed action and that wider lessons can be learned from what happened in this case. I suspect that Liverpool John Moores is going to have some seriously bad publicity about this, but of greater concern to the wider community is the apparent failure of process in dealing with the allegations about Chris Simpson.

Coincidentally, yesterday saw the publication of a report by the Universities UK Task Force examining violence against women, harassment and hate crime affecting university students.  The document includes a number of rather harrowing case studies that make for difficult reading, but it’s an important document. A new set of guidelines has been issued relating to how to handle allegations that involve behaviour that may be criminal (such as sexual assault).  I urge anyone working in the HE sector (or pretty much anywhere else for that matter) to read the recommendations and to act on them.

I have no idea whether sexual harassment is an increasing problem on campus or whether what we are seeing is increased reporting, but it is clear that this is a serious problem in the UK’s universities. On paper, the policies and procedures universities already have in place should be able to deal with many of the issues raised in the Task Force report, but there seems to me to be an individual and perhaps even institutional reluctance to follow these procedures properly. The fear of reputational damage seems to be standing in the way of the genuine cultural change that we need.

Bode versus Mundell may well prove to be a landmark case that makes the astronomy community come to terms with the sexual harassment going on in its midst. As much as I’d like to be proved wrong, however, I don’t think it is likely to be the last scandal that will come to light. Carole Mundell’s courage may well lead to more people coming forward, and perhaps changes in practice will mean they are pursued more vigorously. Cultural transformation may prove to be a painful process, but it will prove to have been worth it.





by telescoper at October 22, 2016 02:36 PM

Lubos Motl - string vacua and pheno

A bump at LEP near \(30\GeV\): weak but possibly justifiable
In the morning, I was intrigued by a hep-ex paper by Arno Heister
Observation of an excess at \(30\GeV\) in the opposite sign di-muon spectra of \(Z\to b\bar b+X\) events recorded by the ALEPH experiment at LEP
To make the story short, he claims that the 1992-1995 data from the LEP (Large Electron-Positron) Collider at CERN contains a less-than-3-sigma bump at \(M_{\mu^+\mu^-}\sim 30.4\GeV\) indicating a boson of width \(1.8\GeV\).

Recall that the LHC is located in a tunnel that was the largest European infrastructure project before it was surpassed by the Channel Tunnel. But the LHC isn't the first collider that has lived or lives in the LHC tunnel. Before it was born, the LEP collider – that used to collide electrons with positrons – was happily living there.

The song above shows what LEP looked like to the horny girlfriends of (male and female) particle physicists. However, if you dared to claim that there was no LHC in the tunnel, you would be wrong. The very song was sung (in 2000) by the LHC, the Les Horribles Cernettes. ;-)

I was hesitating whether the bump deserves a blog post but I decided that the answer is Yes after I saw Matt Strassler's text
A Hidden Gem At An Old Experiment?
minutes ago. Strassler starts by saying that while he admires the intelligence of Sabine Hossenfelder – who trumps most chimps that Strassler knows – she is completely wrong when she says that the nightmare scenario has come true. After all, Strassler points out, there can't be any nightmare because it's not even a nighttime yet.

When he gets to the technicalities, it becomes clear that the events pointed out by Arno Heister are events that contain a muon pair, \(\mu^+\mu^-\), or an electron-positron pair, \(e^+ e^-\), along with the bottom quark-antiquark pair, \(b\bar b\). The invariant mass of the leptons \(\ell^+\ell^-\) is measured and the bump is found near \(30\GeV\), around one-third of the Z-boson mass.
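
For reference, the histogrammed quantity is just the dilepton invariant mass, \(M^2 = (E_1+E_2)^2 - |\vec p_1 + \vec p_2|^2\). A tiny Python illustration (not the analysis code):

```python
import math

def invariant_mass(p1, p2):
    """Invariant mass of a two-particle system from four-momenta (E, px, py, pz)
    in GeV: M^2 = (E1 + E2)^2 - |p1 + p2|^2."""
    E = p1[0] + p2[0]
    px, py, pz = (p1[i] + p2[i] for i in (1, 2, 3))
    return math.sqrt(max(E * E - (px * px + py * py + pz * pz), 0.0))

# Two back-to-back 15 GeV muons (masses neglected) reconstruct to M = 30 GeV:
print(invariant_mass((15.0, 0.0, 0.0, 15.0), (15.0, 0.0, 0.0, -15.0)))  # 30.0
```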

But on top of the lepton pair, the events are requested to contain the bottom quarks. So the idea is that you can't see these new hypothetical bosons decaying to the lepton pair in isolation. You may only see them along with another new particle that decays to the quark-antiquark pair.

Matt Strassler says (and said a decade ago) that this is rather natural in the "hidden valley" models. Those may predict a new spin-one (vector) boson \(V\), with the mass of \(30\GeV\) if you want to explain these events; along with a new spin-zero (scalar) boson \(S\) which decays to \(b\bar b\) and sometimes perhaps to \(\tau^+\tau^-\).

Both new particles \(V\) and \(S\) may arise from the decay of\[

Z \to V + S

\] the good old Z-boson whose mass is \(91\GeV\). So in the electron-positron collisions, the Z-boson could have been created. This Z-boson could have decayed to two (new) bosons \(V,S\) and those decayed to the lepton pair and the quark-antiquark pair, respectively.
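
As a quick kinematic sanity check (an illustration, not taken from Heister's paper), the decay \(Z \to V + S\) is only allowed if \(m_V + m_S \leq m_Z\), and the momentum of each daughter in the Z rest frame follows from the standard two-body formula:

```python
import math

def two_body_momentum(M, m1, m2):
    """Momentum of either daughter when a parent of mass M decays to masses
    m1 and m2 (GeV, natural units):
    p = sqrt((M^2 - (m1 + m2)^2) * (M^2 - (m1 - m2)^2)) / (2 * M)."""
    lam = (M**2 - (m1 + m2)**2) * (M**2 - (m1 - m2)**2)
    if lam < 0:
        raise ValueError("decay kinematically forbidden: m1 + m2 > M")
    return math.sqrt(lam) / (2 * M)

# Z(91 GeV) -> V(30 GeV) + S(m_S): allowed for m_S up to 61 GeV.
for m_S in (20.0, 40.0, 61.0):
    print(m_S, round(two_body_momentum(91.0, 30.0, m_S), 2))
```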

However, I find the whole class of "hidden valley" models rather unmotivated. Just to be sure, it's not wrong or inconsistent – like loop quantum gravity or other piece of debunkable garbage that I have repeatedly written about. But I don't feel that they're needed for anything, that there really exists evidence that they should be right. There's no strong evidence that they shouldn't exist, either. But if you like to use some Occam's razor, you could think that there is a good reason – the razor – to prefer the Standard Model (or MSSM) over the "hidden valley" models. On the other hand, razors may be dangerous. You may cut your throat if you use them unwisely.

The "hidden valleys" may occur but if they do, one may ask: Who ordered them?

It's actually hard to extract a reasonable estimate from Matt Strassler of the probability that the "hidden valley" models are relevant for Nature – or even relevant for those particular bumps at ALEPH (one of the detectors at LEP, i.e. a counterpart of ATLAS and CMS at the LHC) that are being talked about here. Why?

Because Matt Strassler is, along with Kathryn Zurek and two or so less important collaborators, a co-father of the "hidden valleys". See Strassler-Zurek 2006. In fact, the first three references in Heister's paper are papers written or co-written by Matt Strassler. Of course, a blogger's proposed set of new particles may be real. But worries about the blogger's impartiality may potentially be justified, too.

OK, can't the existence of the new \(30\GeV\) boson be settled by the LHC?

The theme song from "Friends of the Green Valley", a Czechoslovak communist-era soap opera about kids (of my age, at that time) who befriend wildlife thanks to a gamekeeper. Just to be sure, the theme song in Czech is sung by a non-musician actor who is Slovak, to make things worse.

It seems to me that Strassler basically says that the number of collisions at the LHC is more than enough to do so but the 6,000 LHC experimenters haven't actually done a search for the channel equivalent to Heister's paper – or many other searches for the "hidden valleys" that should be done. I am somewhat surprised by this claim but it's plausible.

It would surely be exciting in the medium term if this new physics were found. However, in the long run, I wouldn't know what to do with that. I think that the effective field theory describing all of our experiments in Nature would get "more messy" and "more urgently waiting for answers", not less so. The "hidden valleys" could have far-reaching consequences for our guesses about the shape of compactified dimensions in string theory and related high-energy issues. But I have some worries that the anthropic hand-waving and "typicality" could be our best explanation why the hidden valleys exist at all and why they have the rough properties they have. That could amplify some features of the status quo that some people find frustrating.

by Luboš Motl at October 22, 2016 07:09 AM

October 21, 2016

Emily Lakdawalla - The Planetary Society Blog

Likely Schiaparelli crash site imaged by Mars Reconnaissance Orbiter
Just a day after the arrival of ExoMars Trace Gas Orbiter and its lander Schiaparelli, Mars Reconnaissance Orbiter has taken a photo of the landing site with its Context Camera, and things do not look good.

October 21, 2016 10:56 PM

The n-Category Cafe

Linear Algebraic Groups (Part 2)

This time we show how projective geometry ‘subsumes’ Euclidean, elliptic and hyperbolic geometry. It does so in two ways: the projective plane includes all 3 other planes, and its symmetry group contains their symmetry groups.

By the time we understand this, we’re almost ready to think about geometry as a subject that depends on a choice of group. But we’re also getting ready to think about algebraic geometry (for example, projective varieties).

  • Lecture 2 (Sept. 27) - The road to projective geometry. Treating Euclidean, elliptic and hyperbolic geometry on an equal footing: in each case the symmetry group is a linear algebraic group of 3 × 3 matrices over a field k, points are certain 1d subspaces of k³, and lines are certain 2d subspaces of k³. In projective geometry we take the symmetry group to be all of GL(3), take points to be all 1d subspaces of k³, and take lines to be all 2d subspaces of k³. It thus subsumes Euclidean, elliptic and hyperbolic geometry. In general we define projective n-space, kPⁿ, to be the set of 1d subspaces of k^(n+1).
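
As a toy illustration of the definition (a sketch, not part of the lecture notes), over k = ℝ one can represent a point of kP² by any nonzero vector of k³ up to scaling; the line through two points is the 2d subspace they span, conveniently encoded by a normal covector obtained as a cross product, and incidence is a dot product:

```python
import numpy as np

def normalize(v):
    """Pick a representative of the 1d subspace spanned by v by scaling so
    its last nonzero coordinate equals 1 (a convenient, non-canonical choice)."""
    v = np.asarray(v, dtype=float)
    return v / v[np.flatnonzero(v)[-1]]

def line_through(p, q):
    """Line of RP^2 through two distinct points, encoded by a normal covector:
    the cross product is orthogonal to both representatives."""
    return np.cross(p, q)

def incident(point, line, tol=1e-9):
    """A point lies on a line iff its representative is orthogonal to the
    line's normal covector."""
    return abs(np.dot(point, line)) < tol

p, q = normalize([1, 2, 1]), normalize([0, 1, 1])
L = line_through(p, q)
print(incident(p, L), incident(q, L), incident(normalize([1, 0, 0]), L))
# True True False
```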

by john at October 21, 2016 08:18 PM

Tommaso Dorigo - Scientificblogging

A Dimuon Particle At 30 GeV In ALEPH ??
UPDATE: before you read the text below, one useful bit of information. The author of the analysis described below has not been a member of ALEPH since 2004. He got access to the data as any of you could, since the ALEPH data is open access by now. There would be a lot to discuss about whether it is a good thing (I think so) or not that any regular joe or jane can take collider data and spin it his or her own way and claim new physics effects, but let's leave it for some other post. What is important is that ALEPH is not behind this publication, and members of it have tried to explain to the author that the claim was bogus. Indeed, on the matter of the source of the signal: it is clearly spurious, as the muons are collinear with the b-jets emitted in the Z decay.


by Tommaso Dorigo at October 21, 2016 03:59 PM

Matt Strassler - Of Particular Significance

A Hidden Gem At An Old Experiment?

This summer there was a blog post claiming that “The LHC `nightmare scenario’ has come true” — implying that the Large Hadron Collider [LHC] has found nothing but a Standard Model Higgs particle (the simplest possible type), and will find nothing more of great importance. With all due respect for the considerable intelligence and technical ability of the author of that post, I could not disagree more; not only are we not in a nightmare, it isn’t even night-time yet, and hardly time for sleep or even daydreaming. There’s a tremendous amount of work to do, and there may be many hidden discoveries yet to be made, lurking in existing LHC data.  Or elsewhere.

I can defend this claim (and have done so as recently as this month; here are my slides). But there’s evidence from another quarter that it is far too early for such pessimism. It has appeared in a new paper (a preprint, so not yet peer-reviewed) by an experimentalist named Arno Heister, who is evaluating 20-year-old data from the experiment known as ALEPH.

In the early 1990s the Large Electron-Positron (LEP) collider at CERN, in the same tunnel that now houses the LHC, produced nearly 4 million Z particles at the center of ALEPH; the Z’s decayed immediately into other particles, and ALEPH was used to observe those decays.  Of course the data was studied in great detail, and you might think there couldn’t possibly be anything still left to find in that data, after over 20 years. But a hidden gem wouldn’t surprise those of us who have worked in this subject for a long time — especially those of us who have worked on hidden valleys. (Hidden Valleys are theories with a set of new forces and low-mass particles, which, because they aren’t affected by the known forces excepting gravity, interact very weakly with the known particles.  They are also often called “dark sectors” if they have something to do with dark matter.)

For some reason most experimenters in particle physics don’t tend to look for things just because they can; they stick to signals that theorists have already predicted. Since hidden valleys only hit the market in a 2006 paper I wrote with then-student Kathryn Zurek, long after the experimenters at ALEPH had moved on to other experiments, nobody went back to look in ALEPH or other LEP data for hidden valley phenomena (with one exception.) I didn’t expect anyone to ever do so; it’s a lot of work to dig up and recommission old computer files.

This wouldn’t have been a problem if the big LHC experiments (ATLAS, CMS and LHCb) had looked extensively for the sorts of particles expected in hidden valleys. ATLAS and CMS especially have many advantages; for instance, the LHC has made over a hundred times more Z particles than LEP ever did. But despite specific proposals for what to look for (and a decade of pleading), only a few limited searches have been carried out, mostly for very long-lived particles, for particles with mass of a few GeV/c² or less, and for particles produced in unexpected Higgs decays. And that means that, yes, hidden physics could certainly still be found in old ALEPH data, and in other old experiments. Kudos to Dr. Heister for taking a look.

Now, has he actually found something hidden at ALEPH? It’s far too early to say. Dr. Heister is careful not to make a strong claim: his paper refers to an observed excess, not to the discovery of or even evidence for anything. But his analysis can be interpreted as showing a hint of a new particle (let’s call it the V particle, just to have a name for it) decaying sometimes to a muon and an anti-muon, and probably also sometimes to an electron and an anti-electron, with a rest mass about 1/3 of that of the Z particle — about 30 GeV/c². Here’s one of the plots from his paper, showing the invariant mass of the muon and anti-muon in Z decays that also have evidence of a bottom quark and a bottom anti-quark (each one giving a jet of hadrons that has been “b-tagged”).  There’s an excess at about 30 GeV.


ALEPH data as analyzed in Heister’s paper, showing the number of Z particle decays with two bottom quark jets and a muon/anti-muon pair, as a function of the invariant mass of the muon/anti-muon pair.  The bump at around 30 GeV is unexpected; might it be a new particle?  Not likely, but not impossible.

The simplest physical effect that would produce such a bump is a new particle; indeed this is how the Z particle itself was identified, over three decades ago.

However, the statistical significance of the bump is still only (after look-elsewhere effect) at most 3 standard deviations, according to the paper. So this bump could just be a fluke; we’ve seen similar ones disappear with more data, for example this one. There are also a couple of serious issues that will give experts pause (the width of the bump is surprisingly large; the angular correlations seem consistent with background rather than a new signal; etc.) So the data itself is not enough to convince anyone, including Dr. Heister, though it is certainly interesting.

Conversely it is intriguing that the bump in the plot above is observed in events with bottom quarks. It is common for hidden valleys (including everything from simple abelian Higgs models to more complex confining models) to contain

  • at least one spin-one particle V (which can decay to muon/anti-muon or electron/positron) and
  • at least one spin-zero particle S (which can decay to bottom/anti-bottom preferentially, with occasional decays to tau/anti-tau.)

For example, in such models, a rare decay such as Z  ⇒ V + S, producing a muon/anti-muon pair plus two bottom quark/anti-quark jets, would often be a possibility.*

*[In this case the bottom and anti-bottom jets would themselves show a peak in their invariant mass, but unfortunately their distribution in the presence of a candidate V was not shown. One other obvious prediction of such a model is a handful of striking Z ⇒ V + S ⇒ muon/anti-muon + tau/anti-tau events; but the expected number is very small and somewhat model-dependent.]

Another possibility (also common in hidden valleys) is that the bottom-tagged jets aren’t actually from real bottom quarks, and are instead fake bottom jets generated by one or two new long-lived hidden valley particles.

But clearly, before anyone gets excited, far more evidence is required. We’ll need to see similar studies done at one or more of the three other experiments that ran concurrently with ALEPH — L3, OPAL, and DELPHI. And of course ATLAS, CMS, and LHCb will surely take a look in their own data; for instance, ATLAS and CMS could search for a dilepton resonance in events with at least two bottom-tagged jets, where the whole system of bottom-tagged jets and dileptons has an invariant mass not greater than about 100 GeV/c². [[IMPORTANT NOTE ADDED: It has been pointed out to me (thanks Matt Reece) that there was a relevant CMS search from 2015 that had somehow entirely escaped my attention, in which one b-tag was required and a di-muon bump was sought between 25 and 60 GeV.  Although not aimed at hidden valleys, it provides one of the few constraints upon them in this mass range.  And at first glance, it seems to disfavor any signal large enough to explain the ALEPH excess.  But there might be subtleties, so let me not draw firm conclusions yet.]] They should also look for the V particle in other ways — perhaps following the methods I’ve suggested repeatedly (see for example pages 40-45 of this 2008 talk) — since the V might not only appear in Z particle decays. [That is: look for boosted V’s; look for V’s in high-energy events or high missing-energy events; look for V’s in events with many jets, possibly with bottom-tags; etc.] In any case, if anything like the V particle really exists, several (and perhaps all) of the experiments should see some evidence for it, and in more than just a single context.

Though we should be skeptical that today’s paper on ALEPH data is the first step toward a major discovery, at minimum it is important for what it indirectly confirms: that searches at the LHC are far from complete, and that discoveries might lie hidden, for example in rare Z decays (and in rare decays of other particles, such as the top quark.) None of ATLAS, CMS or LHCb has ever done a search for rare but spectacular Z particle decays, but they certainly could, as they recently did for the Higgs particle; and if Heister’s excess turns out to be a real signal, they will be seen to have missed a huge opportunity.  So I hope that Heister’s paper, at a minimum, will encourage the LHC experiments to undertake a broader and more comprehensive program of searches for low-mass particles with very weak interactions.  Otherwise, my own nightmare, in which the diamonds hidden in the rough might remain undetected — perhaps for decades — might come true.

Filed under: Other Collider News, Particle Physics Tagged: ALEPH, atlas, cms, dilepton, HiddenValleys, LEP, LHC, LHCb

by Matt Strassler at October 21, 2016 03:19 PM

Christian P. Robert - xi'an's og

positions in French universities [deadline]

For ‘Og’s readers interested in lecturer or professor positions in French universities next academic year (including a lecturer position at Paris-Dauphine in applied and computational statistics!), you need to apply for a qualification label from a national committee, whose strict deadline is next Tuesday, October 25, at 4pm (Paris/CET time). (The whole procedure is exclusively in French!)

Filed under: Kids, Statistics, University life Tagged: academic position, assistant professor position, deadline, France, French universities, lecturer, qualification, Université Paris Dauphine

by xi'an at October 21, 2016 12:18 PM

astrobites - astro-ph reader's digest

Recognizing the minds behind scientific software

Title: The Astropy Problem

Authors: Demitri Muna et al.

First author’s institution: Center for Cosmology and AstroParticle Physics and Department of Astronomy, The Ohio State University, USA

Status: available on arXiv

Today is one of those occasional Fridays when we go beyond Astrobites’ usual subjects. This time we are remaining within the astro-ph boundaries; however, today’s paper is not about science itself but rather about the tools we use for doing it. More specifically, it is about the recognition and importance we (don’t) give to those behind such tools. Today’s paper focuses on Astropy, a core Python package for Astronomy that is widely used by our community. The problems faced by the Astropy Collaboration that are raised in this paper extend to anyone involved in software development for Astronomy, and arguably science in general.

A Brief History of Astropy

If you have ever written Python code to help you with a task in Astronomy, there’s a good chance it starts with “import astropy”. You’ve probably also read papers in which Astropy is used to process and analyze data. If you happen to have any of those lying around, quickly check something: do the authors cite Astropy Collaboration et al. (2013)? If the answer is no, this is probably not an isolated case. Said paper has only about 400 citations, which clearly does not reflect its importance to our community. Even if you don’t write any code, you’ve probably come across data that was either collected, reduced, or analyzed by code that uses Astropy.

Astropy is older than you’d think. The foundations for it began in 1998 at the Space Telescope Science Institute (STScI). IDL was the most popular programming language in Astronomy at the time, but it was subject to a license fee. STScI was the first to point out the potential a move from IDL to a modern, free programming language such as Python would offer. At the time, all development was in direct support of the STScI mission; any generalization to other telescopes had to be additionally argued for. In the early 2000s, this move to Python was spreading more generally in the community. Many people no longer had patience with some of the existing, outdated tools; Python represented a modern alternative.

This move was not an organized effort at first. Many people were doing it at the same time and independently. At one point, Erik Tollerud, a graduate student at UC Irvine at the time, realized there were fourteen independent Python packages to perform coordinate system transformations. This called attention to the need for a coordinated development effort, which in 2011 led to the Astropy project. It incorporated much of the software developed at STScI mentioned above, with some additional functionality. Nowadays it includes handling coordinate systems, times and dates, modeling, FITS, ASCII, and VO file access, cosmological calculations, data visualization, and much, much more. In short, it’s an awesome tool for the everyday needs of an astronomer.
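
To give a flavour of that functionality, here is a short example of typical Astropy usage (illustrative only; the calls reflect the API around v1.2 and should be checked against the current documentation):

```python
from astropy import units as u
from astropy.coordinates import SkyCoord
from astropy.cosmology import Planck15
from astropy.time import Time

# Coordinate systems: transform an ICRS position to Galactic coordinates.
c = SkyCoord(ra=10.68458 * u.deg, dec=41.26917 * u.deg, frame='icrs')
print(c.galactic)

# Times and dates: convert a UTC timestamp to a Julian Date.
print(Time('2016-10-21 11:29:00', scale='utc').jd)

# Cosmological calculations: luminosity distance at redshift 0.5.
print(Planck15.luminosity_distance(0.5))
```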

The problem: who’s paying for all that?

If Astropy is so awesome, of course it gets good funding, right? Wrong. It gets none. Except for the initial efforts at STScI, nobody has been paid for their work on Astropy. The development is led by graduate students and postdocs, with some contributions from undergrads and faculty members, in their spare time. This complete lack of funding for something that is nowadays so essential to our community, plus the little recognition in the form of citations, is something that shouldn’t be overlooked. Today’s paper intends to raise the issue, and proposes some potential solutions.

But who should be paying? Employees of NASA say their money has to go to specific projects; it cannot be spent for general software. Academic institution members state that their primary mission is education, so they cannot fund software development. Other scientific institutions argue that their responsibility is to operate and run the telescope and data archives already under their remit. Finally, individual surveys received money to deliver data, not to develop software for the community. These negative opinions are widespread in Astronomy. In short, our community as a whole appears to have decided that Astropy and general software development is not something worth funding. Nonetheless, we all depend critically on it, in one way or another. And it is not cheap, as you can see from Table 1. How crazy is that?

Moreover, people not only use Astropy as is, there’s also an expectation that it will continue to be developed, have more features added, bugs fixed, and so on. We sort of think that software is something that just “happens”, the authors say, and therefore expect it to be free. However, the developers do pay a cost by investing their time, at the expense of their research and their publication output. Yet, their efforts at Astropy are usually not considered by hiring committees.

Table 1: Some statistics for Astropy based on the repository as of July 2016, version v1.2. All repositories under their GitHub account have been included, but external C libraries (namely cfitsio, ERFA, expat, and wcslib) have been excluded. The cost and development were estimated using David A. Wheeler’s SLOCCount software.

Possible Solutions

The authors point out that funding should be made available not only to support today’s software, but also what we’ll need in the future. We should create career positions that offer stability and good salaries to people willing to do software development. Some paths to this state of affairs are suggested by the authors. One is a subscription fee to Astropy. Institutions would pay a fee on a voluntary basis, whose value could perhaps be based on the number of users. This would not require a license server or restrictions on the use of the code, nor would it alter the way Astropy is currently distributed; it would merely be a way to collectively cover its costs.

Another option is the creation of full-time developer positions within existing and future projects, once they reach a certain level of funding, say $40M, which is the approximate cost of the Sloan Digital Sky Survey. The idea is that any project with funding above that should hire a developer to work within and be hosted by the survey/mission, serving as a liaison with Astropy. They would of course work on code that would directly benefit the mission, while keeping in mind that it must be adapted for use by the general community.

These are only some of the ways to overcome the so-called “Astropy problem”, but they serve to show that easy solutions to this problem exist. Our community should be considering them. As it is now, we receive and even expect enormous utility from the developers of scientific software without offering support, compensation, career paths or even recognition. It is common knowledge that we lack the tools necessary to comprehensively analyze the sheer amount of data available today, so we clearly need people willing to work on software that will allow us to do it. It’s reasonable that the community start rewarding these efforts, as well as respecting, encouraging, and enabling them. What are your thoughts? Do you have more suggestions to fix the problem? Let us know in the comments!

Note: The paper discussed here is not an official paper of nor officially endorsed by the Astropy Project, rather it is a reflection of the opinions of the authors.

by Ingrid Pelisoli at October 21, 2016 11:29 AM

Emily Lakdawalla - The Planetary Society Blog

Remembering Ewen Whitaker, the "careful and caring" scientist who found Surveyors 1 and 3
Ewen Whitaker was one of the founding members of the University of Arizona Lunar and Planetary Laboratory, one of the world's first research institutions dedicated to studying the moon and planets.

October 21, 2016 11:02 AM

Peter Coles - In the Dark

Mahler: Symphony No. 2 (“Resurrection”)

Last night I was at St David’s Hall in Cardiff yet again, this time for a piece that I’ve never heard in a live performance: Symphony No.2 (“Resurrection”) by Gustav Mahler. This is a colossal work, in five movements, that lasts about 90 minutes. The performance involved not only a huge orchestra, numbering about a hundred musicians, but also two solo vocalists and a sizeable choir (although the choir does not make its entrance until the start of the long final movement, about an hour into the piece). In my seat before the concert I was particularly struck by the size of the brass section of the orchestra, but it turned out to be even larger than it looked as there were three trumpets and three French horns hidden offstage in the wings for most of the performance – they joined the rest of the orchestra onstage for the finale.

The musicians involved last night were the Orchestra and Chorus of Welsh National Opera, and the Welsh National Opera Community Choir, conducted by Tomáš Hanus who is the new music director of Welsh National Opera; this was his St David’s Hall debut. Soloists were soprano Rebecca Evans (who was born in Pontrhydyfen, near Neath, and is a local favourite at St David’s Hall) and mezzosoprano Karen Cargill (making her St David’s debut).

I don’t really have the words to describe what a stunning musical experience this was. I was gripped all the way through: from the relatively sombre but subtly expressive opening movement, through the joyously dancing second movement that recalls happier times, and the third, which is based on a Jewish folk tune and ends in a shattering climax Mahler described as “a shriek of despair”, to the fourth movement, built around a setting of one of the songs from Des Knaben Wunderhorn and sung beautifully by Karen Cargill, who has a lovely velvety voice very well suited to this piece, which seems more like a contralto part than a mezzo. The changing moods of the work are underlined by a tonality that shifts from minor to major and back again. All that was wonderfully performed, but it was in the climactic final movement – which lasts almost half an hour and is based on a setting of a poem mostly written by Mahler himself, sung by Rebecca Evans – that what was already a very good concert turned into something truly remarkable.

On many occasions I’ve written about Welsh National Opera performances in the opera theatre and in the course of doing so I’ve very often mentioned the superb WNO Chorus. They weren’t called upon until the final movement, but as soon as they started to sing they lifted the concert to another level. At first they sang sitting down, which struck me as a little strange, but later on I realised that they were holding something in reserve for the final moments of the work. As the symphony moved inexorably towards its climax I noticed the offstage brass players coming onto the stage, the choir standing up, and the organist (who had been sitting patiently with nothing to do for most of the performance) taking his seat. The hairs on the back of my neck stood up in anticipation of a thrilling sound to come. I wasn’t disappointed. The final stages of this piece are sublime, jubilant, shattering, transcendent but, above all, magnificently, exquisitely loud! The WNO Chorus, responding in appropriate fashion to Mahler’s instruction to sing “mit höchster Kraft”, combined with the full force of the Orchestra and the St David’s Hall organ to create an overwhelming wall of radiant sound. Superb.

Mahler himself wrote of the final movement: "The increasing tension, working up to the final climax, is so tremendous that I don’t know myself, now that it is over, how I ever came to write it." Well, who knows where genius comes from, but Mahler was undoubtedly a genius. People often say that his compositions are miserable, angst-ridden and depressing. I don’t find that at all. It’s true that this, as well as Mahler’s other great works, takes you on an emotional journey that is at times a difficult one. There are passages that are filled with apprehension or even dread. But without darkness there is no light. The ending of the Resurrection Symphony is all the more triumphant because of what has come before.

The end of the performance was greeted with rapturous applause (and a well-deserved standing ovation). Congratulations to Tomáš Hanus, Karen Cargill, Rebecca Evans and all the musicians who took part in last night’s concert which is one that I’ll remember for a very long time.

P.S. You might be interested to know that St David’s Hall has been ranked in the world’s Top Ten Concert Halls in terms of sound quality. Those of us lucky enough to live in or near Cardiff are blessed to have such a great venue and so many superb concerts right on our doorstep!

P.P.S. The concert got a five-star review in the Guardian.

by telescoper at October 21, 2016 10:32 AM

astrobites - astro-ph reader's digest

48th DPS/11th EPSC Meeting: Day 4

Welcome to Day 4 of the joint 48th meeting of the Division for Planetary Sciences (DPS) and 11th European Planetary Science Congress (EPSC) in Pasadena, California! Two astrobiters are attending this conference, and we will report highlights from each day here. If you’d like to see more timely updates during the day, we encourage you to search the #DPSEPSC hashtag!

Plenary Talks (by Susanna Kohler)

Artist’s illustration of how an asteroid family is created. Chaotic motion affects the orbits of such families of asteroids that form from catastrophic collisions. [NASA/JPL-Caltech]

Farinella Prize Lecture
Flavors of Chaos in the Asteroid Belt

This year’s Farinella Prize was awarded to Kleomenis Tsiganis (Aristotle University of Thessaloniki, Greece), for “timely and deep examples of applications of celestial mechanics to the natural bodies of the solar system”. Tsiganis is one of the developers of the Nice model, a model that describes the migration of Jupiter, Saturn, Uranus and Neptune during the early phases of the solar system’s evolution to their currently observed positions.

Tsiganis’s prize lecture described how we can combine our knowledge of chaotic dynamics with observations to understand several interesting features of the asteroid belt, such as mean-motion resonances of asteroids, or the behavior of asteroid families (produced when a collision disrupts a parent body into fragments).


Harold C. Urey Prize Lecture
In for the Long Haul: Exploring Atmospheric Cycles on the Giant Planets

This year’s Harold C. Urey Prize for outstanding achievement in planetary research by a young scientist was awarded to Leigh Fletcher (University of Leicester), in recognition of his ground-breaking research in understanding physical and chemical processes in the atmospheres of the outer planets.

Why study the atmospheres of the giant planets? These atmospheres, Fletcher argued, are the heartbeats of the giant worlds. We are just now at a stage where we are able to measure and better understand the way that giant planet atmospheres change, shift, and oscillate on short and long timescales — and it’s an exciting time!

Jupiter’s prominent southern belt mysteriously vanished in 2010, before later reappearing. Ground-based observations revealed that a plume from deeper in the atmosphere may have been instrumental in reviving the belt. [ESA/Hubble]

Fletcher’s lecture provided an overview of the different cycles we’ve been able to study in the atmospheres of giant planets. These include:

  1. Seasonal evolution
  2. Stratospheric oscillations
  3. Belt/zone upheavals
  4. Storm eruptions

An example: the clouding out of one of Jupiter’s most prominent belts. In 2010 we watched Jupiter’s South Equatorial Belt turn white as it was clouded out; a few months later the clouds dissipated and it returned to normal. High-resolution ground-based observations revealed a plume punching up from lower layers to deliver material high into Jupiter’s tropopause before the belt returned. This activity likely helped to clear up the clouds and revive the belt.

Stunning Cassini observations showing a false-color mosaic of a giant storm that raged on Saturn in 2010/2011. [NASA/JPL-Caltech/Space Science Institute]

Another example: Saturnian storms. Roughly once every Saturnian year, a large storm erupts on the planet. In 2010-2011, Cassini took some spectacular images of an enormous storm that briefly formed the largest non-polar vortex in the solar system! Observations from Cassini were used to understand how energy was transported through the atmosphere in this storm.

Fletcher concluded by discussing what’s next for giant-planet atmosphere studies. He expressed his fervent hope that a dedicated mission to our outer ice-giant planets — which haven’t been visited since Voyager 2’s flyby in the late 1980s — would be completed within the next half century. He argued that studying the atmospheres of Uranus and Neptune is important because they represent a missing link between giant planet atmospheres and smaller terrestrial atmospheres. Plus these planets have plenty of satellites, so a mission to the ice giants would be able to find many targets to make everybody happy!

Until then, we can look forward to continued discoveries with ground-based telescopes, upcoming observations with JWST, and future programs, like a return to Jupiter with JUICE and NASA’s Europa mission.


The Ancient Habitability of Gale Crater, Mars, after Four Years of Exploration by Curiosity

In the final plenary of the meeting, Ashwin Vasavada (Project Scientist for Curiosity, JPL) and Sanjeev Gupta (Imperial College London) gave a tag-team overview of what we’ve learned about the past habitability of Mars from observations by the Curiosity rover.

Context for Curiosity’s journey. The path it’s already traversed is shown in yellow. Green is where it’s headed. [NASA/JPL/T.Reyes]

Vasavada opened the talk by mentioning that Curiosity has recently drilled its 15th hole in the surface of Mars! When the rover drills samples from rocks, the resulting powder is analyzed by its CheMin instrument, allowing us to determine the exact mineral composition of the sample. This has revealed a lot of information about the terrain that Curiosity has been traveling through.

Vasavada took us on a tour of the first part of the rover’s journey within Gale crater as it has made its way to Mount Sharp. The crater is 150 km wide, and the layers of rock around Mount Sharp likely contain a record spanning several hundred million years during a period of dramatic climate evolution in Mars’s early history. Almost immediately after landing, Curiosity was able to probe that history when it drove across an ancient streambed littered with rounded pebbles, providing unmistakable evidence that Mars once had flowing water on its surface.

Not long after that point, the rover arrived at Yellowknife Bay, where it took its first mineralogical samples and gathered data indicating that the region was likely an ancient lakebed. Curiosity’s observations suggest that ancient Mars was capable of supporting life: it appears to have had liquid water with a neutral pH and low salinity, key elements and nutrients necessary for life, and energy for metabolism.

Curiosity’s images of what was likely once an ancient lake bed on Mars. [NASA/JPL-Caltech/MSSS]


Gupta took over describing Curiosity’s ongoing journey to the base of Mount Sharp. The observations that Curiosity has made along the way have provided additional signs of past lakes and groundwater systems — not just at one point in Mars’s history, but in fact active hydrological cycles spanning millions to tens of millions of years! This suggests that habitable conditions may have existed on Mars for quite some time in the past.

Curiosity still has plenty of mission time and a long journey ahead of it, so it seems likely that we can count on many more revelations from this intrepid little rover.

by Astrobites at October 21, 2016 08:40 AM

Clifford V. Johnson - Asymptotia



No, I'm not here to knock on the door of the Big Five*.

I was a couple of doors down at the Simons Foundation....


*P.S. But I do hope to have exciting news to report on the publishing front soon... Click to continue reading this post

The post Flatiron appeared first on Asymptotia.

by Clifford at October 21, 2016 04:42 AM

October 20, 2016

Christian P. Robert - xi'an's og

Suffrage Science awards in maths and computing

On October 11, at Bletchley Park, the Suffrage Science awards in mathematics and computer sciences were awarded for the first time to 12 senior female researchers. Among them were three statisticians: Professor Christl Donnelly from Imperial College London, my colleague at Warwick, Jane Hutton, and my friend and co-author, Sylvia Richardson, from MRC, Cambridge University. This initiative was started by the Medical Research Council in 2011 with Suffrage Science awards for the life sciences, followed in 2013 by one for engineering and physics, and this year by one for maths and computing. The name of the award aims to connect with the Suffragette movement of the late 19th and early 20th Centuries, which was particularly active in Britain. One peculiar aspect of this award is that the recipients are given pieces of jewellery, created for each field, pieces that they will themselves give two years later to a new recipient of their choice, and so on in an infinite regress! (Which suggests a related puzzle, namely to figure out how many years it should take until all female scientists have received the award. But since the number increases as the square of the number of years, this is not going to happen unless the field proves particularly hostile to women scientists!) This jewellery award also relates to the history of the Suffragette movement since the WSPU commissioned their own jewellery awards. A clever additional touch was that the awards were delivered on Ada Lovelace Day, October 11.

Filed under: pictures, Statistics, University life Tagged: Ada Lovelace, Bletchley Park, Cambridge University, Euler's formula, Great-Britain, Imperial College London, jewellery, MRC Unit, Suffrage Science awards, Suffragettes, University of Warwick, WPSU

by xi'an at October 20, 2016 10:16 PM

Symmetrybreaking - Fermilab/SLAC

99 percent invisible

With a small side project, astronomers discover a new type of galaxy.

In 2011, astronomers Pieter van Dokkum and Roberto “Bob” Abraham found themselves in a restaurant in Toronto nursing something of a mid-life crisis. Abraham, a professor at the University of Toronto, and van Dokkum, at Yale, had become successful scientists, but they discovered that often meant doing less and less science and more and more managing large, complex projects.

“They’re important and they’re great and you feel this tremendous obligation once you’ve reached a certain age to serve on these committees because you have to set things up for the next generation,” Abraham says. “At the same time, it was no longer very much fun.”

The two friends fantasized about finding a small, manageable project that might still have some impact. By the time a few hours had passed, they picked an idea: using new camera lenses to find objects in the sky that emit very little light.

They had no way of knowing then that within the next five years, they’d discover an entirely new class of galactic object.

From the handmade telescopes of Galileo to spacefaring technological marvels like Hubble, all telescopes are designed for one basic task: gathering light. Telescope technology has advanced far enough that Hubble can pick up light from stars that were burning just 400 million years after the universe first popped into existence. 

But telescopes often miss objects with light that’s spread out, or diffuse, which astronomers describe as having low surface brightness. Telescopes like Hubble have large mirrors that scatter light from bright objects in the sky, masking anything more diffuse. “There’s this bit of the universe that’s really quite unexplored because our telescope designs are not good at detecting these things,” Abraham says.

When van Dokkum and Abraham sat down at that bar, they decided to try their hands at studying these cosmic castaways. The key turned out to be van Dokkum’s hobby as an amateur insect photographer. He had heard of new camera lenses developed by Canon that were coated with nanoparticles designed to prevent light scattering. Although they were intended for high-contrast photography—say, snapping a photo of a boat in a sunny bay—van Dokkum thought these lenses might be able to spot diffuse objects in the sky.

Amateur insect photographer van Dokkum has a collection of dragonfly photos. [Pieter van Dokkum]

Abraham was skeptical at first: “Yeah, I’m sure the Canon corporation has come up with a magical optical coating,” he recalls thinking. But when the pair took one to a parking lot in a dark sky preserve in Quebec, they were sold on its capabilities. They brought graduate students on board and acquired more and more lenses—not an easy task, at $12,000 a pop—eventually gathering 48 of them, and arranged them in an ever-growing honeycomb shape to form what can rightly be called a telescope. They named it Dragonfly.
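
A rough way to see what 48 lenses add up to: collecting area simply sums, so the effective aperture of the array grows as the square root of the lens count. Here is a minimal back-of-the-envelope sketch in Python, assuming each lens is a 400 mm f/2.8 telephoto and hence has roughly a 14 cm entrance pupil; the numbers are illustrative rather than taken from the article.

    import math

    n_lenses = 48
    aperture_cm = 40.0 / 2.8                    # ~14.3 cm pupil, assuming 400 mm f/2.8 optics
    area_cm2 = n_lenses * math.pi * (aperture_cm / 2.0) ** 2
    effective_diameter_cm = 2.0 * math.sqrt(area_cm2 / math.pi)
    print(f"total collecting area ~ {area_cm2:.0f} cm^2")
    print(f"effective aperture    ~ {effective_diameter_cm:.0f} cm")   # roughly a 1-metre telescope

In light-gathering terms, then, the honeycomb behaves like a single telescope of order a metre across, while keeping the low-scatter optics of each individual lens.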

In 2014, both van Dokkum and Abraham were at a conference in Oxford when van Dokkum examined an image that had come in from Dragonfly. (At the time, it had just eight lenses.) It was an image of the Coma Cluster, one of the most photographed galaxy clusters in the universe, and it was dotted with faint smudges that didn’t match any objects in Coma Cluster catalogs.

Van Dokkum realized these smudges were galaxies, and that they were huge, despite their hazy light. They repeated their observations using the Keck telescope, which enabled them to calculate the velocities of the stars inside their mysterious galaxies. In one of them, the stars were measured moving at 50 kilometers per second, about ten times faster than expected from the mass of the galaxy’s stars alone.

“We realized that for these extremely tenuous objects to survive as galaxies and not be ripped apart by their movement through space and interactions with other galaxies, there must be much more than meets the eye,” van Dokkum says.

The galaxy, dubbed Dragonfly 44, has less than 1 percent as many stars as the Milky Way, and yet it has to be just as massive. That means that the vast majority of its matter is not the matter that makes up stars and planets and people—everything we can see—but dark matter, which seems to interact with regular matter through gravity alone.
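
To make the "just as massive" statement concrete, here is a minimal back-of-the-envelope sketch in Python of how a stellar velocity like the ~50 km/s quoted above turns into a dynamical mass; the half-light radius and stellar mass below are illustrative placeholders, not values from the paper.

    G = 4.3e-6            # gravitational constant in kpc (km/s)^2 / Msun
    sigma = 50.0          # stellar velocity scale in km/s (quoted above)
    R = 4.0               # assumed half-light radius in kpc (illustrative)
    M_stars = 3e8         # assumed stellar mass in Msun (illustrative)

    M_dyn = sigma**2 * R / G          # crude dynamical-mass estimate, M ~ sigma^2 R / G
    print(f"dynamical mass ~ {M_dyn:.1e} Msun")
    print(f"dark fraction  ~ {1.0 - M_stars / M_dyn:.0%}")

Even with these rough inputs, most of the mass within the half-light radius is unaccounted for by stars; extrapolating to the full dark matter halo is what pushes the invisible fraction toward the "99 percent" of the title.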

Astronomers have known for decades that galaxies can be made almost entirely of dark matter. But those galaxies were always small, a class known as dwarf galaxies, which have between 100 million and a few billion stars. A dark-matter-dominated galaxy as large as the Milky Way, with its 200 billion or more stars, needed an entirely new category. Van Dokkum and Abraham coined a term for them: ultradiffuse.

“You look at a galaxy and you see this beautiful spiral structure and they’re gorgeous. I love galaxies,” Abraham says. “But what you see is really just kind of the frosting on the cake. The cake is the dark matter.”

No one knows how many of these galaxies might exist, or whether they can have an even larger percentage of dark matter than Dragonfly 44. Perhaps there are galaxies that have no luminous matter at all, simply massive dark blobs hurtling through empty space. Though such galaxies have thus far evaded observation, evidence of their existence may be lurking in unexamined data from the past.

And Dragonfly could be the key for finding them. “When people knew they were real and that these things could exist and could be part of these galaxy clusters, suddenly they turned up in large numbers,” van Dokkum says. “They just escaped attention for all these decades.”

by Laura Dattaro at October 20, 2016 05:17 PM

Emily Lakdawalla - The Planetary Society Blog

ExoMars: Schiaparelli Analysis to Continue
The fate of the ExoMars lander, Schiaparelli, remains uncertain. European Space Agency mission controllers had been optimistic on Wednesday night that a definitive answer would be known by Thursday morning’s news briefing. However, although some more details have been made public about the lander’s descent, it is not yet clear whether it hit the martian surface at a speed it could not survive.

October 20, 2016 05:05 PM

Peter Coles - In the Dark

Remembering the Aberfan Disaster

Tomorrow marks the 50th anniversary of a truly appalling tragedy: the disaster at Aberfan, which took place on 21st October 1966. A colliery spoil heap in the Welsh village of Aberfan, near Merthyr Tydfil, underwent a catastrophic collapse caused by a build-up of water, and more than 40,000 cubic metres of debris slid downhill into the village. The classrooms at Pantglas Junior School were immediately inundated; young children and teachers died from impact or suffocation. In all, 144 people lost their lives that day, including 116 children at the school. The collapse occurred at 9.15am. Had the disaster struck a few minutes earlier, the children would not have been in their classrooms, and if it had struck a few hours later, they would have left for the half-term holiday. As it happened, it was a tragedy of unbearable dimensions that shattered many lives and devastated the community. It was caused largely by negligence on the part of the National Coal Board.

The First Minister of Wales, Carwyn Jones, has called upon the people of Wales to pause and remember the Aberfan disaster with a minute’s silence at 9.15am tomorrow (i.e. on Friday 21 October). Cardiff University will be observing this silence, and so will I. I hope readers of this blog will pause to reflect at that time too.

Here is a short video featuring the voice of Jeff Edwards, a survivor of the Aberfan disaster, recalling his harrowing experiences of that day in a conversation with Dr Robert Parker of the School of Earth and Ocean Sciences at Cardiff University. Rob’s research looks at landslide processes, landscape evolution, catastrophe modelling and post-disaster assessment.

by telescoper at October 20, 2016 02:25 PM

astrobites - astro-ph reader's digest

48th DPS/11th EPSC Meeting: Day 3

Welcome to Day 3 of the joint 48th meeting of the Division for Planetary Sciences (DPS) and 11th European Planetary Science Congress (EPSC) in Pasadena, California! Two astrobiters are attending this conference, and we will report highlights from each day here. If you’d like to see more timely updates during the day, we encourage you to search the #DPSEPSC hashtag!


Third Press Conference: Updates on ExoMars and Planet Nine (by Susanna Kohler)

Did you follow ExoMars’s Schiaparelli module as it attempted to land on Mars’s surface this morning? Today’s first press conference opened with an overview of the latest news from the attempt, given by Olivier Witasse (ESA). Witasse summarized the first phase of the mission, in which ExoMars’s Trace Gas Orbiter (TGO) enters orbit around Mars and Schiaparelli attempts its landing. The primary goal of this phase is to test the technology for an entry, descent, and landing of a payload on Mars, but the mission has secondary science goals of studying the Martian atmosphere and conducting surface environment measurements.

Schiaparelli’s planned descent and landing timeline. Click to read! [ESA/ATG medialab]

Schiaparelli’s anticipated landing on Mars was a complex, 6-minute process. Witasse informed us that, while the team received confirmation during this time of key milestones like the parachute deployment, contact was cut off before the final landing. As images from the lander weren’t supposed to be sent until after the landing, we don’t have more information from the lander itself; we’re currently waiting for telemetry from the Mars Reconnaissance Orbiter and info from TGO to find out what Schiaparelli’s status is. Witasse emphasized that the mission is “not nominal”, but it’s too soon to tell what happened.

Meanwhile, however, we can focus on the very successful insertion of TGO into orbit! The next speaker, Ann Carine Vandaele (Royal Belgian Institute for Space Aeronomy), discussed some of the science that will be done with TGO. In particular, TGO’s NOMAD instrument will measure the chemical composition of the atmosphere, Mars climatology and seasonal cycles, and help us to build models of the planet’s atmosphere. TGO’s timeline begins with 2 orbits for calibration, and then the next year will be spent aerobraking to get the spacecraft into its final circular orbit around Mars. The official science mission begins after that point!

Artist’s illustration of the hypothetical Planet Nine, a massive planet lurking in the outskirts of our solar system. [Caltech/Robert Hurt]

Next, the focus of the press conference shifted to the hypothetical Planet Nine. Renu Malhotra talked about how we’ve further constrained the massive planet’s location based on the orbits of trans-Neptunian objects (TNOs). The longest-period TNOs all have periods that are integer multiples of each other. If we assume that’s because they’re in resonance with a distant, massive planet, we can get constraints on the orbit of that planet. Malhotra and collaborators find that a planet on an orbit with a ~665 AU semi-major axis could produce resonant TNO orbits consistent with what we see. You can read more about this here or check out the press release here.
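
As a quick numerical aside (an editorial illustration, not a reproduction of Malhotra's analysis): Kepler's third law turns the quoted ~665 AU semi-major axis into an orbital period, and a TNO locked in a p:q mean-motion resonance with the planet would then have a period of q/p times that. The ratios below are generic examples, not the specific resonances claimed in the paper.

    # Kepler's third law for orbits around the Sun: P [years] = a [AU] ** 1.5
    a_planet = 665.0                      # quoted semi-major axis in AU
    P_planet = a_planet ** 1.5            # orbital period in years (~17,000 yr)
    print(f"candidate planet period ~ {P_planet:,.0f} yr")

    for p, q in [(2, 1), (3, 1), (3, 2), (4, 1)]:
        print(f"  {p}:{q} resonance -> TNO period ~ {P_planet * q / p:,.0f} yr")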

Finally, Mike Brown spoke about one of the latest discoveries he and his team have made about the possible effects of Planet Nine on the solar system. Brown pointed out that the Sun’s axis is tilted by about 6° with respect to the axis of the solar system. If everything formed from the same protostellar nebula, why the misalignment? He showed that the presence of a distant, massive planet on an inclined orbit can actually create that misalignment over time, as a result of the planet’s large mass and long lever arm. The tug of the planet’s gravitational force effectively tilts the solar system over its lifetime! Brown’s models show that the predicted orbit and mass of Planet Nine are consistent with the 6° tilt that we see: chalk this up as one more piece of evidence supporting Planet Nine’s existence. The press release can be found here.


Fourth Press Conference: Updates from the Juno Mission at Jupiter (by Susanna Kohler)

In the second press conference of the day, we received updates on the mission status and science outcomes of the Juno mission to Jupiter. Juno, launched in 2011, is the second of NASA’s New Frontiers programs (the first was the New Horizons mission to Pluto). It arrived at Jupiter on July 4th of this year, and it’s currently in a 53.4-day highly elliptical orbit around the planet.


Artist’s concept of the Juno mission to Jupiter. [NASA/JPL-Caltech]

David Schurr, the deputy director of NASA’s Planetary Science Division, opened the conference with some bad news: the Juno spacecraft went into safe mode at 10:47 PDT last night, just 13 hours from its closest approach to Jupiter in its orbit. This means that the team wasn’t able to do the science pass they had planned during this second “perijove” flyby of the mission.

Scott Bolton, Juno’s PI, provided more information about Juno’s current state. Bolton’s take on the recent events was calm: though the spacecraft evidently encountered something unexpected, it responded exactly how it was supposed to. Juno is currently healthy and in no danger, and it will continue on its current 53.4-day orbit while scientists analyze what triggered the safe-mode entry.

This glitch follows on the heels of another, unrelated delay: last week the decision was made to postpone a burn of Juno’s engines to reduce its period from 53.4 days to 14 days. This “period reduction maneuver” was put off when unexpected behavior was discovered with a pair of valves operating the propulsion system. The team is currently analyzing this as well, to determine whether it will be safe to fire the engines on Juno’s next pass. When asked what he considered to be the worst-case scenario for the mission, Bolton drew laughs with his response: “I have to be patient and get the science slowly.” He elaborated that Juno’s current orbit will produce exactly the same science results as the planned 14-day orbit, just a bit slower!

Bolton then walked us through a little of what we’ve learned from Juno so far. Details from this are discussed below, in Natasha’s summary of Bolton’s plenary talk later in the afternoon.

This image of the cyclones at Jupiter’s south pole was processed starting from JunoCam’s raw data, which is publicly available. [NASA/SwRI/MSSS/Roman Tkachenko]

Next Candice Hansen, JunoCam imaging scientist from the Planetary Science Institute, told us about the public outreach camera mounted on Juno. This camera was designed to engage the public with the Juno mission, but it turns out it’ll also produce lots of interesting science! Already, we have some spectacular images of Jupiter’s south pole (the first time this has been imaged) which reveal the region’s cloud structure. In particular, the pole is swirling with cyclonic storms, and the presence of one on the terminator provides us with new 3D information about the dynamics of the atmosphere. Analysis of this storm shows that it’s a whopping 7,000 km (more than half the size of Earth) across and towers ~85 km vertically.

The public has several ways they can engage with JunoCam. Public input will drive the target selection for the camera, with proposals, discussion, and eventual voting planned to be enabled late this year or early next year. And all images taken with JunoCam are uploaded in raw format for anyone to download, process, and upload their final images. This has already resulted in some spectacular public-produced images using JunoCam data, viewable here, and should continue to result in stunning visuals and exciting science in the future!


Plenary Talks (by Natasha Batalha)

Early Results from Juno Mission at Jupiter: Scott Bolton

Dr. Scott Bolton followed up his press conference by giving a plenary talk on Juno’s early science results. Though Juno is currently in safe mode, the team is continuing to analyze data from the close flyby of August 27. Dr. Bolton started his talk by showing a new brightness map of the northern aurora. Just from this first map the team is "seeing things we did not expect". Both the aurora and the magnetic field are much stronger than models previously suggested. The team is working on creating new models to match their preliminary observations. Gravity science is also forcing theorists to revisit their models, and on December 11th we will have a factor of 20 improvement in this data.

Preliminary Juno data from the Microwave Radiometer instrument show that the bands we see on the surface of Jupiter extend down to depths of ~200 bars (300-400 km). Image credit: NASA/JPL

Next, Dr. Bolton showed the audience observations taken with Juno’s Microwave Radiometer instrument (MWR). MWR’s mission is to understand and characterize the interior layers of Jupiter. There are six channels, spanning wavelengths of 1–50 cm, which together can probe down to pressures of about 200 bars (roughly 400 km). Already, the team can tell that the large banding structure that we see on the surface extends down to ~300-400 km.
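
For orientation, the quoted wavelength range corresponds to radio frequencies of roughly 0.6–30 GHz; a trivial conversion (f = c/λ) in Python, using the round-number endpoints mentioned above rather than the instrument's actual channel frequencies:

    c = 2.998e10                          # speed of light in cm/s
    for wavelength_cm in (1.0, 50.0):     # endpoints quoted above
        print(f"{wavelength_cm:5.1f} cm  ->  {c / wavelength_cm / 1e9:5.2f} GHz")

Longer wavelengths emerge from deeper, higher-pressure levels, which is why a spread of channels lets MWR map the banding as a function of depth.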

Juno’s main mission is to gain understanding of the Solar System’s beginnings by revealing the origin and evolution of Jupiter. Despite the current difficulties the team is having, Dr. Bolton ended by reassuring the audience that the team will still be able to complete all the goals it set out to achieve at the start.

Dawn at Ceres: Michael Toplis, Cristina De Sanctis

After Dawn launched in September of 2007, it spent four years focused on studying Vesta, the second most massive asteroid. Since February 2015, it has been on course to study Ceres. Dr. Michael Toplis and Dr. Cristina De Sanctis tag-teamed the second plenary to reveal preliminary studies concerning Ceres.

Like with Juno, Dawn is also trying to measure the gravity field of Ceres. Doing so will give us insights into whether or not Ceres is homogeneous (is it a solid cue ball or a layered onion?). New gravity data has already ruled out a homogeneous mixture. Instead, the Dawn team suggests that Ceres has a weak interior and that water and other light materials may have been partially separated from the rock. This layering is much weaker than the layering you find in Earth’s interior or even our very own Moon.

A photo of Ahuna Mons, the only mountain on the entire surface of Ceres. Ahuna Mons is only 3 miles high and it is likely volcanic in origin. Image Credit: NASA/JPL

The Dawn team has also been looking carefully at the surface of Ceres. Right off the bat it is easy to see that Ceres is highly cratered, with one small exception: Ahuna Mons, a small, 3-mile-high mountain discovered by the Dawn team. Ahuna Mons is the only mountain on the entire object that might have been volcanic in origin. Along with the heavy cratering, Dr. Cristina De Sanctis discussed the discovery of phyllosilicates all over Ceres’ surface. Phyllosilicates are rich in magnesium with some ammonium in their crystalline structure. The fact that these are found all over Ceres’ surface means that the surface has been altered by some interaction with water.

The team concluded by rounding up what all the information about Ceres is pointing to. It is possible that either Ceres formed in the trans-Neptune disk before it was implanted into the main belt, OR Ceres formed closer to its present position by accreting material that drifted in from greater distances. More data will start to yield a better image of what is going on with this interesting object.

by Astrobites at October 20, 2016 09:02 AM

October 19, 2016

Emily Lakdawalla - The Planetary Society Blog

Brief update: Opportunity's attempt to image Schiaparelli unsuccessful
Today, the Opportunity rover attempted a difficult, never-before-possible feat: to shoot a photo of an arriving Mars lander from the Martian surface. Unfortunately, that attempt seems not to have succeeded. Opportunity has now returned the images from the observation attempt, but Schiaparelli is not visible.

October 19, 2016 10:35 PM

Emily Lakdawalla - The Planetary Society Blog

ExoMars: Long day’s journey into uncertainty
Trace Gas Orbiter is successfully in orbit at Mars, but the fate of the Schiaparelli lander is uncertain.

October 19, 2016 08:04 PM

Peter Coles - In the Dark

KiDS-450: Testing extensions to the standard cosmological model [CEA]

Since I’ve just attended a seminar in Cardiff by Catherine Heymans on exactly this work, I couldn’t resist reblogging the arXiver entry for this paper which appeared on arXiv a couple of days ago.

The key finding is that the weak lensing analysis of KiDS data (which is mainly sensitive to the distribution of matter at low redshift) does seem to be discrepant with the predictions of the standard cosmological model established by Planck (which is sensitive mainly to high-redshift fluctuations).

Could this discrepancy be interpreted as evidence of something going on beyond the standard cosmology? Read the paper to explore some possibilities!


We test extensions to the standard cosmological model with weak gravitational lensing tomography using 450 deg$^2$ of imaging data from the Kilo Degree Survey (KiDS). In these extended cosmologies, which include massive neutrinos, nonzero curvature, evolving dark energy, modified gravity, and running of the scalar spectral index, we also examine the discordance between KiDS and cosmic microwave background measurements from Planck. The discordance between the two datasets is largely unaffected by a more conservative treatment of the lensing systematics and the removal of angular scales most sensitive to nonlinear physics. The only extended cosmology that simultaneously alleviates the discordance with Planck and is at least moderately favored by the data includes evolving dark energy with a time-dependent equation of state (in the form of the $w_0-w_a$ parameterization). In this model, the respective $S_8 = \sigma_8 \sqrt{\Omega_{\rm m}/0.3}$ constraints agree at the $1\sigma$ level, and there is ‘substantial concordance’ between…

View original post 159 more words
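
For readers not used to the notation in the abstract, \(S_8\) simply rescales the amplitude of matter fluctuations by the matter density so that lensing and CMB constraints can be compared on a single axis. A one-line illustration in Python, with placeholder (roughly Planck-like) inputs rather than numbers taken from the paper:

    import math

    sigma_8, Omega_m = 0.83, 0.31         # illustrative inputs, roughly Planck-like
    S_8 = sigma_8 * math.sqrt(Omega_m / 0.3)
    print(f"S_8 = {S_8:.3f}")             # ~0.84 for these inputs

The KiDS–Planck discordance discussed above is, to a large extent, a disagreement over this single number.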

by telescoper at October 19, 2016 02:26 PM

Peter Coles - In the Dark

Chuck Berry on a Summer’s Day

I was meaning to post this yesterday about Chuck Berry to mark his 90th Birthday. I’m putting it here as a bit of an oddity but I hope you find it interesting.

Chuck Berry appeared in Bert Stern’s classic film Jazz on a Summer’s Day, which was filmed at the 1958 Newport Jazz Festival. He performed on that occasion with a pick-up band called the Newport All-Stars, and the number that made it into the film was Sweet Little Sixteen, a tune that he actually wrote. I find two things fascinating about this performance. One is that the "backing band" is a stellar group of Jazz legends: the drummer is the great Jo Jones (who led the lightly swinging rhythm section of the great Count Basie band of the 1930s); the trumpeter is Buck Clayton, another Basie alumnus; and the trombonist is none other than Jack Teagarden. To a Jazz fan like myself, the talents of these musicians are totally wasted: they seem somewhat bemused by Chuck Berry’s gyrations on stage as well as bored by the material. When the time comes for the improvised solos that a jazz audience demands, only the relatively unknown clarinettist Rudy Rutherford – usually a tenor saxophonist who played with a number of bands, including Count Basie’s – was prepared to stand up and be counted. His strange effort is evidently a source of great amusement to the rest of the band, but at least he got into the spirit!

The other fascinating thing is what a historical document this is. During the 1950s Jazz was beginning to lose out to Rock and Roll in the popularity stakes, hence the plan of booking Chuck Berry to boost the audience figures at the Newport Jazz Festival. The tension on stage is almost palpable and even Chuck Berry occasionally looks a bit embarrassed by the whole thing. But it’s also a wonderfully observed portrayal of the styles of the time, especially through the audience shots. I wonder what happened to the cute couple dancing to this performance?

Anyway, belated best wishes on his 90th Birthday, here’s Chuck Berry recorded live 58 years ago at the Newport Jazz Festival in 1958, singing and playing Sweet Little Sixteen.


P.S. I forgot to mention the superb photography.







by telescoper at October 19, 2016 10:52 AM

Tommaso Dorigo - Scientificblogging

Physics Outreach With Music

Last August 27 a full-day outreach event was held in the nice small town of Veroia, in northern Greece, as one of the satellite activities to the international conference “Quark Confinement and the Hadron Spectrum” which took place in Thessaloniki during the following days.

read more

by Tommaso Dorigo at October 19, 2016 10:44 AM

astrobites - astro-ph reader's digest

48th DPS/11th EPSC Meeting: Day 2

Welcome to Day 2 of the joint 48th meeting of the Division for Planetary Sciences (DPS) and 11th European Planetary Science Congress (EPSC) in Pasadena, California! Two astrobiters are attending this conference, and we will report highlights from each day here. If you’d like to see more timely updates during the day, we encourage you to search the #DPSEPSC hashtag!

Town Hall: Observing the Solar System with JWST (by Susanna Kohler)


An artist’s illustration of the James Webb Space Telescope, a joint effort between NASA, the European Space Agency, and the Canadian Space Agency. A town hall today focused on how JWST can be used to observe the solar system. [NASA/JWST]

The James Webb Space Telescope, the upcoming infrared space telescope with an unprecedented 6.5-m mirror, is highly anticipated for the new view it will give us of the universe. But not all of JWST’s observations will be of distant stars and galaxies — the telescope is fully capable of making detections much closer to home.

Though JWST won’t be able to observe in the inner solar system, it will be able to examine planets, satellites, rings, asteroids, Kuiper belt objects, and comets located at the distance of Mars and outward. Here are just a few of the solar-system observations the planetary community has proposed that JWST will be well-suited to make.

  1. Observations of Mars
    JWST can examine Mars’s atmosphere globally, allowing us to learn about the planet’s past habitability, its current water sources, and the chemical stability of its atmosphere.
  2. Observations of Asteroids and Near-Earth Objects
    JWST can provide imaging and spectroscopy that will allow us to learn about the albedo, size, surface roughness, thermal inertia, and surface composition of small bodies in the solar system.
  3. Observations of the Giant Planets
    JWST can examine auroral processes, track atmospheric dynamics after impact events on the planets, and investigate major storm systems.
  4. Observations of Rings and Small Satellites
    JWST can discover new rings and moons, probe ring structure with occultations, probe the composition of the rings with spectroscopy, and investigate how these systems evolve over time.
  5. Observations of Titan
    JWST can provide long-term monitoring of the changing seasons on Saturn’s moon, investigating the evolution of its atmospheric composition, clouds and hazes, and surface temperatures and features.
  6. Observations of Dwarf Planets
    JWST can create detailed maps of the compositions of distant dwarf planets beyond the orbit of Neptune — in particular, those with significant inventories of volatile ices on their surfaces.

A nerve-wracking moment wherein JWST science instruments are lifted by crane above the mirror, and both are suspended face-down over the clean-room floor as the observatory is assembled. [NASA/Chris Gunn]

In today’s town hall, JWST Deputy Project Scientist for Planetary Science Stefanie Milam (NASA/GSFC) provided an overview of JWST, discussed how it could be used for planetary science, and gave us an update on the telescope’s status.

JWST’s progress is grouped into yearly themes: 2013’s was instrument integration, 2014’s was manufacturing of the spacecraft, 2015’s was assembly of the mirror, and 2016’s has been assembly of the observatory. At this point the mirrors and instruments are installed, and the assembly of JWST is nearing completion! 2017 will be a year of testing all the components of the telescope at the Johnson Space Center, in preparation for a 2018 launch.

Next up in the town hall, STScI Solar System Science Lead John Stansberry provided us with details of JWST’s observing modes and capabilities, and discussed what they mean for astronomers interested in proposing observing time on the telescope, particularly for planetary observations. Will Grundy (Lowell Observatory) then discussed the use of JWST for high-resolution imaging. Ultimately JWST will have roughly comparable angular resolution to Hubble, but in infrared wavelengths instead of optical. The depth and detail that this will provide should make for both spectacular images and exciting new science!


An old-school stereoscopic slide viewer. [ThePassenger]

Finally, Joel Green (STScI) pitched an idea to use both JWST and Hubble for combined observations of the same targets. The two telescopes, which overlap in observing wavelength between 0.7–1.6 µm, are separated by a 1.5 million km baseline. This means that, when used together, they could produce stereoscopic images similar to the “magic eye” pictures many of us have spent hours staring at cross-eyed!

Why is this better than taking two images 6 months apart with the same telescope, making use of annual parallax to create a baseline? The advantage to using both JWST and Hubble together is simultaneity: if Hubble and JWST make their observations at the same time, we can create stereoscopic images of transient events. Potential cases where this is useful include comet/asteroid activity, cometary collisions, and cloud/storm features on giant planets.
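
To get a feel for how large a stereoscopic effect that baseline gives (an editorial illustration, not a number from the talk): the parallax angle is simply the baseline divided by the distance to the target. A short Python sketch with illustrative outer-solar-system distances:

    import math

    baseline_km = 1.5e6                   # approximate JWST-Hubble separation
    AU_KM = 1.496e8
    for name, dist_au in [("comet at 3 AU", 3.0),
                          ("Jupiter at ~4.2 AU", 4.2),
                          ("Saturn at ~8.5 AU", 8.5)]:
        theta_arcsec = math.degrees(baseline_km / (dist_au * AU_KM)) * 3600.0
        print(f"{name:20s} parallax ~ {theta_arcsec:6.0f} arcsec")

Shifts of hundreds of arcseconds against the background sky are enormous by astronomical standards, which is why simultaneous observations could yield genuinely three-dimensional views of transient events.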

Second Press Conference (by Natasha Batalha)

Today’s press conference centered on results from the New Horizons mission. Before the speakers began, New Horizons’s Principal Investigator, Alan Stern, said a few words. To give you an idea of what it has been like for the team, Dr. Stern said that during it all they "felt like doctors triaging patients." The full press release can be found here.

Pluto’s Extreme Surface Reflectance Variations: Bonnie Buratti (NASA Jet Propulsion Laboratory) 

One of the major goals of the New Horizons mission was to answer the question of how reflective Pluto really is and how exactly it scatters light. Dr. Bonnie Buratti was excited to show a map of the albedo (reflectivity) of Pluto. The map shows a region of very high (nearly perfect) reflectivity right in the middle of Sputnik Planitia, a large geological feature on Pluto. Dr. Buratti and her team were interested in knowing which other objects in the Solar System, if any, exhibit this same behavior. They found that the only other system with a similar large range in reflectivity was Iapetus, one of Saturn’s moons. They also looked at objects in the Kuiper belt and found one object, Eris, with regions of nearly perfect reflectivity. Dr. Buratti and her team are excited because they think this might mean Eris is geologically active, like Pluto.

Possible Clouds on Pluto: Alan Stern (Southwest Research Institute)

Moving from the surface of Pluto to the atmosphere, Dr. Alan Stern continued to talk about the possibility of clouds on Pluto. Clouds are common across nearly all the planets in our Solar System: Venus, Earth, Mars, Jupiter, etc. The New Horizons team had previously announced that Pluto’s atmosphere was enveloped in haze layers. This detection posed a number of mysteries. The hazes appeared very high in the sky and the team is still unsure how they have formed. These hazes are only about 25% optically thick, however — which means that to date, clouds, which are much more optically thick, have not been detected. Today, Dr. Stern announced the detection of 7 possible small clouds lying very low near the surface of Pluto. Altogether, these 7 clouds take up less than 1% of the surface area of Pluto, meaning that generally, Pluto is cloud-free. In the future, it will be interesting to understand what these potential clouds are made of, and if they aren’t clouds, what we are in fact seeing.

Seven possible clouds in Pluto’s atmosphere. [NASA/JHUAPL/SwRI]

Landslides on Charon: Ross Beyer (NASA Ames Research Center) 

Straying away from Pluto altogether, Dr. Ross Beyer discussed the discovery of landslides on Pluto’s largest moon, Charon. This is peculiar because the New Horizons data shows that Pluto is devoid of any landslides. And as Dr. Stern discussed in his plenary talk yesterday, Charon is geologically inactive, as compared to Pluto. So why does Charon have these features, but not Pluto? In fact, these are the first landslides we’ve seen this far away from the Sun. Without a dedicated orbiter, we don’t know what material the landslides are made of or why they formed. It will be very interesting for the team to see if they can detect these landslides anywhere else in the Kuiper Belt.

Landslides on Charon, discovered by New Horizons. [NASA/JHUAPL/SwRI]

Hubble Reveals that New Horizons Flyby Target 2014 MU69 is Red: Susan Benecchi/Amanda Zangari (Planetary Science Institute) 

The last press release was given by Dr. Amanda Zangari, a post-doctoral researcher at Southwest Research Institute, who was filling in for Dr. Susan Benecchi. The New Horizons mission has completed its initial mission requirements and has already started preparing for its extended mission. During its extended mission, the main flyby target is 2014 MU69. This object was discovered by the Hubble Space Telescope and is located in what is called the "cold classical Kuiper Belt". The cold classical Kuiper Belt is a very old primordial region where none of the objects are interacting with Neptune and all of the objects have very low inclinations. This leads scientists to think that 2014 MU69 is one of the ancient building blocks of the planets in our Solar System. One method of verifying this (before actually going there) is to measure the color of the object. Data from Hubble reveal that the object is indeed red, suggesting that it may in fact be part of this primordial region of the Kuiper Belt. Therefore, New Horizons will be heading there and on January 1, 2019 we will know for sure!


by Astrobites at October 19, 2016 06:54 AM

October 18, 2016

John Baez - Azimuth

Complex Adaptive System Design (Part 2)

Yesterday Blake Pollard and I drove to Metron’s branch in San Diego. For the first time, I met four of the main project participants: John Foley (math), Thy Tran (programming), Tom Mifflin and Chris Boner (two higher-ups involved in the project). Jeff Monroe and Tiffany Change gave us a briefing on Metron’s ExAMS software. This lets you design complex systems and view them in various ways.

The most fundamental view is the ‘activity trace’, which consists of a bunch of parallel rows, one for each ‘performer’. Each row has a bunch of boxes which represent ‘activities’ that the performer can do. Two boxes are connected by a wire when one box’s activity causes another to occur. In general, time goes from left to right. Thus, if B can only occur after A, the box for B is drawn to the right of the box for A.

The wires can also merge via logic gates. For example, suppose activity D occurs whenever A and B but not C have occurred. Then wires coming out of the A, B, and C boxes merge in a logic gate and go into the D box. However, these gates are a bit more general than your ordinary Boolean logic gates. They may also involve ‘delays’, e.g. we can say that A occurs 10 minutes after B occurs.

I would like to understand the mathematics of just these logic gates, for starters. Ignoring delays for a minute (get the pun?), they seem to be giving a generalization of Petri nets. In a Petri net we only get to use the logical connective ‘and’. In other words, an activity can occur when all of some other activities have occurred. People have considered various generalizations of Petri nets, and I think some of them allow more general logical connectives, but I’m forgetting where I saw this done. Do you know?
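
As a purely illustrative sketch of the gate semantics described above (this is not ExAMS code, whose internals aren't public), here is a tiny Python model of an activity D that fires once A and B have occurred, C has not, and a fixed delay has elapsed after the last enabling event:

    from dataclasses import dataclass, field

    @dataclass
    class Gate:
        """Fires when all required activities have occurred, no forbidden
        activity has occurred, and a delay has elapsed."""
        requires: set                   # activities that must have occurred
        forbids: set                    # activities that must not have occurred
        delay: float = 0.0              # minutes after the last required event
        occurred: dict = field(default_factory=dict)   # activity name -> time

        def record(self, name, time):
            self.occurred[name] = time

        def fire_time(self):
            if not self.requires.issubset(self.occurred):
                return None             # a required activity is still missing
            if self.forbids.intersection(self.occurred):
                return None             # a forbidden activity has happened
            return max(self.occurred[a] for a in self.requires) + self.delay

    d = Gate(requires={"A", "B"}, forbids={"C"}, delay=10.0)
    d.record("A", 0.0)
    d.record("B", 5.0)
    print(d.fire_time())                # 15.0: D fires 10 minutes after B

An ordinary Petri net transition is roughly the special case with an empty 'forbids' set and zero delay; the 'not C' condition and the delay are exactly the extra expressiveness being asked about here.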

In the full-fledged activity traces, the ‘activity’ boxes also compute functions, whose values flow along the wires and serve as inputs to other boxes. That is, when an activity occurs, it produces an output, which depends on the inputs entering the box along input wires. The output then appears on the wires coming out of that box.

I forget if each activity box can have multiple inputs and multiple outputs, but that’s certainly a natural thing.

The fun part is that one can zoom in on any activity trace, seeing more fine-grained descriptions of the activities. In this more fine-grained description each box turns into a number of boxes connected by wires. And perhaps each wire becomes a number of parallel wires? That would be mathematically natural.

Activity traces give the so-called ‘logical’ description of the complex system being described. There is also a much more complicated ‘physical’ description, saying the exact mechanical functioning of all the parts. These parts are described using ‘plugins’ which need to be carefully described ahead of time—but can then simply be used when assembling a complex system.

Our little team is supposed to be designing our own complex systems using operads, but we want to take advantage of the fact that Metron already has this working system, ExAMS. Thus, one thing I’d like to do is understand ExAMS in terms of operads and figure out how to do something exciting and new using this understanding. I was very happy when Tom Mifflin embraced this goal.

Unfortunately there’s no manual for ExAMS: the US government was willing to pay for the creation of this system, but not willing to pay for documentation. Luckily it seems fairly simple, at least the part that I care about. (There are a lot of other views derived from the activity trace, but I don’t need to worry about these.) Also, ExAMS uses some DoDAF standards which I can read about. Furthermore, in some ways it resembles UML and SysML, or more precisely, certain parts of these languages.

In particular, the ‘activity diagrams’ in UML are a lot like the activity traces in ExAMS. There’s an activity diagram at the top of this page, and another below, in which time proceeds down the page.

So, I plan to put some time into understanding the underlying math of these diagrams! If you know people who have studied them using ideas from category theory, please tell me.

by John Baez at October 18, 2016 05:19 PM

Symmetrybreaking - Fermilab/SLAC

It came from the physics lab

Settle in for a physics-themed Halloween movie marathon.

Looking for a way to celebrate Halloween? Has 2016 got you too spooked to go outside? Pop some corn and sample Symmetry’s little-known series of physics horror films instead. (Actual movies not included.)

Artwork by Sandbox Studio, Chicago with Ana Kova

Someone’s taken their love of the Higgs boson one step too far!


Artwork by Sandbox Studio, Chicago with Ana Kova

I need an old theorist and a young theorist.


Artwork by Sandbox Studio, Chicago with Ana Kova

Entropy’s coming to get you, Barbara!


Artwork by Sandbox Studio, Chicago with Ana Kova

He’s heeere.

by Kathryn Jepsen at October 18, 2016 03:56 PM

astrobites - astro-ph reader's digest

Two weeks left to apply to write for Astrobites!

There are two weeks left to apply to join the Astrobites team! If you are a graduate student in astronomy and astrophysics and you enjoy writing about science, we want to hear from you! The deadline is 1 November.

The application and further details can be found at If you have any questions contact us at

Good luck to all those working on applications!

by Astrobites at October 18, 2016 03:01 PM

Lubos Motl - string vacua and pheno

A Cambridge video introduction to strings
Giotis has told us about a new 30-minute video presenting the basics of string theory and related matters:
Elemental Ideas – String Theory Part One (click for the video)

Elemental Ideas – String Theory Part Two (new, added on October 18th)
It's a fun conservative video focusing on the physics ideas and not the sociological šit that tries to surround string theory in the recent decade.

I've never stopped counting Cambridge among the top 5 theoretical physics places on the European continent (a concept that includes certain nearby islands) so it's natural for them to offer some seriously good video.

The host and the generalized mastermind of the program is Kerstin Göpfrich who is smiling all the time. In fact, it seems that everyone is smiling much more than what you would statistically expect – and this excess may look like evidence in favor of some Hillary-style unnaturalness in the smiling patterns. ;-)

Maybe they're trying to emulate the first important string theorist who is interviewed, David Tong. David Tong is a great guy, I knew him very well in the Greater Boston area, and indeed, he was smiling all the time and his enthusiasm is largely contagious. Aside from important research, he's also produced various interesting lectures and free textbooks on string theory etc. I recommend you to Google search for those if you're at least slightly interested. You may also watch fourteen 1-hour-plus lectures on quantum field theory which have actually been watched by hundreds of thousands of people.

Tim Adamo is another researcher who is probably not as a natural smiler – he's a naturally muscular guy – but he smiles a lot in the moving pictures, too. His research focus is on twistor-based and Wilson-lines-based calculations of scattering amplitudes but it seems to me that he knows a lot of string theory. His main excitement about string theory – it holds also/mainly for his twistor formulae – is that lots of Feynman diagrams may be replaced by one within string theory. Later, he also spends quite some energy by explaining the important point that supersymmetry really makes things simpler and cleaner. It doesn't "add mess" which is how supersymmetry is often misinterpreted in the popular press.

At any rate, the video is trying to be more technical than the generic popular introduction to string theory that millions have watched. The physicists in the show don't avoid words such as "general relativity", "its supersymmetric generalization", "scattering amplitudes", "gauge theory", "one-thousand-loop Feynman diagram contributing to 6-graviton scattering", "punctures", "gauge fixing", "anomaly-free operators", "conformal transformations", and so on. Of course, I don't expect viewers to understand these concepts perfectly if they haven't understood them to start with. But it may give them hints about some essential ideas they may want to be interested in, or make Internet searches about.

After Adamo, Tong returns with his somewhat more philosophical comments about the need to use Nature's own language, that of mathematics, which nevertheless makes the progression logical. In another segment, Tong tries to explain the Dirac equation as his favorite equation. By pure thought, Dirac could predict antimatter as well etc. (I wouldn't historically agree with David's assertion that Dirac needed 2 years to see that his equation led to the spin – it was really a starting point before he began to search for the equation – or that it had negative-energy solutions – he surely realized that very early, too.) When asked why people found the abstract idea of strings, Tong sensibly talks about flux tubes between quarks.

To some extent, you may always see some degree of local patriotism and self-promotion. Dirac was clearly one of the big guys connected with Cambridge.

Part Two is coming on October 17th. Update: I've added the link to it at the top.

David Tong is explaining solitons and flux tubes – those are words that appear rather frequently in David's papers and research talks.

Kerstin begins with the ancient history of elements and the more recent history of nuclear physics. Something is said about the uncertainty principle. It's rather fast. I would probably not absorb it if I didn't know these things. But it's pleasant to look at Kerstin's smile and gestures. ;-)

She shows what the magnet on the bottom side of some tablecloth does to the ferromagnetic dust on top of it.

David immediately begins some more hardcore talk. Strings may be obtained as solitons – while they are elementary in string theory. What is elementary and what is made of something else often depends on the formalism or viewpoint or "the choice of a dual theory".

He writes down some relevant QFT equations for solitons. The covariant derivative of a Higgs field vanishes. Some magnetic field is stuck within the resulting tube. Kerstin called it the Tong equation which is a bit exaggerated, David, isn't it? ;-)
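
For readers wondering what such equations look like, here is a minimal sketch of the standard Nielsen–Olesen (Abrikosov) vortex asymptotics in the abelian Higgs model – presumably the kind of thing on the board, though the conventions used in the video may differ. Far from the core the covariant derivative of the Higgs field vanishes,

\[ D_\mu\phi = (\partial_\mu - ieA_\mu)\phi \to 0, \qquad \phi \to v\, e^{in\theta}, \]

which forces the gauge field to a pure-gauge form at infinity and traps a quantized magnetic flux inside the tube,

\[ \Phi = \oint A_i\, dx^i = \frac{2\pi n}{e}. \]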

Dr Tong is drowning in the tub. I've already seen it. Despite the last bubbles of death at the end, he ultimately saves his life. Using a blue dye, he shows a vortex in the tub. While he is saying that he wants to study everything in terms of mathematics, he shows some incredible solitonic strings-of-excited-water produced by a pump connected to an aquarium.

David takes a ladder... and becomes a construction worker? No, he is going to produce vortices using his hand. Another incredible thing – some very thin and long solitonic vertical string-of-vacuum in the water. He has to become a smoker to show some solitons shot by a cannon next to a cigarette. He sees solitons everywhere.

What holds everything together? You can ironically find out if you smash things so hard that they fall apart. No one has ever seen a quark. The elastic band can't be beaten. "No one knows why only the color is confined," they claim. Well, it depends.

There's a fun research discussion between several physicists (Amihay Hanany is one of them). It looks very close to an authentic one but I would still figure out that it's mostly staged, anyway. ;-) Tong mentions the Clay Institute $1 million problem on turbulence. Additional scenes of Tong taking a bath, riding a scooter, playing soccer in his office, etc.

For 5 years, he tried to figure out why color was confining, so he studied a different system with scalars etc. and found all the solitons. It sounds almost exactly like a sensible description of a meaningful research project, but not quite. ;-) I would have to look at the papers to know what he was actually trying to do.

At 15:15, they switch to black holes in string theory.

Stephen Hawking wants to brainwash us into thinking that no one has a clue what a black hole is. How does a black hole conduct electricity? Surprisingly, the mechanism is similar to the way some exotic metals conduct electricity.

Kerstin says that string theory has unexpected applications and David says something even more original. String theory has been immensely helpful to make us frame the right questions, especially in the black hole physics context.

They switch from David to Ms Sasha Haco, basically a hot yet smart PhD student of Hawking's. (Does Hawking hire his students in a similar way to Trump's Miss contests? LOL) Black holes are cool and a useful playground to test ideas etc. Haco correctly defines a black hole as a region where gravity is so strong that not even light can escape, which is why black holes look black. (That's seemingly trivial but some folks, like Ms Sabine Hossenfelder, can't even get this far – she totally incorrectly defines a black hole as something with a singularity.)

We can't safely visit the black holes but Haco studies how black holes affect the stars around them etc. She studies theoretical aspects of those things. Kerstin is genuinely surprised that those things are unknown. Well, some of them are, some of them are not, Haco vaguely but correctly answers.

Haco actually seems to make a statement – a typically Hawking one, though one that he has already admitted to be wrong – that pure states evolve into mixed states. Haco talks about the three laws of black holes – soon reinterpreted as thermodynamics.

Maybe it's just my feeling but Kerstin asks Haco somewhat more "skeptical" questions, as if she were a journalist and Haco were in the wrong party and Tong in the right one. Even if that's the case, and it's just my impression, there could be somewhat good reasons for this attitude, of course, such as this one.

Haco explains that she does theory work – pen and paper instead of spaceships flying to a black hole. When asked, Haco reveals that she wants her PhD thesis to be a solution of the information loss problem.

Good luck to Sasha!

Finally, she's also led to write an equation. Hers is \(S=A/4\) – the Bekenstein–Hawking entropy of a black hole, with the horizon area measured in Planck units – arguably a more fundamental one than the previous Tong equation. ;-) Kerstin notices it's a simpler equation than the previous one. Sasha Haco confirms that simpler equations are often better than others. There's a spacetime diagram on the blackboard and some variations of some fields. Sasha talks about the pair creation near the horizon.

What will the solution of the information loss problem bring to us?

We will see how quantum mechanics and GR actually work – in a way that will allow us to discover new physics.

In the final minute, Kerstin says that we may forever remain ignorant of whether string theory describes the world around us. But even if that's the case, it will be immensely helpful as a mathematical tool to do physics. That's why the physicists from Cambridge and its suburbs – i.e. the rest of the world – will keep on using string theory.

And Kerstin Göpfrich is now able (at least that's what she says) to understand why it's so important to study mathematics, the language in which God wrote the world.

by Luboš Motl at October 18, 2016 01:21 PM

October 17, 2016

The n-Category Cafe

Linear Algebraic Groups (Part 1)

I’m teaching an elementary course on linear algebraic groups. The main aim is not to prove a lot of theorems, but rather to give some sense of the main examples and the overall point of the subject. I’ll start with the ideas of Klein geometry, and their origin in old questions going back almost to Euclid.

John Simanyi has been taking wonderful notes in LaTeX, so you can read those!

Here’s the first lecture:

  • Lecture 1 (Sept. 22) - The definition of a linear algebraic group. Examples: the general linear group $\mathrm{GL}(n)$, the special linear group $\mathrm{SL}(n)$, the orthogonal group $\mathrm{O}(n)$, the special orthogonal group $\mathrm{SO}(n)$, and the Euclidean group $\mathrm{E}(n)$. The origin of groups in geometry: the parallel postulate and Euclidean versus non-Euclidean geometry. Elliptic and hyperbolic geometry.
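Since the whole point of calling these groups "algebraic" is that each of them is the zero set of polynomial equations in the matrix entries, here is a tiny numerical sketch (my own illustration, not from the lecture notes; the function names are mine) that checks those defining equations for a couple of sample matrices:

import numpy as np

# Each classical group below is cut out by polynomial equations in the matrix
# entries, which is what makes it a linear algebraic group.

def in_SL(g, tol=1e-10):
    # SL(n): the single polynomial equation det(g) - 1 = 0
    return abs(np.linalg.det(g) - 1) < tol

def in_O(g, tol=1e-10):
    # O(n): the quadratic equations g^T g - I = 0
    return np.allclose(g.T @ g, np.eye(g.shape[0]), atol=tol)

def in_SO(g, tol=1e-10):
    # SO(n) is the intersection of O(n) and SL(n)
    return in_O(g, tol) and in_SL(g, tol)

theta = 0.7
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])   # lies in SO(2)
shear = np.array([[1.0, 2.0],
                  [0.0, 1.0]])                           # det = 1, so in SL(2) but not in O(2)

print(in_SO(rotation), in_SL(shear), in_O(shear))        # True True False

Of course the real definition works over an arbitrary field and asks for the equations to be polynomial identities rather than numerical checks; the sketch is only meant to make the examples concrete.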

It’s sort of spooky how rather old questions about the parallel postulate ultimately led to non-Euclidean geometry and then Klein geometry. When did people start trying to derive the parallel postulate from the other postulates?

And since spherical trigonometry goes back to the Babylonians, why the heck did it take so long for people to notice that spherical — okay, elliptic — geometry obeys all the postulates of Euclidean geometry except the parallel postulate? Was it just the need to switch from the sphere to $\mathbb{R}P^2$?

The idea of alternative geometries should not be all that weird if you spend your nights using spherical trigonometry to study the stars and your days using ordinary trigonometry to study figures drawn on the sand. You might even get the idea that the Earth is a sphere, and that spherical geometry reduces to Euclidean geometry in the limit where the radius of the Earth goes to infinity.

by john at October 17, 2016 01:45 AM

The n-Category Cafe

Linear Algebraic Groups (Part 5)

Now let’s look at projective geometry from a Kleinian viewpoint. We’ll take the most obvious types of figures — points, lines, planes, and so on — and see which subgroups of $\mathrm{GL}(n)$ they correspond to. This leads us to the concept of ‘maximal parabolic subgroup’, which we’ll later generalize to other linear algebraic groups.

We’ll also get ready to count points in Grassmannians over finite fields. For that, we need the $q$-deformed version of binomial coefficients.

  • Lecture 5 (Oct. 6) - Projective geometry from a Kleinian perspective. The Grassmannians $\mathrm{Gr}(n,j)$ as spaces of points, lines, planes, etc. in projective geometry. The Grassmannians as quotients of the general linear group by the maximal parabolic subgroups $P_{n,j}$. Claim: the cardinality of $\mathrm{Gr}(n,j)$ over the finite field $\mathbb{F}_q$ is the $q$-binomial coefficient $\binom{n}{j}_q$. The mysterious sense in which set theory is linear algebra over the ‘field with one element’.
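The claim about cardinalities is easy to test by brute force in small cases. Here is a short sketch (my own check, not part of the notes; all names are mine) that counts the $j$-dimensional subspaces of $\mathbb{F}_q^n$ directly for a prime $q$ and compares the count with the $q$-binomial coefficient:

from itertools import product
from math import prod

def q_binomial(n, j, q):
    # [n choose j]_q = prod_{i=0}^{j-1} (q^(n-i) - 1) / (q^(j-i) - 1)
    num = prod(q**(n - i) - 1 for i in range(j))
    den = prod(q**(j - i) - 1 for i in range(j))
    return num // den

def rank_mod_p(rows, p):
    # Gaussian elimination over the field F_p (p prime)
    rows = [list(r) for r in rows]
    rank, col, ncols = 0, 0, len(rows[0])
    while rank < len(rows) and col < ncols:
        pivot = next((r for r in range(rank, len(rows)) if rows[r][col] % p), None)
        if pivot is None:
            col += 1
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        inv = pow(rows[rank][col], p - 2, p)          # inverse of the pivot mod p
        rows[rank] = [(x * inv) % p for x in rows[rank]]
        for r in range(len(rows)):
            if r != rank and rows[r][col] % p:
                f = rows[r][col]
                rows[r] = [(a - f * b) % p for a, b in zip(rows[r], rows[rank])]
        rank += 1
        col += 1
    return rank

def count_grassmannian(n, j, p):
    vectors = list(product(range(p), repeat=n))
    # count ordered linearly independent j-tuples ("frames") in F_p^n ...
    frames = sum(1 for t in product(vectors, repeat=j) if rank_mod_p(t, p) == j)
    # ... and divide by the number of ordered bases of a fixed j-dimensional space
    return frames // prod(p**j - p**i for i in range(j))

for n, j, p in [(3, 1, 2), (4, 2, 2), (4, 2, 3)]:
    print(count_grassmannian(n, j, p), q_binomial(n, j, p))   # each pair should agree

Setting $q = 1$ in the $q$-binomial recovers the ordinary binomial coefficient, which is one way to read the remark about the ‘field with one element’.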

by john at October 17, 2016 12:40 AM

October 16, 2016

Andrew Jaffe - Leaves on the Line

Wussy (Best Band in America?)

It’s been a year since the last entry here. So I could blog about the end of Planck, the first observation of gravitational waves, fatherhood, or the horror (comedy?) of the US Presidential election. Instead, it’s going to be rock ’n’ roll, though I don’t know if that’s because it’s too important, or not important enough.

It started last year when I came across Christgau’s A+ review of Wussy’s Attica and the mentions of Sonic Youth, Nirvana and Television seemed compelling enough to make it worth a try (paid for before listening even in the streaming age). He was right. I was a few years late (they’ve been around since 2005), but the songs and the sound hit me immediately. Attica was the best new record I’d heard in a long time, grabbing me from the first moment, “when the kick of the drum lined up with the beat of [my] heart”, in the words of their own description of the feeling of first listening to The Who’s “Baba O’Riley”. Three guitars, bass, and a drum, over beautiful screams from co-songwriters Lisa Walker and Chuck Cleaver.


And they just released a new record, Forever Sounds, reviewed in Spin Magazine just before its release:

To certain fans of Lucinda Williams, Crazy Horse, Mekons and R.E.M., Wussy became the best band in America almost instantaneously…

Indeed, that list nailed my musical obsessions with an almost google-like creepiness. Guitars, soul, maybe even some politics. Wussy makes me feel almost like the Replacements did in 1985.

IMG 1764

So I was ecstatic when I found out that Wussy was touring the UK, and their London date was at the great but tiny Windmill in Brixton, one of the two or three venues within walking distance of my flat (where I had once seen one of the other obsessions from that list, The Mekons). I only learned about the gig a couple of days before, but tickets were not hard to get: the place only holds about 150 people, but there were far fewer on hand that night — perhaps because Wussy also played the night before as part of the Walpurgis Nacht festival. But I wanted to see a full set, and this night they were scheduled to play the entire new Forever Sounds record. I admit I was slightly apprehensive — it’s only a few weeks old and I’d only listened a few times.

But from the first note (and after a good set from the third opener, Slowgun) I realised that the new record had already wormed its way into my mind — a bit more atmospheric, less song-oriented, than Attica, but now, obviously, as good or nearly so. After the 40 or so minutes of songs from the album, they played a few more from the back catalog, and that was it (this being London, even after the age of “closing time”, most clubs in residential neighbourhoods have to stop the music pretty early). Though I admit I was hoping for, say, a cover of “I Could Never Take the Place of Your Man”, it was still a great, sloppy, loud show, with enough of us in the audience to shout and cheer (but probably not enough to make very much cash for the band, so I was happy to buy my first band t-shirt since, yes, a Mekons shirt from one of their tours about 20 years ago…). I did get a chance to thank a couple of the band members for indeed being the “best band in America” (albeit in London). I also asked whether they could come back for an acoustic show some time soon, so I wouldn’t have to tear myself away from my family and instead could bring my (currently) seven-month-old baby to see them some day soon.

They did say UK tours might be a more regular occurrence, and you can follow their progress on the Wussy Road Blog. You should just buy their records, support great music.

by Andrew at October 16, 2016 06:15 PM

October 14, 2016

CERN Bulletin

Long-Term Collection

Dear Colleagues,

As previously announced in Echo (No. 254), your delegates took action to draw attention to the projects of the Long-Term Collections (LTC), the humanitarian body of the CERN Staff Association.

On Tuesday, 11 October, at noon, small Z-Cards were widely distributed at the entrances of CERN restaurants and we thank you all for your interest. We hope to have achieved an important part of our goal, which was to inform you, convince you and find new supporters among you. We will find out in the next few days! An exhibition of the LTC was also set up in the Main Building for the entire week.

The Staff Association wants to celebrate the occasion of the Long-Term Collection’s 45th anniversary at CERN because, ever since 1971, CERN personnel have shown great support in helping the least fortunate people on the planet in a variety of ways according to their needs.

On a regular basis, joint fundraising appeals are made with the Directorate to help the victims of natural disasters around the world. This is a way to take part in the global movement of solidarity in the aftermath of devastating disasters (tsunamis, earthquakes, floods) but this action has no direct link to the projects of the LTC. Still, we thank you all for contributing regularly to these collections too!

However, there is another way you can help those less fortunate in the long term. Remember that depending on whether you come from the South or the North, your life will be different. That is why, for 45 years already, every six months with the help of regular donations, the Staff Association Long-Term Collections have made small miracles happen for disadvantaged people in their local area. Please consult our page to see for yourself: 74 projects over the past 45 years, 9 of which are still ongoing. Join the LTC!

We want to sincerely thank all of our loyal contributors. Without your support, this would not have been possible. Still, we at the Association want to believe that new and young CERN employees are just as concerned about these inequalities as the founders of the LTC and those working for the cause today. It’s time for us all to take action.

Contact us at
and join the LTC family as a regular contributor!

October 14, 2016 04:10 PM

CERN Bulletin


What is CAPA?

The CAPA, or the Individual Cases Commission (Commission des Cas Particuliers), is responsible for assisting members of the Staff Association in their disputes with the Organization. A professional career is not always a bed of roses and, in certain situations, assistance is welcome.

In practical terms, CAPA is a group of 7 to 8 delegates of the Staff Association who are at the service of their colleagues, there to give them advice, inform them of their rights and obligations, guide and accompany them, and offer them support in various procedures, in full confidentiality and free of charge.

What kinds of issues are addressed by CAPA?

A variety of subjects are covered, including:

  • contractual situation: probation period – limited duration contracts (LD) and indefinite contracts (IC);
  • career evolution: advancement, promotion, Performance Improvement Plan;
  • administrative decisions;
  • equal opportunities and diversity;
  • health and safety;
  • appeal procedure and disciplinary procedure;
  • relations between colleagues, between supervisors and supervisees; etc.

How to ask CAPA for assistance?

Contact the Staff Association Secretariat. If you are not a member of the Staff Association and if your case is of general interest, you can also contact us.

When to ask CAPA for assistance?

Seek assistance as early as possible, right away when a difficult situation arises. Experience shows that prevention is better than cure.

Can I contact CAPA if I have already contacted other services?

Yes, of course. Any person (staff member, fellow, associated member of personnel, contractor’s personnel, etc.) can contact the CAPA even if they have already sought assistance from other CERN services, such as the Medical Service, Social Affairs Service, Human Resources, the Ombudsperson, etc. We are used to working together with different services.

How can CAPA intervene?

We offer an approach complementary to that of other CERN services. It is not our intention to take action in your place and find a solution on your behalf. Rather, we help you find a solution by yourself by explaining and exploring possible courses of action together with you in strictly confidential meetings. In general, an appointment is made to discuss your situation. Two members of CAPA, who are your colleagues, will listen to you and offer you advice. We also regularly consult legal advisors specialised in international law.

What is the topic of the moment?

Currently, an issue concerning many of us is the implementation of the five-yearly review and in particular the impacts that this review can have on your/our careers and your/our pensions (Echo No. 248 and Echo No. 252). Indeed, following the new career structure, effective from 1st September 2016, several colleagues have questions concerning their new classification within benchmark jobs and grades. You can find the answers to most of these questions in the FAQ list prepared by the Human Resources Department (HR).

Benchmark jobs and placements?

We remind you that CERN defines a benchmark job as “a grouping of individual work situations with similar main activities and a common aim”. A benchmark job covers a range of two or three grades in the new career structure, which encompasses 10 grades in total.

Your grade: your old career path and salary band unequivocally define your new grade, which cannot be contested.

Your benchmark job: your placement in a benchmark job is based on your job title and professional code as registered in the HR database when the change came into effect on 1st September 2016.

These placements may lead to several surprises. Therefore:

  • If you find that your professional code is not up-to-date and the benchmark job assigned to you does not correspond to your current functions, you should request a change of Benchmark Job Title while keeping your current grade;
  • If you find that one or more colleagues with the same job as you have been placed in a benchmark job covering a higher ranking of grades, you may request a change of Benchmark Job Title while keeping your current grade;
  • If you find that one or more colleagues with the same grade as you have been placed in a benchmark job covering a higher ranking of grades, you are free to request a change of Benchmark Job Title while keeping your current grade;
  • If for any other reason you find that the benchmark job provisionally assigned to you does not reflect your current functions, you can request a revision of the benchmark job while keeping your current grade.

How to proceed?

You have to send, as soon as possible, a letter to your supervisors and your Human Resources Advisor (HRA). Letter templates are available from your delegates or from the Staff Association.

The final confirmation of the Benchmark Job Title assigned to you will be communicated to you on 1st May 2017 at the latest. Let us be very clear, this discussion period with your management and the HRAs can only result in a change of your Benchmark Job Title and will not involve any change of grade. Indeed, the transition to a superior grade can only be considered within the framework of a promotion.

Worrying situations – Personal positions superior to the maximum of a grade?

We must also talk about our colleagues classed in a personal position superior to the maximum of their grade. Transitory measures will be applied to their cases through 2017, 2018, 2019 and 2020, but once this ends, they will be definitively blocked in this personal position, unless they are granted a promotion. Essentially, being blocked at the maximum of the grade or in a personal position exceeding this maximum is not shocking when it happens at the end of one’s career. This scenario already existed in the old career structure and demonstrated that the colleagues in question had built a remarkable career because they had reached the very top of their career path before retirement.

Unfortunately, some colleagues have made it known that they will be blocked in the new career structure much earlier, that is, 10 to 15 years before the end of their career. Being blocked before the age of 50 is acceptable neither for the person nor for CERN. The Staff Association is working jointly with the Management to resolve the situation by developing validation of skills acquired through experience (VAE), internal mobility, and career development interviews. Moreover, the Association has requested the Management to re-examine these situations as a priority, and is waiting for further information.

Finally, a very small number of colleagues who were granted exceptional career extension (ECE) before 1st September now find their career evolution prospects significantly reduced, along with their future pension, with no hope of improvement. Indeed, even if they were promoted to a superior grade, they would be again blocked in a personal position. In fact, they would need two successive promotions to regain a prospect of career evolution.

We encourage all colleagues involved in a blockage situation to get in contact with the Staff Association.

In conclusion, in most cases, the implementation of this new career structure is transparent and has little to no effect on career evolution. However, side effects are to be expected: positive, with new perspectives of evolution opened for some colleagues, but also negative for those whose careers will be blocked in the short term (after transitory measures). For them, the youngest in particular, the financial impacts can be of such a magnitude that an action is required. The CAPA is here to advise and help them where possible and to the best of their abilities.


October 14, 2016 04:10 PM

CERN Bulletin

Collection for Italy

Following the earthquake of 24 August in central Italy, many of you have expressed your solidarity.

The collection to support the victims raised a total of 10 000 CHF, which was transferred in its entirety to Italy’s civil protection through the Italian delegation to the CERN Council.

The CERN Directorate and the CERN Staff Association sincerely thank you for your generosity.

October 14, 2016 03:10 PM

CERN Bulletin


Every month, the GAC organises drop-in sessions with individual meetings.

The next session will be held on:

Tuesday, 1 November, from 1.30 pm to 4.00 pm
in the Staff Association meeting room.

The following session will take place on Tuesday, 29 November 2016.

The sessions of the Groupement des Anciens (GAC) are open to beneficiaries of the Pension Fund (including surviving spouses) and to all those who are approaching retirement. We warmly invite the latter to join our group by obtaining the necessary documents from the Staff Association.

Information:
e-mail:

October 14, 2016 03:10 PM

CERN Bulletin


Join the Staff Association now for 2017, the remaining quarter of 2016 is free!

Membership of the Staff Association is free for everyone joining during the last quarter of 2016. Take this opportunity to become a member of the SA. You can also enjoy our offers and partnerships, especially as we approach the holiday season.

As a reminder, the membership fee is:

  • 0.2 % of the annual basic salary for staff members with an indefinite contract (IC); the amount will be deducted automatically;
  • 50.00 CHF for staff members with a limited duration contract (LD), fellows and associated members of personnel.


Don’t wait any longer, join the Staff Association that represents all of you!

More information on

October 14, 2016 03:10 PM

Tommaso Dorigo - Scientificblogging

Another Stone On The Diphotonium Grave
Last December, when the ATLAS and CMS experiments gave two back-to-back talks at the end-of-the-year LHC "physics jamboree" in the CERN main auditorium, the whole world of particle physics was confronted with a new question nobody had seen coming: could a 750 GeV particle be there, decaying a sizable fraction of the time into pairs of energetic photons? What new physics could account for it? And how to search for an experimental confirmation in other channels or phenomena?


by Tommaso Dorigo at October 14, 2016 10:56 AM

October 13, 2016

Clifford V. Johnson - Asymptotia

Relaxin’

Well, since I just lost the last two and a half hours' work to a mystery crash (and Illustrator CS6 has no autosave*), I figured I'd lose another 20 minutes and prep a panel from a page of the book I've been working on today to:

(1) Share something from the project after a while of not doing so, and

(2) Show what I'd much rather be doing right now. I'm annoyed but trying to imagine myself in the picture... breathe...

Ok. Back to it. Looks like another 3am bedtime coming up...


*I know this, I just forgot to hit cmd-S for a bit. I've got very good at doing it regularly, but this time... Yeah I know...

The post Relaxin’… appeared first on Asymptotia.

by Clifford at October 13, 2016 12:56 AM

October 12, 2016

Symmetrybreaking - Fermilab/SLAC

Citizen scientists join search for gravitational waves

A new project pairs volunteers and machine learning to sort through data from LIGO.

Barbara Téglás was looking to try something different while on a break from her biotechnology work.

So she joined Zooniverse, a website dedicated to citizen science projects, and began to hunt pulsars and classify cyclones from her home computer.

“It’s a great thing that scientists share data and others can analyze it and participate,” Téglás says. “The project helps me stay connected with science in other fields, from anywhere.”

In April, at her home in the Caribbean Islands, Téglás saw a request for volunteers to help with a new gravitational-wave project called Gravity Spy. Inspired by the discovery of gravitational waves by the Laser Interferometer Gravitational-wave Observatory, or LIGO, she signed up the same day.

“To be a complete outsider and have the opportunity to contribute to an astrophysics project such as LIGO, it’s extraordinary,” Téglás says.

Tuning out the noise

It took a century after Albert Einstein predicted the existence of gravitational waves—or ripples in space-time—for scientists to build an instrument sophisticated enough to see them. LIGO observed these ripples for the first (and second) time, using two L-shaped detectors called interferometers designed to measure infinitesimal changes in distance. These changes were generated by two black holes that collided a billion years in the past, giving off gravitational waves that eventually passed through Earth. As they traveled through our planet, these gravitational waves stretched and shrank the 4-kilometer arms of the detectors.

The LIGO detectors can measure a change in distance about 10,000 times smaller than the diameter of a proton. Because the instruments are so sensitive, this also makes them prone to capturing other vibrations, such as earthquakes or heavy vehicles driving near the detectors. Equipment fluctuations can also create noise.

The noise, also called a glitch, can move the arms of the detector and potentially mimic an astrophysical signal.

The two detectors are located nearly 2000 miles apart, one in Louisiana and the other in Washington state. Gravitational waves from astrophysical events will hit both detectors at nearly the same time, since gravitational waves travel straight through Earth at the speed of light. However, the distance between the two makes it unlikely that other types of vibrations will be felt simultaneously.

“But that’s really not enough,” says Mike Zevin, a physics and astronomy graduate student at Northwestern University and a member of the Gravity Spy science team. “Glitches happen often enough that similar vibrations can appear in both detectors at nearly the same time. The glitches can tarnish the data and make it unusable.”

Gravity Spy enlists the help of volunteers to analyze noise that appears in LIGO detectors.

This information is converted to an image called a spectrogram, whose patterns show the time and frequency content of the noise. Shifts in blue, green and yellow indicate the loudness of the glitch, or how much the noise moved the arms of the detector. The glitches show up frequently in the large amount of information generated by the detectors.
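For readers who want to see what such an image is, here is a minimal, hypothetical sketch (not the actual Gravity Spy or LIGO pipeline; the sample rate and injected signal are invented for illustration) that turns a noisy time series containing a short upward-sweeping "chirp" into a spectrogram with scipy:

import numpy as np
from scipy.signal import spectrogram
import matplotlib.pyplot as plt

# Hypothetical stand-in for detector data: broadband noise plus a brief "chirp"
# whose frequency sweeps upward, loosely mimicking a transient signal or glitch.
fs = 4096                                      # sample rate in Hz (an assumption)
t = np.arange(0, 4.0, 1 / fs)
noise = np.random.normal(scale=1.0, size=t.size)
chirp = np.where((t > 1.5) & (t < 2.0),
                 np.sin(2 * np.pi * (50 + 200 * (t - 1.5)) * (t - 1.5)),
                 0.0)
x = noise + 5 * chirp

# Time-frequency representation: rows are frequencies, columns are time bins.
f, tt, Sxx = spectrogram(x, fs=fs, nperseg=256, noverlap=192)

plt.pcolormesh(tt, f, 10 * np.log10(Sxx + 1e-12))   # log power, as in typical glitch images
plt.ylabel("Frequency [Hz]")
plt.xlabel("Time [s]")
plt.ylim(0, 512)
plt.show()

In the real project the input is detector strain data processed with LIGO's own tools; the sketch only illustrates the time-frequency idea behind the images the volunteers classify.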

“Some of these glitches in the spectrograms are easily identified by computers, while others aren’t,” Zevin says. “Humans are actually better at spotting new patterns in the images.”

The Gravity Spy volunteers are tasked with labeling these hard-to-identify categories of glitches. In addition, the information is used to create training sets for computer algorithms.

As the training sets grow larger, the computers become better at classifying glitches. That can help scientists eliminate the noise from the detectors or find ways to account for glitches as they look at the data.

“One of our goals is to create a new way of doing citizen science that scales with the big-data era we live in now,” Zevin says.

Gravity Spy is a collaboration between Adler Planetarium, California State University-Fullerton, Northwestern University, Syracuse University, University of Alabama at Huntsville, and Zooniverse. The project is supported by an interdisciplinary grant from the National Science Foundation.

About 1400 people volunteered for initial tests of Gravity Spy. Once the beta testing of Gravity Spy is complete, the volunteers will look at new images created when LIGO begins to collect data during its second observing run.

Artwork by Sandbox Studio, Chicago with Ana Kova

A human endeavor

The project also provides an avenue for human-computer interaction research.

Another goal for Gravity Spy is to learn the best ways to keep citizen scientists motivated while looking at immense data sets, says Carsten Oesterlund, information studies professor at Syracuse University and member of the Gravity Spy research team.

“What is really exciting from our perspective is that we can look at how human learning and machine learning can go hand-in-hand,” Oesterlund says. “While the humans are training the machines, how can we organize the task to also facilitate human learning? We don’t want them simply looking at image after image. We want developmental opportunities for the volunteers.”

The researchers are examining how to encourage the citizen scientists to collaborate as a team. They also want to support new discoveries, or make it easier for people to find unique sets of glitches.

One test involves incentives—in an earlier study, the computing researchers found that if a volunteer knows that they are the first to classify an image, they go on to classify more images.

“We’ve found that the sense of novelty is actually quite motivating,” says Kevin Crowston, a member of the Gravity Spy science team and associate dean for research at Syracuse University’s School of Information Studies.

Almost every day, Téglás works on the Gravity Spy project. When she has spare time, she sits down at her computer and looks at glitches. Since April, she’s classified nearly 15,000 glitches and assisted other volunteers with hundreds of additional images through talk forums on Zooniverse.

She’s pleased that her professional skills developed while inspecting genetics data can also help many citizen science projects.

On her first day with Gravity Spy, Téglás helped identify a new type of glitch. Later, she classified another unique glitch called “paired doves” after its repeating, chirp-like patterns, which closely mimic the signal created by binary black holes. She’s also found several new variations of known glitches. Her work is recognized in LIGO’s log, and the newly found glitches are now part of the official workflow for the experiment.

Different experiences, backgrounds and ways of thinking can make citizen science projects stronger, she says.

“For this project, you’re not only using your eyes,” Téglás says. “It’s also an opportunity to understand an important experiment in modern science.”

by Amanda Solliday at October 12, 2016 04:44 PM

October 11, 2016

Symmetrybreaking - Fermilab/SLAC

Recruiting team geoneutrino

Physicists and geologists are forming a new partnership to study particles from inside the planet.

The Earth is like a hybrid car. 

Deep under its surface, it has two major fuel tanks. One is powered by dissipating primordial energy left over from the planet’s formation. The other is powered by the heat that comes from radioactive decay. 

We have only a shaky understanding of these heat sources, says William McDonough, a geologist at the University of Maryland. “We don’t have a fuel gauge on either one of them. So we’re trying to unravel that.” 

One way to do it is to study geoneutrinos, a byproduct of the process that burns Earth’s fuel. Neutrinos rarely interact with other matter, so these particles can travel straight from within the Earth to its surface and beyond. 

Geoneutrinos hold clues as to how much radioactive material the Earth contains. Knowing that could lead to insights about how our planet formed and its modern-day dynamics. In addition, the heat from radioactive decay plays a key role in driving plate tectonics. Understanding the composition of the planet and the motion of the plates could help geologists model seismic activity.

To effectively study geoneutrinos, scientists need knowledge both of elementary particles and of the Earth itself. The problem, McDonough says, is that very few geologists understand particle physics, and very few particle physicists understand geology. That’s why physicists and geologists have begun coming together to build an interdisciplinary community. 

“There’s really a need for a beyond-superficial understanding of the physics for the geologists and likewise a nonsuperficial understanding of the Earth by the physicists,” McDonough says, “and the more that we talk to each other, the better off we are.” 

There are hurdles to overcome in order to get to that conversation, says Livia Ludhova, a neutrino physicist and geologist affiliated with Forschungszentrum Jülich and RWTH Aachen University in Germany. “I think the biggest challenge is to make a common dictionary and common understanding—to get a common language. At the basic level, there are questions on each side which can appear very naïve.”

In July, McDonough, Ludhova and Gianpaolo Bellini, emeritus scientist of the Italian National Institute of Nuclear Physics and retired physics professor at the University of Milan, organized a summer institute for geology and physics graduate students to bridge the divide.

“In general, geology is more descriptive,” Bellini says. “Physics is more structured.” 

This can be especially troublesome when it comes to numerical results, since most geologists are not used to working with the defined errors that are so important in particle physics. 

At the summer institute, students began with a sort of remedial “preschool,” in which geologists were taught how to interpret physical uncertainty and the basics of elementary particles and physicists were taught about Earth’s interior. Once they gained basic knowledge of one another’s fields, the scientists could begin to work together.

This is far from the first interdisciplinary community within science or even particle physics. Ludhova likens it to the field of radiology: There is one expert to take an X-ray and another to determine a plan of action once all the information is clear. Similarly, particle physicists know how to take the necessary measurements, and geologists know what kinds of questions they could answer about our planet.

Right now, only two major experiments are looking for geoneutrinos: KamLAND at the Kamioka Observatory in Japan and Borexino at the Gran Sasso National Laboratory in Italy. Between the two of them, these observatories detect fewer than 20 geoneutrinos a year. 

Because of the limited results, geoneutrino physics is by necessity a small discipline: According to McDonough, there are only about 25 active neutrino researchers with a deep knowledge of both geology and physics.

Over the next decade, though, several more neutrino detectors are anticipated, some of which will be much larger than KamLAND or Borexino. The Jiangmen Underground Neutrino Observatory (JUNO) in China, for example, should be ready in 2020. Whereas Borexino’s detector is made up of 300 tons of active material, and KamLAND’s contains 1000, JUNO’s will have 20,000 tons.

The influx of data over the next decade will allow the community to emerge into the larger scientific scene, Bellini says. “There are some people who say ‘now this is a new era of science’—I think that is exaggerated. But I do think that we have opened a new chapter of science in which we use the methods of particle physics to study the Earth.”

by Leah Crane at October 11, 2016 01:00 PM

October 10, 2016

The n-Category Cafe

Jobs at Edinburgh

I’m pleased to announce that we’re advertising two Lectureships in “algebra, geometry & topology and related fields such as category theory and mathematical physics”. Come and join us! We’re a happy, well-resourced department with a very positive atmosphere. The algebra/geometry/topology group provides an excellent home for a category theorist.

To be clear, these positions are for practical purposes permanent, i.e. as close as the UK gets to tenure. There’s no one-to-one correspondence between UK and US job titles, so I’ll just say that Lecturer is the usual starting position for someone in their first permanent job, followed by Senior Lecturer, Reader, then Professor. The ad adds “Exceptionally, the appointments may be to Readership”.

by leinster at October 10, 2016 05:33 PM

ZapperZ - Physics and Physicists

Physics In "Doctor Strange"
Adam Frank, a physics professor at the University of Rochester, talks about being a consultant for the upcoming Marvel movie "Doctor Strange".

I suppose the biggest and most dicey issue that he had to deal with is how to handle "consciousness", because as he stated, we actually do not have a concrete description of it. This is where many movies, and many pseudoscientists, allow themselves wide liberty in abusing the concept.

I will see "Doctor Strange" when it comes up, and I'll see for myself how the movie deals with this.


by ZapperZ at October 10, 2016 03:41 PM

October 07, 2016

John Baez - Azimuth

Kosterlitz–Thouless Transition

Three days ago, the 2016 Nobel Prize in Physics was awarded to Michael Kosterlitz of Brown University:


David Thouless of the University of Washington:


and Duncan Haldane of Princeton University:


They won it for their “theoretical discovery of topological phase transitions and topological phases of matter”, which was later confirmed by many experiments.

Sadly, the world’s reaction was aptly summarized by Wired magazine’s headline:

Nobel Prize in Physics Goes to Another Weird Thing Nobody Understands

Journalists worldwide struggled to pronounce ‘topology’, and a member of the Nobel prize committee was reduced to waving around a bagel and a danish to explain what the word means:

That’s fine as far as it goes: I’m all for using food items to explain advanced math! However, it doesn’t explain what Kosterlitz, Thouless and Haldane actually did. I think a 3-minute video with the right animations would make the beauty of their work perfectly clear. I can see it in my head. Alas, I don’t have the skill to make those animations—hence this short article.

I’ll just explain the Kosterlitz–Thouless transition, which is an effect that shows up in thin films of magnetic material. Haldane’s work on magnetic wires is related, but it deserves a separate story.

I’m going to keep this very quick! For more details, try this excellent blog article:

• Brian Skinner, Samuel Beckett’s guide to particles and antiparticles, Ribbonfarm, 24 September 2015.

I’m taking all my pictures from there.

The idea

Imagine a thin film of stuff where each atom’s spin likes to point in the same direction as its neighbors. Also suppose that each spin must point in the plane of the material.

Your stuff will be happiest when all its spins are lined up, like this:


What does ‘happy’ mean? Physicists often talk this way. It sounds odd, but it means something precise: it means that the energy is low. When your stuff is very cold, its energy will be as low as possible, so the spins will line up.

When you heat up your thin film, it gets a bit more energy, so the spins can do more interesting things.

Here’s one interesting possibility, called a ‘vortex’:


The spins swirl around like the flow of water in a whirlpool. Each spin is fairly close to being lined up to its neighbors, except near the middle where they’re doing a terrible job.

The total energy of a vortex is enormous. The reason is not the problem at the middle, which certainly contributes some energy. The reason is that ‘fairly’ close is not good enough. The spins fail to perfectly line up with their neighbors even far away from the middle of this picture. This problem is bad enough to make the energy huge. (In fact, the energy would be infinite if our thin film of material went on forever.)

So, even if you heat up your substance, there won’t be enough energy to make many vortices. This made people think vortices were irrelevant.

But there’s another possibility, called an ‘antivortex’:


A single antivortex has a huge energy, just like a vortex. So again, it might seem antivortices are irrelevant if you’re wondering what your stuff will do when it has just a little energy.

But here’s what Kosterlitz and Thouless noticed: the combination of a vortex together with an antivortex has much less energy than either one alone! So, when your thin film of stuff is hot enough, the spins will form ‘vortex-antivortex pairs’.

Brian Skinner has made a beautiful animation showing how this happens. A vortex-antivortex pair can appear out of nothing:


… and then disappear again!

Thanks to this process, at low temperatures our thin film will contain a dilute ‘gas’ of vortex-antivortex pairs. Each vortex will stick to an antivortex, since it takes a lot of energy to separate them. These vortex-antivortex pairs act a bit like particles: they move around, bump into each other, and so on. But unlike most ordinary particles, they can appear out of nothing, or disappear, in the process shown above!

As you heat up the thin film, you get more and more vortex-antivortex pairs, since there’s more energy available to create them. But here’s the really surprising thing. Kosterlitz and Thouless showed that as you turn up the heat, there’s a certain temperature at which the vortex-antivortex pairs suddenly ‘unbind’ and break apart!

Why? Because at this point, the density of vortex-antivortex pairs is so high, and they’re bumping into each other so much, that we can’t tell which vortex is the partner of which antivortex. All we’ve got is a thick soup of vortices and antivortices!

What’s interesting is that this happens suddenly at some particular temperature. It’s a bit like how ice suddenly turns into liquid water when it warms above its melting point. A sudden change in behavior like this is called a phase transition.

So, the Kosterlitz–Thouless transition is the sudden unbinding of the vortex-antivortex pairs as you heat up a thin film of stuff where the spins are confined to a plane and they like to line up.
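If you want a rough formula for where this happens, here is the standard back-of-the-envelope version of the argument, in my own notation: $J$ measures how strongly neighboring spins want to line up, $R$ is the size of the film, and $a$ is the atomic spacing. A single free vortex costs an energy that grows logarithmically with the size of the film,

$$E \approx \pi J \ln(R/a),$$

but there are roughly $(R/a)^2$ places to put it, giving an entropy

$$S \approx 2 k_B \ln(R/a),$$

so the free energy of one free vortex is

$$F = E - T S \approx (\pi J - 2 k_B T)\,\ln(R/a),$$

which flips sign at roughly $k_B T \approx \pi J/2$. Below that temperature free vortices are prohibitively expensive and only tightly bound pairs appear; above it, entropy wins and the pairs unbind. This crude estimate ignores the interactions between vortices and the renormalization of $J$, so it does not give the precise transition temperature, but it captures why there is a sharp transition at a nonzero temperature.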

In fact, the pictures above are relevant to many other situations, like thin films of superconductive materials. So, these too can exhibit a Kosterlitz–Thouless transition. Indeed, the work of Kosterlitz and Thouless was the key that unlocked a treasure room full of strange new states of matter, called ‘topological phases’. But this is another story.


What is the actual definition of a vortex or antivortex? As you march around either one and look at the little arrows, the arrows turn around—one full turn. It’s a vortex if when you walk around it clockwise the little arrows make a full turn clockwise:


It’s an antivortex if when you walk around it clockwise the little arrows make a full turn counterclockwise:


Topologists would say the vortex has ‘winding number’ 1, while the antivortex has winding number -1.

In the physics, the winding number is very important. Any collection of vortex-antivortex pairs has winding number 0, and Kosterlitz and Thouless showed that situations with winding number 0 are the only ones with small enough energy to be important for a large thin film at rather low temperatures.
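If you like to compute, here is a minimal sketch (my own, with my own sign conventions, not taken from the original papers) of how one can read off the winding number from a loop of spin angles: walk once around the loop and add up the angle differences, each wrapped into the interval [-π, π).

import numpy as np

def winding_number(angles_on_loop):
    # angles_on_loop: spin angles (in radians) listed in order as you walk once
    # around a closed loop; returns how many net turns the spins make.
    total = 0.0
    n = len(angles_on_loop)
    for i in range(n):
        d = angles_on_loop[(i + 1) % n] - angles_on_loop[i]
        d = (d + np.pi) % (2 * np.pi) - np.pi   # wrap each step into [-pi, pi)
        total += d
    return int(round(total / (2 * np.pi)))

loop = np.linspace(0, 2 * np.pi, 8, endpoint=False)
print(winding_number(loop))          # +1: the angle makes one full turn (a vortex)
print(winding_number(-loop))         # -1: the angle makes one full turn the other way (an antivortex)
print(winding_number(np.zeros(8)))   # 0: aligned spins, no vortex

On a lattice with periodic boundary conditions these plaquette winding numbers always add up to zero, which matches the statement above that only configurations with total winding number 0 matter.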

Now for the puzzles:

Puzzle 1: What’s the mirror image of a vortex? A vortex, or an antivortex?

Puzzle 2: What’s the mirror image of an antivortex?

Here are some clues, drawn by the science fiction writer Greg Egan:

and the mathematician Simon Willerton:

For more

To dig a bit deeper, try this:

• The Nobel Prize in Physics 2016, Topological phase transitions and topological phases of matter.

It’s a very well-written summary of what Kosterlitz, Thouless and Haldane did.

Also, check out Simon Burton’s simulation of the system Kosterlitz and Thouless were studying:

In this simulation the spins start out at random and then evolve towards equilibrium at a temperature far below the Kosterlitz–Thouless transition. When equilibrium is reached, we have a gas of vortex-antivortex pairs. Vortices are labeled in blue while antivortices are green (though this is not totally accurate because the lattice is discrete). Burton says that if we raise the temperature to the Kosterlitz–Thouless transition, the movie becomes ‘a big mess’. That’s just what we’d expect as the vortex-antivortex pairs unbind.
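Burton's own code isn't reproduced here, but the system is just the two-dimensional XY model, and a bare-bones version of such a simulation is short. The following is a rough sketch under my own simplifying choices (square lattice, nearest-neighbor coupling J = 1, single-spin Metropolis updates), not his actual program:

import numpy as np

# 2D XY model: spins are angles theta on an L x L lattice with periodic boundaries,
# energy E = -J * sum over nearest-neighbor pairs of cos(theta_i - theta_j).
rng = np.random.default_rng(0)
L, J, T, sweeps = 24, 1.0, 0.4, 500      # T chosen well below the KT transition (roughly 0.9 J)
theta = rng.uniform(0, 2 * np.pi, size=(L, L))

def local_energy(th, i, j):
    e = 0.0
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        e -= J * np.cos(th[i, j] - th[(i + di) % L, (j + dj) % L])
    return e

for _ in range(sweeps):
    for _ in range(L * L):
        i, j = rng.integers(L), rng.integers(L)
        old_angle, old_e = theta[i, j], local_energy(theta, i, j)
        theta[i, j] = rng.uniform(0, 2 * np.pi)       # propose a new spin direction
        dE = local_energy(theta, i, j) - old_e
        if dE > 0 and rng.random() > np.exp(-dE / T):
            theta[i, j] = old_angle                   # reject the move

# Count vortices and antivortices: winding of theta around each elementary square.
def wrap(x):
    return np.angle(np.exp(1j * x))                   # wrap angle differences into (-pi, pi]

def plaquette_winding(th):
    a = th
    b = np.roll(th, -1, axis=0)
    c = np.roll(b, -1, axis=1)
    d = np.roll(th, -1, axis=1)
    return np.rint((wrap(b - a) + wrap(c - b) + wrap(d - c) + wrap(a - d)) / (2 * np.pi))

w = plaquette_winding(theta)
print("vortices:", int((w > 0).sum()), "antivortices:", int((w < 0).sum()))

At this low temperature the two counts should be small and equal in number, reflecting tightly bound pairs; raising T toward and beyond the transition makes them grow rapidly, which is the ‘big mess’ mentioned above.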

I thank Greg Egan, Simon Burton, Brian Skinner, Simon Willerton and Haitao Zhang, whose work made this blog article infinitely better than it otherwise would be.

by John Baez at October 07, 2016 08:05 PM

Symmetrybreaking - Fermilab/SLAC

Hunting the nearly un-huntable

The MINOS and Daya Bay experiments weigh in on the search for sterile neutrinos.

In the 1990s, the Liquid Scintillator Neutrino Detector (LSND) experiment at Los Alamos National Laboratory saw intriguing hints of an undiscovered type of particle, one that (as of yet) cannot be detected. In 2007, the MiniBooNE experiment at the US Department of Energy’s Fermi National Accelerator Laboratory followed up and found a similar anomaly.

Today scientists on two more neutrino experiments—the MINOS experiment at Fermilab and the Daya Bay experiment in China—entered the discussion, presenting results that limit the places where these particles, called sterile neutrinos, might be hiding.

“This combined result was a two-year effort between our collaborations,” says MINOS scientist Justin Evans of the University of Manchester. “Together we’ve set what we believe is a very strong limit on a very intriguing possibility.” 

In three separate papers—two published individually by MINOS and Daya Bay and one jointly, all in Physical Review Letters—scientists on the two experiments detail the results of their hunt for sterile neutrinos.

Both experiments are designed to see evidence of neutrinos changing, or oscillating, from one type to another. Scientists have so far observed three types of neutrinos, and have detected them changing between those three types, a discovery that was awarded the 2015 Nobel Prize in physics.

What the LSND and MiniBooNE experiments saw—an excess of electron neutrino-like signals—could be explained by a two-step change: muon neutrinos morphing into sterile neutrinos, then into electron neutrinos. MINOS and Daya Bay measured the rate of these steps using different techniques.

MINOS, which is fed by Fermilab’s neutrino beam—the most powerful in the world—looks for the disappearance of muon neutrinos. MINOS can also calculate how often muon neutrinos should transform into the other two known types and can infer from that how often they could be changing into a fourth type that can’t be observed by the MINOS detector.

Daya Bay performed a similar observation with electron anti-neutrinos (assumed, for the purposes of this study, to behave in the same way as electron neutrinos).

The combination of the two experiments’ data (and calculations based thereon) cannot account for the apparent excess of neutrino-like signals observed by LSND. That along with a reanalysis of results from Bugey, an older experiment in France, leaves only a very small region where sterile neutrinos related to the LSND anomaly could be hiding, according to scientists on both projects.

“There’s a very small parameter space left that the LSND signal could correspond to,” says Alex Sousa of the University of Cincinnati, one of the MINOS scientists who worked on this result. “We can’t say that these light sterile neutrinos don’t exist, but the space where we might find them oscillating into the neutrinos we know is getting narrower.”

Both Daya Bay and MINOS’ successor experiment, MINOS+, have already taken more data than was used in the analysis here. MINOS+ has completely analyzed only half of its collected data to date, and Daya Bay plans to quadruple its current data set. The potential reach of the final joint effort, says Kam-Biu Luk, co-spokesperson of the Daya Bay experiment, “could be pretty definitive.”

The IceCube collaboration, which measures atmospheric neutrinos with a detector deep under the Antarctic ice, recently conducted a similar search for sterile neutrinos and also came up empty.

All of this might seem like bad news for fans of sterile neutrinos, but according to theorist André de Gouvea of Northwestern University, the hypothesis is still alive.

Sterile neutrinos are “still the best new physics explanation for the LSND anomaly that we can probe, even though that explanation doesn’t work very well,” de Gouvea says. “The important thing to remember is that these results from MINOS, Daya Bay, Ice Cube and others don’t rule out the concept of sterile neutrinos, as they may be found elsewhere.”  

Theorists have predicted the existence of sterile neutrinos based on anomalous results from several different experiments. The results from MINOS and Daya Bay address the sterile neutrinos predicted based on the LSND and MiniBooNE anomalies. Theorists predict other types of sterile neutrinos to explain anomalies in reactor experiments and in experiments using the chemical gallium. Much more massive types of sterile neutrinos would help explain why the neutrinos we know are so very light and how the universe came to be filled with more matter than antimatter.

Searches for sterile neutrinos have focused on the LSND neutrino excess, de Gouvea says, because it provides a place to look. If that particular anomaly is ruled out as a key to finding these nigh-undetectable particles, then they could be hiding almost anywhere, leaving no clues. “Even if sterile neutrinos do not explain the LSND anomaly, their existence is still a logical possibility, and looking for them is always interesting,” de Gouvea says.

Scientists around the world are preparing to search for sterile neutrinos in different ways.

Fermilab is preparing a three-detector suite of short-baseline experiments dedicated to nailing down the cause of both the LSND anomaly and an excess of electrons seen in the MiniBooNE experiment. These liquid-argon detectors will search for the appearance of electron neutrinos, a method de Gouvea says is a more direct way of addressing the LSND anomaly. One of those detectors, MicroBooNE, is specifically chasing down the MiniBooNE excess.

Scientists at Oak Ridge National Laboratory are preparing the Precision Oscillation and Spectrum Experiment (PROSPECT), which will search for sterile neutrinos generated by a nuclear reactor. CERN’s SHiP experiment, which stands for Search for Hidden Particles, is expected to look for sterile neutrinos with much higher predicted masses.

Obtaining a definitive answer to the sterile neutrino question is important, Evans says, because the existence (or non-existence) of these particles might impact how scientists interpret the data collected in other neutrino experiments, including Japan’s T2K, the United States’ NOvA, the forthcoming DUNE, and other future projects. DUNE in particular will be able to look for sterile neutrinos across a broad spectrum, and evidence of a fourth kind of neutrino would enhance its already rich scientific program.

“It’s absolutely vital that we get this question resolved,” Evans says. “Whichever way it goes, it will be a crucial part of neutrino experiments in the future.” 

by Andre Salles at October 07, 2016 04:56 PM

Tommaso Dorigo - Scientificblogging

Horse Dung In The Detector, And Other Stories

The text below is part of a chapter of "Anomaly!" which I eventually removed from the book, mainly due to the strict page limit set by my publisher. It is a chapter that discusses the preparations for Run 2 of the Fermilab Tevatron, which started in 2002 and lasted almost 10 years. There were many, many stories connected to the construction of the CDF II detector, and it is a real pity that they did not get included in the book. So at least I can offer some of them here for your entertainment... [A disclaimer: the text has not been proofread and is in its initial, uncorrected state.]


by Tommaso Dorigo at October 07, 2016 02:24 PM

Robert Helling - atdotde

My two cents on this year's physics Nobel prize
This year's Nobel prize is given for quite abstract concepts. So the popular science outlets struggle in giving good explanations for what it is awarded for. I cannot add anything to this, but over at math overflow, mathematicians asked for a mathematical explanation. So here is my go of an outline for people familiar with topology but not so much physics:

Let me try to give a brief explanation: All this is in the context of Fermi liquid theory, the idea that you can describe the low energy physics of these kinds of systems by pretending they are generated by free fermions in an external potential. So, all you need to do is to solve the single particle problem for the external potential and then fill up the energy levels from the bottom until you reach the total particle number (or actually the density). It is tempting (and conventional) to call these particles electrons, and I will do so here, but of course actual electrons are not free but interacting. This "Fermi Liquid" explanation is just an effective description at long wavelengths (the IR end of the renormalization group flow), where it turns out that at those scales the interactions play no role (they are "irrelevant operators" in the language of the renormalization group).

The upshot is, we are dealing with free "electrons" and the previous paragraph was only essential if you want to connect to the physical world (but this is MATH overflow anyway).

Since the external potential comes from a lattice (crystal) it is invariant under lattice translations. So Bloch theory tells you, you can restrict your attention as far as solving the Schrödinger equation to wave functions living in the unit cell of the lattice. But you need to allow for quasi-periodic boundary conditions, i.e. when you go once around the unit cell you are allowed to pick up a phase. In fact, there is one phase for each generator of the first homotopy group of the unit cell. Each choice of these phases corresponds to one choice of boundary conditions for the wave function and you can compute the eigenvalues of the Hamiltonian for these given boundary conditions (the unit cell is compact so we expect discrete eigenvalues, bounded from below).

But these eigenvalues depend on the boundary conditions and you can think of them as functions of the phases. Actually, when going once around an irreducible cycle of the torus, not all eigenvalues have to come back to themselves; you can end up with a permutation, in which case this is not really a function but a section of a bundle. But let's not worry too much about this, as generally this "level crossing" does not happen in two dimensions and occurs only at discrete points in 3D (this is Witten's argument with the 2x2 Hamiltonian above).

The torus of possible phases is called the "Brillouin zone" by physicists, and its elements "reciprocal lattice vectors" (as you can think of the Brillouin zone as obtained by modding out by the dual lattice of the lattice we started with).

Now, if your electron density is N electrons per unit cell of the lattice, Fermi liquid theory asks you to think of the lowest N energy levels as occupied. This is the "Fermi level", or more precisely the graph of the N-th eigenvalue over the Brillouin zone. This graph (viewed as a hypersurface) can have non-trivial topology, and the idea is that under small perturbations of the system (like changing the doping of the physical sample, the pressure, an external magnetic field, or whatever) everything behaves continuously, so the homotopy class cannot change and is thus robust (or "topological", as a physicist would say).

If we want to inquire about the quantum Hall effect, this picture is also useful: The Hall conductivity can be computed to leading order by linear response theory. This allows us to employ the Kubo formula to compute it as a certain two-point function or retarded Green's function. The relevant operators turn out to be related to the N-th level wave function and how it changes when we move around in the Brillouin zone: If we denote by u the coordinates of the Brillouin zone and by $\psi_u(x)$ the N-th eigenfunction for the boundary conditions implied by u, we can define a 1-form
$$ A = \sum_i \langle \psi_u|\partial_{u_i}|\psi_u\rangle\, du^i = \langle\psi_u|d_u|\psi_u\rangle.$$
This 1-form is actually the connection of a U(1) bundle, and the expression the Kubo formula asks us to compute turns out to be the first Chern number of that bundle (over the Brillouin zone).
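In formulas, and filling in the standard statement that the post leaves implicit (this is the usual TKNN relation, not something specific to this argument): with the Berry connection $A$ defined above, the curvature is $F = d_u A$, and
$$ C_1 = \frac{1}{2\pi i}\int_{T^2} F \;\in\; \mathbb{Z}, \qquad \sigma_{xy} = \frac{e^2}{h}\, C_1, $$
so the Hall conductivity comes in integer multiples of $e^2/h$.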

Again, that number, being an integer, cannot change upon small perturbations of the physical system, and this is the explanation of the quantized levels (plateaus) in the QHE.

In modern applications, an important role is played by the (N-dimensional and thus finite-dimensional) projector onto the subspace of Hilbert space spanned by the eigenfunctions corresponding to the N lowest eigenvalues, again fibered over the Brillouin zone. Then one can use K-theory (and in fact KO-theory) associated with this projector to classify the possible classes of Fermi surfaces (these are the "topological phases of matter"; eventually, when the perturbation becomes too strong, even the discrete invariants can jump, which physically corresponds to a phase transition).

by Robert Helling at October 07, 2016 10:07 AM

October 06, 2016

ZapperZ - Physics and Physicists

Detecting Particles By Seeing Them Move Faster Than Light
No, this is not a post about superluminal particles. Rather, it is about how we detect particles that move faster than light in a medium, i.e. by observing their Cherenkov radiation.

But photons only move at that perfect speed-of-light (c) if they’re in a vacuum, or the complete emptiness of space. Put one in a medium — like water, glass, or acrylic — and they’ll move at the speed of light in that medium, which is less than 299,792,458 m/s by quite a bit. Even air, which is pretty close to a vacuum, slows down light by 0.03% from its maximum possible speed. This isn’t that much, but it does mean something remarkable: these high-energy particles that come into the atmosphere are now moving faster than light in that medium, which means they emit a special type of radiation known as Cherenkov radiation.

The article listed several detectors that make use of this effect, but it is missing A LOT more. Practically all neutrino detectors use this principle (e.g. Super-Kamiokande). The Auger Observatory also looks for this Cherenkov radiation.

But the part that I think should fascinate the layperson is where the speeds of various things are listed, to a remarkable number of decimal places:

It’s true that Einstein had it right all the way back in 1905: there is a maximum speed to anything in the Universe, and that speed is the speed of light in a vacuum (c), 299,792,458 m/s. Cosmic ray particles can go faster than anything on Earth, even at the LHC. Here’s a fun list of how fast various particles can go at a variety of accelerators, and from space:
  • 980 GeV: fastest Fermilab proton, 0.99999954c, 299,792,320 m/s.
  • 6.5 TeV: fastest LHC proton, 0.9999999896c, 299,792,455 m/s.
  • 104.5 GeV: fastest LEP electron (fastest accelerator particle ever), 0.999999999988c, 299,792,457.9964 m/s.
  • 5 x 10^19 eV: highest energy cosmic rays ever (assumed to be protons), 0.99999999999999999999973c, 299,792,457.999999999999918 m/s.
Just notice how much energy we had to put into, say, the proton to go from 0.99999954c to 0.9999999896c. And then notice how high an energy cosmic rays have compared to the LHC. If these kinds of collision energies could create "catastrophic black holes", we would have been gone by now, thankyouverymuch!
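As a quick sanity check on those numbers (an editorial sketch, not part of the quoted article): the speed follows from a particle's total energy E and rest mass m via gamma = E/(m c^2), and for ultra-relativistic particles it is numerically safer to compute the speed deficit as 1 - beta ≈ 1/(2 gamma^2) rather than evaluating sqrt(1 - 1/gamma^2) directly.

```python
# Illustrative check of the quoted speeds (not from the article).
C = 299_792_458.0  # speed of light in m/s

def speed_deficit(energy_gev, mass_gev):
    """Return c - v in m/s for a particle of given total energy and rest mass (GeV).

    For large gamma, 1 - beta is about 1/(2*gamma**2); computing it this way avoids
    the rounding error of sqrt(1 - 1/gamma**2) when beta is extremely close to 1.
    """
    gamma = energy_gev / mass_gev
    return C / (2.0 * gamma**2)

M_PROTON = 0.938272      # GeV
M_ELECTRON = 0.000511    # GeV

for label, energy, mass in [
    ("Tevatron proton, 980 GeV", 980.0, M_PROTON),
    ("LHC proton, 6.5 TeV", 6500.0, M_PROTON),
    ("LEP electron, 104.5 GeV", 104.5, M_ELECTRON),
    ("cosmic-ray proton, 5 x 10^19 eV", 5e10, M_PROTON),
]:
    print(f"{label}: v = c - {speed_deficit(energy, mass):.3g} m/s")
```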


by ZapperZ at October 06, 2016 06:07 PM

Lubos Motl - string vacua and pheno

Stable AdS flux vacua must be supersymmetric: a conjecture
Famous physicists Hiroši Ooguri and Cumrun Vafa proposed a new branch of the Swampland program in their new paper
Non-supersymmetric AdS and the Swampland
In 2005, Cumrun Vafa coined the term swampland to describe would-be theories or their low-energy effective field theory limits that look consistent according to the rules of effective quantum field theory but that are banned according to the more stringent rules of string theory or quantum gravity (which are ultimately equivalent concepts) i.e. that have no realization within string/M-theory.

The swampland (TRF) is the "larger" but messier realm surrounding the stringy landscape. The swampland shouldn't be confused with the related but inequivalent technical notion of the part of the Internet and media that is critical towards string theory. It's not called a "swampland" but rather a "cesspool" and the technical term for the individuals in the cesspool is "scumbags". The largest and stinkiest two scumbags are known as "Šmoits" but I don't want to overwhelm this blog post with the review of the standard terminology.

The extra constraints imposed by string/M-theory may be interpreted as "general predictions of string/M-theory". They're usually qualitative. Our weak gravity conjecture is the most intensely studied example of such extra constraints. It says that there have to exist particles light enough so that the repulsive electric force between them trumps the gravitational attraction. In this sense, gravity is the weakest force and it has to be.
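To see just how weak gravity is in practice (an illustrative aside, not taken from the post): for two electrons, the Coulomb repulsion exceeds the gravitational attraction by roughly 42 orders of magnitude, so the lightest charged particle in our world satisfies this kind of bound with an enormous margin.

```python
import math

# Rounded SI constants, for illustration only
E_CHARGE = 1.602176634e-19   # electron charge, C
EPS0     = 8.8541878128e-12  # vacuum permittivity, F/m
G_NEWTON = 6.67430e-11       # Newton's constant, m^3 kg^-1 s^-2
M_E      = 9.1093837015e-31  # electron mass, kg

# Both forces fall off as 1/r^2, so the ratio does not depend on the separation.
ratio = (E_CHARGE**2 / (4 * math.pi * EPS0)) / (G_NEWTON * M_E**2)
print(f"Coulomb repulsion / gravitational attraction for two electrons: {ratio:.2e}")
# prints roughly 4.2e+42
```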

This statement may be justified by various arguments using many viewpoints and it generally seems nontrivial, yet at least much more correct than you would expect if it were a random guess. However, we couldn't settle – and the researchers still haven't agreed – on many of the details. Do the light charged particles have to exist for every type of electric force? For every direction in the charge space? For every site in the lattice of charges allowed by the Dirac quantization rules, and so on? Should the terms in the inequality be modified in some way?

And isn't there a deeper insight or structure from which the principle may be derived – much like Heisenberg's uncertainty relation (inequality) may be derived from the nonzero commutators?
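For reference, the commutator-based derivation alluded to here is the textbook Robertson inequality (standard material, quoted only for context):
$$ \Delta A\,\Delta B \;\ge\; \tfrac{1}{2}\,\bigl|\langle [A,B]\rangle\bigr|, \qquad\text{so that } [x,p]=i\hbar \text{ gives } \Delta x\,\Delta p \ge \tfrac{\hbar}{2}. $$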

In the new paper, Hiroši and Cumrun assume a somewhat stronger version of the weak gravity conjecture. It surely has to apply to every kind of an electric-like force, including forces between branes. If I understand well, they basically say that the weak gravity conjecture should imply the existence of branes of low enough tension, not just point-like charged particles, for every kind of a \(p\)-form field i.e. a generalization of electromagnetism with many indices.

Now, the existence of these charged branes has consequences for the flux vacua.

The flux vacua carry some nonzero values of \(\int_C F_p\), the integral of a field strength \(p\)-form over some \(p\)-cycle in the compactification manifold. These fluxes could be used as just some sourceless, electromagnetic-style fields. However, the weak gravity conjecture says that the charged sources actually have to exist and their tension has to be low enough. When it's true, it's being shown that such branes may nucleate inside the anti de Sitter space, get to the boundary in a finite time, and reduce the flux by a unit.

The whole vacuum spontaneously changes in this way. So the original vacuum we started with was unstable. The authors conclude that every anti de Sitter vacuum supported by fluxes has to be either unstable or supersymmetric. Supersymmetric vacua get an "exception" because the total attractive gravitational force exactly cancels against the repulsive generalized electromagnetic force – because of the well-known BPS relations. The BPS condition is nothing other than the example in which the defining inequality of the weak gravity conjecture is saturated – gravity is as strong as the non-gravitational force. Consequently, the spontaneous nucleation of the brane cannot occur with a finite probability and/or the brane isn't driven to the AdS boundary in a finite time.

It's also my understanding that all stable AdS flux vacua that have been found are supersymmetric. Well, my preferred example of a non-supersymmetric AdS vacuum with a CFT dual would be Witten's pure AdS3 gravity whose CFT dual carries the monster group symmetry (it only exists for the minimum radius so the curvature is "Planckian" and you might say that it's not a "full-fledged" example of a gravitational theory with a nearly flat space). Either the low spacetime dimension or the absence of the appropriate "flux" is probably what allows an exception for this case. I am not sure about the precise logic here.

You could say that even if it is true, the derived restriction doesn't have "practical" consequences because vacua may be unstable but very long-lived. They argue that while the lifetime could be long for a CFT, the near-horizon dual geometry in the bulk sees the instability as a much stronger one. So the restriction is harsh and real.

If these statements are right – and ideally derivable in a more rigorous way – then you might interpret the result as a prediction of string theory. We have observed the world to be rather stable and non-supersymmetric – so it cannot be AdS. We may use the stability to predict that the cosmological constant is non-negative. It's about one predicted bit of information but it's a prediction nevertheless. Needless to say, the implications for our understanding of the inner structure of string/M-theory and its configuration space could be more far-reaching.

The conditions look conceptually different. Whether the gravitational force is stronger than another one seems to be just a "boring technicality". On the other hand, this technicality may imply, like in this case, that a whole class of candidate vacua – non-supersymmetric stable AdS flux vacua – is actually non-existent. So even previously overlooked technicalities such as the weak gravity conjecture may have the potential to solve the vacuum selection problem and other seemingly insurmountable hurdles.

by Luboš Motl at October 06, 2016 05:39 PM

October 05, 2016

Quantum Diaries

Solving the Measurement Problem (Guest Post)

The following is a guest posting from Ken Krechmer of the College of Engineering and Applied Science at the University of Colorado, Boulder.

Ken Krechmer


The dichotomy between quantum measurement theory and classical measurement results has been termed: the measurement disturbance, measurement collapse and the measurement problem.   Experimentally it is observed that the measurement of the position of one particle changes the momentum of the same particle instantaneously.  This is described as the measurement disturbance.  Quantum measurement theory calculates the probability of a measurement result but does not calculate an actual measurement result.  What occurs that causes the quantum measurement probability to collapse into a classical measurement result?  Different approaches have been proposed to resolve one or both of these issues including hidden variables, non-local variables and decoherence, but none of these approaches appear to fully resolve both these aspects of the measurement problem.

Further complicating this measurement problem: 1. The quantum effect called entanglement is another measurement disturbance, in which the measurement of one particle instantaneously impacts a similar measurement of another, far-remote particle. 2. The quantum effect called uncertainty defines the minimum variation between two measurement results and changes depending on the order of the two measurements.

Relational measurements and uncertainty,” also available at Measurement, resolves both aspects of the measurement problem by expanding the definition of a classical measurement to include sampling and calibration to a reference. Experimentally, it is well known that a measurement must be sampled and calibrated to a reference to establish a measurement result. This paper proves that the measurement collapse is due to the effect of sampling and calibration which is equal to the universal quantum measurement uncertainty.  The universal quantum measurement uncertainty has been verified in independent quantum experiments. Next, one quantum measurement is shown to instantaneously disturb another because one sampling and calibration process is applied to both measurement results.

 The paper resolves the dichotomy between quantum theory and classical measurement results, derives the quantum uncertainty relations using classical physics, unifies the measurement process across all scales and formally models calibration and sampling.


Ken Krechmer, University of Colorado (CU) Scholar in Residence, has taught a graduate level engineering course on standards and standardization at CU.  He authored prize winning papers on standards and standardization in 1995, 2000, 2006 and 2012. Krechmer co-founded the journal Communications Standards Review.  He was active in standardization committees in the ITU, ETSI, TIA, IEEE, and many consortia for over 20 years.  Krechmer is a Senior Member of the IEEE and a Member of Society of Engineering Standards.

by Quantum Diaries at October 05, 2016 02:06 PM

October 04, 2016

The n-Category Cafe

Mathematics Research Community in HoTT

I am delighted to announce that from June 4-10, 2017, there will be a workshop on homotopy type theory as one of the AMS’s Mathematical Research Communities.

The MRC program, whose workshops are held in the “breathtaking mountain setting” of Snowbird Resort in Utah,

nurtures early-career mathematicians — those who are close to finishing their doctorates or have recently finished — and provides them with opportunities to build social and collaborative networks to inspire and sustain each other in their work.

The organizers for the HoTT MRC include our fearless leader Chris Kapulkin, Dan Christensen, Dan Licata, Mike Shulman and myself.

The goal of this workshop is to bring together advanced graduate students and postdocs having some background in one (or more) areas such as algebraic topology, category theory, mathematical logic, or computer science, with the goal of learning how these areas come together in homotopy type theory, and working together to prove new results. Basic knowledge of just one of these areas will be sufficient to be a successful participant.

If you are a peridoctoral student (within a few years of your Ph.D. on either side) and are interested in HoTT, perhaps as a new research direction (as it is for me), please consider applying! The MRC program is run by the American Mathematical Society and therefore directed at U.S. citizens or students affiliated with U.S. institutions, though a few international participants or researchers beyond the targeted mathematical age range may be accepted on a case-by-case basis. Women and underrepresented minorities are especially encouraged to apply.

Even though the application deadline is not until March 1, we would appreciate it for planning purposes if interested folks could apply as soon as possible. I think this has the potential to be a really exciting week and a great way to “jump-start” a research career in HoTT.

The program description contains more information about potential research directions that participants will work on during the week of the workshop. The specific problems we work on, for instance in synthetic homotopy theory which is ripe with “low-hanging fruit”, will be tailored to the strengths and interests of those attending, which is another reason why it would be helpful to apply as soon as possible.

Feel free to direct questions to any of the organizers or ask them in the comments below.

by riehl at October 04, 2016 07:42 PM

Clifford V. Johnson - Asymptotia

The 2016 Physics Nobel Prize goes to…!

Wow! Topology in the mainstream news. I never thought I'd see the day. Congratulations to the winners! Citation:

The Nobel Prize in Physics 2016 was divided, one half awarded to David J. Thouless, the other half jointly to F. Duncan M. Haldane and J. Michael Kosterlitz "for theoretical discoveries of topological phase transitions and topological phases of matter".

Here is a link to the Nobel Prize site with more information, and also, here's a BBC breakdown of some of the science.

An important (to some) side note: Duncan Haldane was at USC when he wrote the cited papers. Great that USC was supportive of this kind of work, especially in that early part of his career.

-cvj Click to continue reading this post

The post The 2016 Physics Nobel Prize goes to…! appeared first on Asymptotia.

by Clifford at October 04, 2016 06:44 PM

Symmetrybreaking - Fermilab/SLAC

Creating the universe in a computer

Computer simulations help cosmologists unlock the mystery of how the universe evolved.

Astronomers face a unique problem. While scientists from most fields can conduct experiments—particle physicists build massive particle colliders to test their theories of subatomic material, and microbiologists probe the properties of microbes on petri dishes—astronomers cannot conduct experiments with the stars and planets. Even the most advanced telescopes can provide only snapshots of the cosmos, and very little changes during our lifetimes. 

Yet many questions remain, such as how the Milky Way formed, what dark matter is and the role of supermassive black holes at the center of galaxies. In an attempt to edge closer to answering these unsolved mysteries, some scientists have embarked on ambitious projects: creating virtual universes.

The EAGLE simulation shows how supermassive black holes help shape galaxies.

Courtesy of EAGLE

Evolving the cosmos 

The earliest observational evidence of the universe comes from the cosmic microwave background, the afterglow of the Big Bang. Computational cosmologists use these data to model the conditions at that time, when the universe was a few hundred thousand years old.

Then they add the basic ingredients: baryonic (or ordinary) matter, from which the stars and planets form; dark matter, which enables galactic structures to grow; and dark energy, the mysterious force behind cosmic acceleration. These are coded into a simulation along with equations that describe various physical processes such as supernova explosions and black holes. Cosmologists then wait as the simulation evolves: The virtual universe expands, gas condenses into small structures and eventually forms stars and galaxies.
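As a cartoon of what "evolving the simulation" means (a toy sketch only; production codes such as EAGLE and Illustris add hydrodynamics, cosmological expansion, star formation and feedback recipes, and billions of resolution elements), a gravitational N-body integrator repeatedly computes the forces between particles and advances their positions and velocities, for example with a leapfrog scheme:

```python
import numpy as np

def leapfrog_step(pos, vel, masses, dt, G=1.0, soft=0.05):
    """Advance a toy N-body system by one kick-drift-kick leapfrog step.

    pos, vel: (N, 3) arrays; masses: (N,) array; units are arbitrary.
    'soft' is a softening length that tames the force at small separations.
    """
    def accel(p):
        diff = p[np.newaxis, :, :] - p[:, np.newaxis, :]      # diff[i, j] = p_j - p_i
        inv_r3 = (np.sum(diff**2, axis=-1) + soft**2) ** -1.5
        np.fill_diagonal(inv_r3, 0.0)                          # no self-force
        return G * np.einsum('ij,ijk->ik', masses[np.newaxis, :] * inv_r3, diff)

    vel = vel + 0.5 * dt * accel(pos)   # kick
    pos = pos + dt * vel                # drift
    vel = vel + 0.5 * dt * accel(pos)   # kick
    return pos, vel

# a small random "box" of particles, evolved for a few steps
rng = np.random.default_rng(0)
pos = rng.uniform(-1.0, 1.0, size=(100, 3))
vel = np.zeros((100, 3))
masses = np.full(100, 1.0 / 100)
for _ in range(10):
    pos, vel = leapfrog_step(pos, vel, masses, dt=0.01)
```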

“The exciting thing is that if you do this, the universe that develops in a computer looks remarkably like the real universe,” says Joop Schaye of Leiden University and the principal investigator of the EAGLE (Evolution and Assembly of GaLaxies and their Environments) Project. “You get galaxies of all kinds of sizes and morphologies that look a lot like the real galaxies.”

A number of groups around the world are working on these simulations. In 2014, both the EAGLE Project and the Illustris Project, led by theoretical astrophysicist Mark Vogelsberger from MIT, made major steps forward with their groundbreaking, realistic universes. Both simulations are massive, covering a cubic volume around 300 million light-years on a side. They also require a hefty amount of computing power—just one complete run requires large supercomputers to run for months at a time. 

“What we ended up doing is running the big simulation once, but we want to understand why the universe behaved as it did,” says Richard Bower, a cosmologist at Durham University and member of the EAGLE Project. “So we’ve been running lots of other simulations where we change things a little bit.” 

These simulations have already revealed some interesting properties of evolving galaxies. Bower and his colleagues, for instance, discovered that the number and size of galaxies is dependent on a fine balance between supernovae and black holes. 

Using their simulation, they found that without supernovae, the universe created far too many galaxies. This is because without supernovae exploding, many small galaxies were not being blown apart. 

On the other hand, they found that including only supernovae made galaxies grow too massive—10 times the mass of the Milky Way. To manage the size of those galaxies, they needed to also include black holes. 

“The supernovae and the black holes are both kind of competing to use up the material that's supplied to the galaxy,” explains Bower. “Once the supernovae begin to wane, the black hole takes over, and it's the end of forming stars and the beginning of forming bigger and bigger black holes.”

Dark matter density (left) transitions to gas density (right).

Courtesy of Illustris

Zooming in  

There are two types of simulations in this field of study—representative volume simulations, which model huge volumes of the observable universe, and zoom simulations, which focus on individual galaxies or galaxy clusters. 

As astronomers collect more and more detailed snapshots of the universe, cosmologists such as Andrew Pontzen at the University College London are using zoom simulations to try to investigate the properties of individual galaxies at the same level of specificity. “We’re trying to push forward on understanding the individual galaxies in enough detail that we can make meaningful comparisons to this really cutting-edge data,” Pontzen says. 

To do so, Pontzen and his colleagues have developed a technique called genetic modification, which involves creating many different versions of galaxies. “It almost becomes like an experiment,” says Pontzen. “You have your control over how a particular object forms, and then you can say if it forms in this particular way, then the galaxy that comes out at the end looks like this.” For example, they can change the way that mass arrives in galaxies over time and see how it affects the galaxy that emerges. 

In a similar way, cosmologists working on larger-scale simulations can “turn the knobs” by changing certain variables—the laws of gravity or the properties of dark matter, for example—and see what the universe that emerges looks like. “I think what's very interesting is to try to constrain the properties of dark matter and dark energy through these simulations,” says Vogelsberger. “We don't know what they are, but by tweaking minor parameters of these models we can try to constrain the properties of dark matter or dark energy in more detail.”

These scientists also work closely with observers to compare how the simulations stack up against what is actually out there in the universe. “That’s the critical part,” says Pontzen. “We want to be able to link all of these things together.”

by Diana Kwon at October 04, 2016 04:43 PM

ZapperZ - Physics and Physicists

Nobel Prize Goes Vintage This Year
Wow. While it is certainly deserved, I didn't see this one coming, because I thought that ship had left the harbor a long time ago.

The Nobel committee decided to dig deep and went back in time to award the prize to 3 condensed matter physicists for work done in the early 70's. This year's prize goes to David Thouless, Duncan Haldane, and Michael Kosterlitz.

In the early 1970s, Kosterlitz and Thouless overturned the then-current theory that superconductivity could not occur in extremely thin layers.
"They demonstrated that superconductivity could occur at low temperatures and also explained the mechanism -- phase transition -- that makes superconductivity disappear at higher temperatures," explained the Foundation. 
Around a decade later, Haldane also studied matter that forms threads so thin they can be considered one-dimensional.

Any condensed matter student would have heard of the Haldane chain and the Kosterlitz-Thouless transition. These are textbook concepts that are now widely used and accepted. It certainly took them long enough to decide to award the prize to these people.

I wonder if the Nobel committee is delaying the prize for gravitational waves for another year to make sure the discovery is verified, and to narrow down the people to award it to. Just as with the award for the Higgs, there are several people, more than three, who could easily deserve the prize.


by ZapperZ at October 04, 2016 01:18 PM

October 03, 2016

Tommaso Dorigo - Scientificblogging

Anomaly! Book News

The first few copies of my new book, “Anomaly! – Collider Physics and the Quest for New Phenomena at Fermilab” arrived this morning from Singapore.


by Tommaso Dorigo at October 03, 2016 11:26 AM

October 02, 2016

John Baez - Azimuth

Complex Adaptive System Design (Part 1)

In January of this year, I was contacted by a company called Metron Scientific Solutions. They asked if I’d like to join them in a project to use category theory to design and evaluate complex, adaptive systems of systems.

What’s a ‘system of systems’?

It’s a system made of many disparate parts, each of which is a complex system in its own right. The biosphere is a system of systems. But so far, people usually use this buzzword for large human-engineered systems where the different components are made by different organizations, perhaps over a long period of time, with changing and/or incompatible standards. This makes it impossible to fine-tune everything in a top-down way and have everything fit together seamlessly.

So, systems of systems are inherently messy. And yet we need them.

Metron was applying for a grant from DARPA, the Defense Advanced Research Projects Agency, which funds a lot of cutting-edge research for the US military. It may seem surprising that DARPA is explicitly interested in using category theory to study systems of systems. But it actually shouldn’t be surprising: their mission is to try many things and find a few that work. They are willing to take risks.

Metron was applying for a grant under a DARPA program run by John S. Paschkewitz, who is interested in

new paradigms and foundational approaches for the design of complex systems and system-of-systems (SoS) architectures.

This program is called CASCADE, short for Complex Adaptive System Composition and Design Environment. Here’s the idea:

Complex interconnected systems are increasingly becoming part of everyday life in both military and civilian environments. In the military domain, air-dominance system-of-systems concepts, such as those being developed under DARPA’s SoSITE effort, envision manned and unmanned aircraft linked by networks that seamlessly share data and resources in real time. In civilian settings such as urban “smart cities”, critical infrastructure systems—water, power, transportation, communications and cyber—are similarly integrated within complex networks. Dynamic systems such as these promise capabilities that are greater than the mere sum of their parts, as well as enhanced resilience when challenged by adversaries or natural disasters. But they are difficult to model and cannot be systematically designed using today’s tools, which are simply not up to the task of assessing and predicting the complex interactions among system structures and behaviors that constantly change across time and space.

To overcome this challenge, DARPA has announced the Complex Adaptive System Composition and Design Environment (CASCADE) program. The goal of CASCADE is to advance and exploit novel mathematical techniques able to provide a deeper understanding of system component interactions and a unified view of system behaviors. The program also aims to develop a formal language for composing and designing complex adaptive systems. A special notice announcing a Proposers Day on Dec. 9, 2015, was released today on FedBizOpps here:

“CASCADE aims to fundamentally change how we design systems for real-time resilient response within dynamic, unexpected environments,” said John Paschkewitz, DARPA program manager. “Existing modeling and design tools invoke static ‘playbook’ concepts that don’t adequately represent the complexity of, say, an airborne system of systems with its constantly changing variables, such as enemy jamming, bad weather, or loss of one or more aircraft. As another example, this program could inform the design of future forward-deployed military surgical capabilities by making sure the functions, structures, behaviors and constraints of the medical system—such as surgeons, helicopters, communication networks, transportation, time, and blood supply—are accurately modeled and understood.”

CASCADE could also help the Department of Defense fulfill its role of providing humanitarian assistance in response to a devastating earthquake, hurricane or other catastrophe, by developing comprehensive response models that account for the many components and interactions inherent in such missions, whether in urban or austere environs.

“We need new design and representation tools to ensure resilience of buildings, electricity, drinking water supply, healthcare, roads and sanitation when disaster strikes,” Paschkewitz said. “CASCADE could help develop models that would provide civil authorities, first responders and assisting military commanders with the sequence and timing of critical actions they need to take for saving lives and restoring critical infrastructure. In the stress following a major disaster, models that could do that would be invaluable.”

The CASCADE program seeks expertise in the following areas:

• Applied mathematics, especially in category theory, algebraic geometry and topology, and sheaf theory

• Operations research, control theory and planning, especially in stochastic and non-linear control

• Modeling and applications responsive to challenges in battlefield medicine logistics and platforms, adaptive logistics, reliability, and maintenance

• Search and rescue platforms and modeling

• Adaptive and resilient urban infrastructure

Metron already designs systems of systems used in Coast Guard search and rescue missions. Their grant proposal was to use category theory and operads to do this better. They needed an academic mathematician as part of their team: that was one of the program’s requirements. So they asked if I was interested.

I had mixed feelings.

On the one hand, I come from a line of peaceniks including Joan Baez, Mimi Fariña, their father the physicist Albert Baez, and my parents. I don’t like how the US government puts so much energy into fighting wars rather than solving our economic, social and environmental problems. It’s interesting that ‘systems of systems engineering’, as a field, is so heavily dominated by the US military. It’s an important subject that could be useful in many ways. We need it for better energy grids, better adaptation to climate change, and so on. I dream of using it to develop ‘ecotechnology’: technology that works with nature instead of trying to battle it and defeat it. But it seems the US doesn’t have the money, or the risk-taking spirit, to fund applications of category theory to those subjects.

On the other hand, I was attracted by the prospect of using category theory to design complex adaptive systems—and using it not just to tackle foundational issues, but also concrete challenges. I liked the idea of working with a team of people who are more practical than me. In this project, a big part of my job would be to write and publish papers: that’s something I can do. But Metron had other people who would try to create prototypes of software for helping the Coast Guard design search and rescue missions.

So I was torn.

In fact, because of my qualms, I’d already turned down an offer from another company that was writing a proposal for the CASCADE program. But the Metron project seemed slightly more attractive—I’m not sure why, perhaps because it was described to me in a more concrete way. And unlike that other company, Metron has a large existing body of software for evaluating complex systems, which should help me focus my theoretical ideas. The interaction between theory and practice can make theory a lot more interesting.

Something tipped the scales and I said yes. We applied for the grant, and we got it.

And so, an interesting adventure began. It will last for 3 years, and I’ll say more about it soon.

by John Baez at October 02, 2016 10:11 PM

September 30, 2016

Clifford V. Johnson - Asymptotia


Super-Strong…?

Are you going to watch the Luke Cage series that debuts today on Netflix? I probably will at some point (I've got several decades-old reasons, and also it was set up well in the excellent Jessica Jones last year).... but not soon, as I've got far too many deadlines. Here's a related item: Using the Luke Cage character as a jumping-off point, physicist Martin Archer has put together a very nice short video about the business of strong and tough (not the same thing) materials in the real world.


Have a look if you want to appreciate the nuances, and learn a bit about what's maybe just over the horizon for new amazing materials that might become part of our everyday lives. Video embed below: [...] Click to continue reading this post

The post Super-Strong…? appeared first on Asymptotia.

by Clifford at September 30, 2016 07:40 PM

ZapperZ - Physics and Physicists

Dark Matter's Biggest Challenge
A very nice article on Forbes' website on the latest challenge in understanding Dark Matter.

It boils down to why, in some cases, Dark Matter dominates, while in others it seems that everything can be satisfactorily explained without it. That is why we continue to study this and why we look for possible Dark Matter candidates. There is still a lot of physics to be done here.


by ZapperZ at September 30, 2016 12:56 PM

September 29, 2016

ZapperZ - Physics and Physicists

Could You Pass A-Level Physics Now?
This won't tell you whether you would pass it, since A-Level Physics consists of several papers, including essay questions. But it is still an interesting test, and you might make a careless mistake if you don't read the questions carefully.

And yes, I did go through the test, and I got 13 out of 13 correct even though I guessed at one of them (I wasn't sure what "specific charge" meant and was too lazy to look it up). The quiz at the end asked if I was an actual physicist! :)

You're probably an actual physicist, aren't you?

Check it out. This is what those A-level kids had to contend with.


by ZapperZ at September 29, 2016 07:54 PM

Symmetrybreaking - Fermilab/SLAC

LHC smashes old collision records

The Large Hadron Collider is now producing about a billion proton-proton collisions per second.

The LHC is colliding protons at a faster rate than ever before, approximately 1 billion times per second. Those collisions are adding up: This year alone the LHC has produced roughly the same number of collisions as it did during all of the previous years of operation together.

This faster collision rate enables scientists to learn more about rare processes and particles such as Higgs bosons, which the LHC produces about once every billion collisions.
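Putting the numbers in this article together (simple back-of-the-envelope arithmetic, not an official count):

```python
collisions_per_second = 1e9    # about a billion proton-proton collisions per second
higgs_per_collision   = 1e-9   # about one Higgs boson per billion collisions
collisions_this_year  = 2.4e15 # the ~2.4 quadrillion collisions quoted for 2016 so far

print(collisions_per_second * higgs_per_collision)  # ~1 Higgs boson produced per second
print(collisions_this_year * higgs_per_collision)   # ~2.4 million Higgs bosons so far this year
```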

“Every time the protons collide, it’s like the spin of a roulette wheel with several billion possible outcomes,” says Jim Olsen, a professor of physics at Princeton University working on the CMS experiment. “From all these possible outcomes, only a few will teach us something new about the subatomic world. A high rate of collisions per second gives us a much better chance of seeing something rare or unexpected.”

Since April, the LHC has produced roughly 2.4 quadrillion particle collisions in both the ATLAS and CMS experiments. The unprecedented performance this year is the result of both the incremental increases in collision rate and the sheer amount of time the LHC is up and running.

“This year the LHC is stable and reliable,” says Jorg Wenninger, the head of LHC operations. “It is working like clockwork. We don’t have much downtime.”

Scientists predicted that the LHC would produce collisions around 30 percent of the time during its operation period. They expected to use the rest of the time for maintenance, rebooting, refilling and ramping the proton beams up to their collision energy. However, these numbers have flipped; the LHC is actually colliding protons 70 percent of the time.

“The LHC is like a juggernaut,” says Paul Laycock, a physicist from the University of Liverpool working on the ATLAS experiment. “We took around a factor of 10 more data compared to last year, and in total we already have more data in Run 2 than we took in the whole of Run 1. Of course the biggest difference between Run 1 and Run 2 is that the data is at twice the energy now, and that’s really important for our physics program.”

This unexpected performance comes after a slow start-up in 2015, when scientists and engineers still needed to learn how to operate the machine at that higher energy.

“With more energy, the machine is much more sensitive,” says Wenninger. “We decided not to push it too much in 2015 so that we could learn about the machine and how to operate at 13 [trillion electronvolts]. Last year we had good performance and no real show-stoppers, so now we are focusing on pushing up the luminosity.”

The increase in collision rate doesn’t come without its difficulties for the experiments.

“The number of hard drives that we buy and store the data on is determined years before we take the data, and it’s based on the projected LHC uptime and luminosity,” Olsen says. “Because the LHC is outperforming all estimates and even the best rosy scenarios, we started to run out of disk space. We had to quickly consolidate the old simulations and data to make room for the new collisions.”

The increased collision rate also increased the importance of vigilant detector monitoring and adjustments of experimental parameters in real time. All the LHC experiments are planning to update and upgrade their experimental infrastructure in winter 2017.

“Even though we were kept very busy by the deluge of data, we still managed to improve on the quality of that data,” says Laycock. “I think the challenges that arose thanks to the fantastic performance of the LHC really brought the best out of ATLAS, and we’re already looking forward to next year.”

Astonishingly, 2.4 quadrillion collisions represent just 1 percent of the total amount planned during the lifetime of the LHC research program. The LHC is scheduled to run through 2037 and will undergo several rounds of upgrades to further increase the collision rate.

“Do we know what we will find? Absolutely not,” Olsen says. “What we do know is that we have a scientific instrument that is unprecedented in human history, and if new particles are produced at the LHC, we will find them.”

by Sarah Charley at September 29, 2016 05:59 PM

September 28, 2016

Axel Maas - Looking Inside the Standard Model

Searching for structure
This time I want to report on a new bachelor thesis, which I supervise. In this project we try to understand a little better the foundations of so-called gauge symmetries. In particular we address some of the ground work we have to lay for understanding our theories.

Let me briefly outline the problem: Most of the theories in particle physics include some kind of redundancy, i.e., there are more things in them than we actually see in experiments. The surplus stuff is actually not real. It is just a kind of mathematical device to make calculations simpler. It is like a ladder, which we bring to climb a wall. We come, use the ladder, and are on top. The ladder we take with us again, and the wall remains as it was. The ladder made life simpler. Of course, we could have climbed the wall without it. But it would have been more painful.

Unfortunately, theories are more complicated than wall climbing.

One of the problems is that we usually cannot solve problems exactly. And as noted before, this can mess up the removal of the surplus stuff.

The project the bachelor student and I are working on has the following basic idea: If we can account for all of the surplus stuff, we should be able to tell whether our approximations did something wrong. It is like repairing an engine: if something is left over afterwards, it is usually not a good sign. Unfortunately, things are again more complicated. For the engine, we just have to look through our workspace to see whether anything is left. But how do we do so for our theories? And this is precisely the project.

So, the project is essentially about listing stuff. We start out with something we know is real and important. For this, we take the simplest thing imaginable: Nothing. Nothing means in this case just an empty universe: no particles, no reactions, no nothing. That is certainly a real thing, and one we want to include in our calculations.

Of this nothing, there are also versions where some of the surplus stuff appears. Like some ghost image of particles. We actually know how to add small amounts of ghost stuff. Like a single particle in a whole universe. But these situations are not so very interesting, as we know how to deal with them. No, the really interesting stuff happens if we fill the whole universe with ghost images. With surplus stuff which we add just to make life simpler. At least originally. And the question is now: How can we add this stuff systematically? As the ghost stuff is not real, we know it must fulfill special mathematical equations.

Now we do something which is very often done in theoretical physics: we use an analogy. The equations in question are not unique to the problem at hand, but appear also in quite different circumstances, although with a completely different meaning. In fact, the same equations describe how, in quantum physics, one particle is bound to another. In quantum physics, depending on the system at hand, there may be one or more different ways in which this binding occurs. You can count them, and there is a set of possibilities which one can label by whole numbers. Incidentally, this feature is where the name quantum originates.

Returning to our original problem, we make the following analogy: enumerating the ghost stuff can be cast into the same form as enumerating the ways of binding two particles together in quantum mechanics. The actual problem is only to find the correct quantum system which is the precise analogue of our original problem. Finding it is still a complicated mathematical problem. Finding just one solution for one example is the aim of this bachelor thesis. But already finding one would be a huge step forward, as so far we do not have one at all. Having it will probably be like having a first stepping stone for crossing a river. From understanding it, we should be able to understand how to generate more. Hopefully, we will eventually understand how to create arbitrarily many such examples. And thus solve our enumeration problem. But this is still in the future. For the moment, we take the first step.

by Axel Maas at September 28, 2016 12:11 PM

September 27, 2016

Symmetrybreaking - Fermilab/SLAC

You keep using that physics word

I do not think it means what you think it means.

Physics can often seem inconceivable. It’s a field of strange concepts and special terms. Language often fails to capture what’s really going on within the math and theories. And to make things even more complicated, physics has repurposed a number of familiar English words. 

Much like Americans in England, folks from beyond the realm of physics may enter to find themselves in a dream within a dream, surrounded by a sea of words that sound familiar but are still somehow completely foreign. 

Not to worry! Symmetry is here to help guide you with this list of words that acquire a new meaning when spoken by physicists.

Illustration by Sandbox Studio, Chicago with Corinne Mucha

Quench
The physics version of quench has nothing to do with Gatorade products or slaking thirst. Instead, a quench is what happens when superconducting materials lose their ability to superconduct (or carry electricity with no resistance). During a quench, the electric current heats up the superconducting wire and the liquid coolant meant to keep the wire at its cool, superconducting temperature warms and turns into a gas that escapes through vents. Quenches are fairly common and an important part of training magnets that will focus and guide beams through particle accelerators. They also take place in superconducting accelerating cavities.

Illustration by Sandbox Studio, Chicago with Corinne Mucha

Cannibalism, strangulation and suffocation

These gruesome words take on a new, slightly kinder meaning in astrophysics lingo. They are different ways that a galaxy's shape or star formation rate can be changed when it is in a crowded environment such as a galaxy cluster. Galactic cannibalism, for example, is what happens when a large galaxy merges with a companion galaxy through gravity, resulting in a larger galaxy.

Illustration by Sandbox Studio, Chicago with Corinne Mucha

Chicane
Depending on how much you know about racecars and driving terms, you may or may not have heard of a chicane. In the driving world, a chicane is an extra turn or two in the road, designed to force vehicles to slow down. This isn’t so different from chicanes in accelerator physics, where collections of four dipole magnets compress a particle beam to cluster the particles together. It squeezes the bunch of particles together so that those in the head (the high-momentum particles at the front of the group) are closer to the tail (the particles in the rear).

Illustration by Sandbox Studio, Chicago with Corinne Mucha

Cooling
A beam cooler won’t be of much use at your next picnic. Beam cooling makes particle accelerators more efficient by keeping the particles in a beam all headed the same direction. Most beams have a tendency to spread out as they travel (something related to the random motion, or “heat,” of the particles), so beam cooling helps kick rogue particles back onto the right path—staying on the ideal trajectory as they race through the accelerator.

Illustration by Sandbox Studio, Chicago with Corinne Mucha

House
In particle physics, a house is a place for magnets to reside in a particle accelerator. House is also used as a collective noun for a group of magnets. Fermilab’s Tevatron particle accelerator, for example, had six sectors, each of which had four houses of magnets.

Illustration by Sandbox Studio, Chicago with Corinne Mucha

Barn
A barn is a unit of measurement used in nuclear and particle physics that indicates the target area (“cross section”) a particle represents. The meaning of the science term was originally classified, owing to the secretive nature of efforts to better understand the atomic nucleus in the 1940s. Now you can know: One barn is equal to 10^-24 cm^2. In the subatomic world, a particle with that size is quite large—and hitting it with another particle is practically like hitting the broad side of a barn.
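To see the unit in action (illustrative numbers chosen by the editor, not from the article: a Higgs production cross section of very roughly 50 picobarns and an instantaneous luminosity of about 10^34 cm^-2 s^-1 are ballpark LHC values), the expected event rate is simply the cross section times the luminosity:

```python
BARN_CM2 = 1e-24                     # 1 barn = 10^-24 cm^2
PICOBARN_CM2 = 1e-12 * BARN_CM2

sigma_higgs = 50 * PICOBARN_CM2      # ~50 pb, a rough LHC Higgs production cross section
luminosity = 1e34                    # instantaneous luminosity, cm^-2 s^-1

rate = sigma_higgs * luminosity      # events per second = cross section x luminosity
print(f"~{rate:.1f} Higgs bosons produced per second")   # about 0.5 per second
```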

Illustration by Sandbox Studio, Chicago with Corinne Mucha

Cavity
Most people dread cavities, but not in particle physics. A cavity is the name for a common accelerator part. These metal chambers shape the accelerator’s electric field and propel particles, pushing them closer to the speed of light. The electromagnetic field within a radio-frequency cavity changes back and forth rapidly, kicking the particles along. The cavities also keep the particles bunched together in tight groups, increasing the beam’s intensity.

Illustration by Sandbox Studio, Chicago with Corinne Mucha

Doping
Most people associate doping with drug use and sports. But doping can be so much more! It’s a process to introduce additional materials (often considered impurities) into a metal to change its conducting properties. Doped superconductors can be far more efficient than their pure counterparts. Some accelerator cavities made of niobium are doped with atoms of nitrogen. This is being investigated for use in designing superconducting magnets as well.

Illustration by Sandbox Studio, Chicago with Corinne Mucha

Injection
In particle physics, injections don’t deliver a vaccine through a needle into your arm. Instead, injections are a way to transfer particle beams from one accelerator into another. Particle beams can be injected from a linear accelerator into a circular accelerator, or from a smaller circular accelerator (a booster) into a larger one.

Illustration by Sandbox Studio, Chicago with Corinne Mucha

Decay
Most people associate decay with things that are rotting. But a particle decay is the process through which one particle changes into other particles. Most particles in the Standard Model are unstable, which means that they decay almost immediately after coming into being. When a particle decays, its energy is divided into less massive particles, which may then decay as well.
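A minimal quantitative illustration (an editorial sketch using the muon, whose mean lifetime at rest is about 2.2 microseconds): for an unstable particle with mean lifetime tau, the fraction of a sample surviving after a time t is exp(-t/tau).

```python
import math

TAU_MUON = 2.197e-6  # mean muon lifetime at rest, in seconds

def surviving_fraction(t, tau=TAU_MUON):
    """Fraction of an initial sample of unstable particles still present after time t."""
    return math.exp(-t / tau)

for t in (1.0e-6, 2.197e-6, 1.0e-5):
    print(f"after {t:.2e} s, {surviving_fraction(t):.3f} of the muons remain")
```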

by Lauren Biron at September 27, 2016 04:21 PM

September 26, 2016

Clifford V. Johnson - Asymptotia

Where I’d Rather Be…?

Right now, I'd much rather be on the sofa reading a novel (or whatever it is she's reading)....instead of drawing all those floorboards near her. (Going to add "rooms with lots of floorboards" to [...] Click to continue reading this post

The post Where I’d Rather Be…? appeared first on Asymptotia.

by Clifford at September 26, 2016 08:51 PM

September 25, 2016

Sean Carroll - Preposterous Universe

Live Q&As, Past and Future

On Friday I had a few minutes free, and did an experiment: put my iPhone on a tripod, pointed it at myself, and did a live video on my public Facebook page, taking questions from anyone who happened by. There were some technical glitches, as one might expect from a short-notice happening. The sound wasn’t working when I first started, and in the recording below the video fails (replacing the actual recording with a still image of me sideways, for inexplicable reasons) just when the sound starts working. (I don’t think this happened during the actual event, but maybe it did and everyone was too polite to mention it.) And for some reason the video keeps going long after the 20-some minutes for which I was actually recording.

But overall I think it was fun and potentially worth repeating. If I were to make this an occasional thing, how best to do it? This time around I literally just read off a selection of questions that people were typing into the Facebook comment box. Alternatively, I could just talk on some particular topic, or I could solicit questions ahead of time and pick out some good ones to answer in detail.

What do you folks think? Also — is Facebook Live the right tool for this? I know the kids these days use all sorts of different technologies. No guarantees that I’ll have time to do this regularly, but it’s worth contemplating.

What makes the most sense to talk about in live chats?

by Sean Carroll at September 25, 2016 08:32 PM

Lubos Motl - string vacua and pheno

Chen-Ning Yang against Chinese colliders
The plans to build the world's new greatest collider in China have many prominent supporters – including Shing-Tung Yau, Nima Arkani-Hamed, David Gross, Edward Witten – but SixthTone and South China Morning Post just informed us about a very prominent foe: Chen-Ning Yang, the more famous half of Lee-Yang and Yang-Mills.

He is about 94 years old now but his brain is very active and his influence may even be enough to kill the project(s).

The criticism is mainly framed as a criticism of CEPC (Circular Electron-Positron Collider), a 50-70-kilometer-long [by circumference] lepton accelerator. But I guess that if the relevant people decided to build another hadron machine in China, and recall that SPPC (Super Proton-Proton Collider) is supposed to be located in the same tunnel, his criticism would be about the same. In other words, Yang is against all big Chinese colliders. If you have time, read these 403 pages on the CEPC-SPPC project. Yang may arguably make all this work futile by spitting a few milliliters of saliva.

He wrote his essay for a Chinese newspaper 3 days ago,
China shouldn't build big colliders today (autom. EN; orig. CN)
The journalists frame this opinion as an exchange with Shing-Tung Yau who famously co-wrote a pro-Chinese-collider book.

My Chinese isn't flawless. Also, his opinions and arguments aren't exactly innovative. But let me sketch what he's saying.

He says that Yau has misinterpreted his views when he said that Yang was incomprehensibly against further progress in high-energy physics. Yang claims to be against the Chinese colliders only. Well, having read the whole op-ed, I wouldn't summarize his views that way.

His reasons to oppose the accelerator are:
  1. In Texas, the SSC turned out to be painful and a "bottomless pit" or a "black hole". Yang suggests it must always be the case – well, that wasn't really the case with the LHC. And he suggests that $10-$20 billion is too much.
  2. China is just a developing country. Its GDP per capita is below that of Brazil, Mexico, or Malaysia. There are poor farmers, need to pay for the environment(alism), health, medicine etc. and those should be problems of a higher priority.
  3. The collider would also steal the money from other fields of science.
  4. Supporters of the collider argue that the fundamental theory isn't complete – because gravity is missing and unification hasn't been understood; and they want to find evidence of SUSY. However, Yang is eager to say lots of the usual anti-SUSY and anti-HEP clichés. SUSY has no experimental evidence – funny, that's exactly why people keep on dreaming about more powerful experiments.
  5. High-energy physics hasn't improved human lives in the last 70 years and won't do so. This item is the main one – but not the only one – suggesting that the Chinese project isn't the only problem for Yang.
  6. China – and IHEP in particular – hasn't achieved anything in high-energy physics. Its contributions remain below 1% of the world's. Also, if someone gets a Nobel prize for a discovery there, he will probably be a non-Chinese.
  7. He recommends cheaper investments – in new ways to accelerate particles, and in work on theory, e.g. string theory.
You can see that it's a mixed bag with some (but not all) of the anti-HEP slogans combined with some left-wing propaganda. I am sorry but especially the social arguments are just bogus.

What decides a country's ability to undertake a big project is primarily the total GDP, not the GDP per capita. Ancient China built the Great Wall even though its GDP per capita was much lower than today's. Those people couldn't have bought a single Xiaomi Redmi 3 Android smartphone for their salary (I am currently considering this octa-core $150 smartphone – which seems to be the #1 bestselling Android phone in Czechia right now – as a gift). But they still built the wall. And today, Chinese companies are among the most important producers of many high-tech products; I just mentioned one example. As you may see with your naked eyes, this capability in no way contradicts China's low GDP per capita.

The idea that a country makes much social progress by redistributing the money it has among all the people is just a communist delusion. That's how China worked before it started to grow some 30 years ago. You just shouldn't spend or devour all this money – for healthcare of the poor citizens etc. – if you want China to qualitatively improve. You need to invest into sufficiently well-defined things. You may take those $10-$20 billion for the Chinese collider projects and spread them among the Chinese citizens. But that will bring some $10-$20 to each person – perhaps one dinner in a fancier restaurant or one package of good cigarettes. It's normal for the poor people to spend the money in such a way that the wealth quickly evaporates. The concentration of the capital is even more needed in poor countries that want to grow.

Also, China's contribution to HEP – and to other fields – is limited now. But that's mostly because the kinds of moves and investments that would integrate China into the world's scientific community weren't made in the past, or at least were not numerous.

Yang's remarks about the hypothetical Nobel prizes are specious, too. I don't know who will get Nobel prizes for discoveries at Chinese colliders, if anyone, so it's pure speculation. But the Nobel prize money is clearly not why colliders are being built. Higgs and Englert got some $1 million from the Nobel committee while the LHC cost $10 billion or so. The prizes can in no way be considered a "repayment of the investment". What experiments like that bring to science and to mankind is much more than some personal wealth for a few people.

You may see that regardless of the recipients of the prize money (and regardless of the disappointing pro-SM results coming from the LHC), everyone understands that because of the LHC and its status, Europe has become essential in state-of-the-art particle physics. Many people may like to say unfriendly things about particle physics but in the end, I think that they also understand that at least among well-defined and concentrated disciplines, particle physics is the royal discipline of science. A "center of mass" of this discipline is located on the Swiss-French border. In ten years, China could take this leadership from Europe. This would be a benefit for China that is far more valuable than $10-$20 billion. China – whose annual GDP was some $11 trillion in 2015 – is paying much more money for various other things.

Off-topic: Some news reports talk about a new "Madala boson". It seems to be all about this 2-week-old, 5-page-long hep-ph preprint presenting a two-Higgs-doublet model that also claims to say something about the composition of dark matter (which is said to be composed of a new scalar \(\chi\)). I've seen many two-Higgs-doublet papers and papers about dark matter, and I don't see a sense in which this paper is more important or more persuasive.

The boson should already have been seen in the LHC data, but it's not.

Update Chinese collider:

On September 25th or so, Maria Spiropulu linked to this new Chinese article where 2+2 scholars support/dismiss the Chinese collider plans. David Gross' pro-collider story is the most detailed argumentation.

by Luboš Motl ( at September 25, 2016 05:08 AM

John Baez - Azimuth

Struggles with the Continuum (Part 8)

We’ve been looking at how the continuum nature of spacetime poses problems for our favorite theories of physics—problems with infinities. Last time we saw a great example: general relativity predicts the existence of singularities, like black holes and the Big Bang. I explained exactly what these singularities really are. They’re not points or regions of spacetime! They’re more like ways for a particle to ‘fall off the edge of spacetime’. Technically, they are incomplete timelike or null geodesics.
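In symbols (this is just the standard definition, phrased compactly): an affinely parametrized geodesic \(\gamma : [0, a) \to M\) is incomplete if \(a < \infty\) and \(\gamma\) cannot be extended to a geodesic defined on a larger interval. A freely falling particle whose worldline is such a geodesic simply runs out of spacetime after a finite amount of affine parameter (a finite amount of proper time, in the timelike case).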

The next step is to ask whether these singularities rob general relativity of its predictive power. The ‘cosmic censorship hypothesis’, proposed by Penrose in 1969, claims they do not.

In this final post I’ll talk about cosmic censorship, and conclude with some big questions… and a place where you can get all these posts in a single file.

Cosmic censorship

To say what we want to rule out, we must first think about what behaviors we consider acceptable. Consider first a black hole formed by the collapse of a star. According to general relativity, matter can fall into this black hole and ‘hit the singularity’ in a finite amount of proper time, but nothing can come out of the singularity.
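Just to put a number on that finite amount of proper time, a standard textbook result: inside a Schwarzschild black hole of mass \(M\), no observer can last longer than

\[ \tau_{\max} = \pi \frac{GM}{c^3} \approx 15\,\mu\mathrm{s} \times \frac{M}{M_\odot} \]

between crossing the horizon and hitting the singularity, no matter how they fire their rockets.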

The time-reversed version of a black hole, called a ‘white hole’, is often considered more disturbing. White holes have never been seen, but they are mathematically valid solutions of Einstein’s equation. In a white hole, matter can come out of the singularity, but nothing can fall in. Naively, this seems to imply that the future is unpredictable given knowledge of the past. Of course, the same logic applied to black holes would say the past is unpredictable given knowledge of the future.

If white holes are disturbing, perhaps the Big Bang should be more so. In the usual solutions of general relativity describing the Big Bang, all matter in the universe comes out of a singularity! More precisely, if one follows any timelike geodesic back into the past, it becomes undefined after a finite amount of proper time. Naively, this may seem a massive violation of predictability: in this scenario, the whole universe ‘sprang out of nothing’ about 14 billion years ago.

However, in all three examples so far—astrophysical black holes, their time-reversed versions and the Big Bang—spacetime is globally hyperbolic. I explained what this means last time. In simple terms, it means we can specify initial data at one moment in time and use the laws of physics to predict the future (and past) throughout all of spacetime. How is this compatible with the naive intuition that a singularity causes a failure of predictability?

For any globally hyperbolic spacetime \(M\), one can find a smoothly varying family of Cauchy surfaces \(S_t\) (\(t \in \mathbb{R}\)) such that each point of \(M\) lies on exactly one of these surfaces. This amounts to a way of chopping spacetime into ‘slices of space’ for various choices of the ‘time’ parameter \(t\). For an astrophysical black hole, the singularity is in the future of all these surfaces. That is, an incomplete timelike or null geodesic must go through all these surfaces \(S_t\) before it becomes undefined. Similarly, for a white hole or the Big Bang, the singularity is in the past of all these surfaces. In either case, the singularity cannot interfere with our predictions of what occurs in spacetime.
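For readers who want the foliation spelled out: a theorem of Bernal and Sánchez (sharpening older work of Geroch) says that any globally hyperbolic spacetime is isometric to a product \(\mathbb{R} \times S\) with a metric of the form

\[ g = -\beta \, dt^2 + h_t , \]

where \(\beta\) is a positive function and \(h_t\) is a Riemannian metric on the slice \(S_t = \{t\} \times S\). Each slice is a Cauchy surface, so specifying data on one slice determines the solution everywhere else.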

A more challenging example is posed by the Kerr–Newman solution of Einstein’s equation coupled to the vacuum Maxwell equations. When

\[ e^2 + (J/m)^2 < m^2 \]

this solution describes a rotating charged black hole with mass \(m\), charge \(e\) and angular momentum \(J\), in units where \(c = G = 1\). However, an electron violates this inequality. In 1968, Brandon Carter pointed out that if the electron were described by the Kerr–Newman solution, it would have a gyromagnetic ratio of \(g = 2\), much closer to the true answer than a classical spinning sphere of charge, which gives \(g = 1\). But since

\[ e^2 + (J/m)^2 > m^2 \]

this solution gives a spacetime that is not globally hyperbolic: it has closed timelike curves! It also contains a ‘naked singularity’. Roughly speaking, this is a singularity that can be seen by arbitrarily faraway observers in a spacetime whose geometry asymptotically approaches that of Minkowski spacetime. The existence of a naked singularity implies a failure of global hyperbolicity.
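To get a feeling for how badly the electron violates the bound, here are rough numbers in Planck units (\(G = c = \hbar = 1\), with charge measured in Planck charges); this is only a back-of-the-envelope check. The electron has \(m \approx 4.2 \times 10^{-23}\), \(e = \sqrt{\alpha} \approx 0.085\) and \(J = 1/2\), so

\[ e^2 + (J/m)^2 \;\approx\; 7\times 10^{-3} + 1.4 \times 10^{44} \;\gg\; m^2 \approx 1.8\times 10^{-45}. \]

The inequality fails by dozens of orders of magnitude, so the corresponding Kerr–Newman geometry is wildly ‘super-extremal’.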

The cosmic censorship hypothesis comes in a number of forms. The original version due to Penrose is now called ‘weak cosmic censorship’. It asserts that in a spacetime whose geometry asymptotically approaches that of Minkowski spacetime, gravitational collapse cannot produce a naked singularity.

In 1991, Preskill and Thorne made a bet against Hawking in which they claimed that weak cosmic censorship was false. Hawking conceded this bet in 1997 when a counterexample was found. This features finely-tuned infalling matter poised right on the brink of forming a black hole. It almost creates a region from which light cannot escape—but not quite. Instead, it creates a naked singularity!

Given the delicate nature of this construction, Hawking did not give up. Instead he made a second bet, which says that weak cosmic censorship holds ‘generically’ — that is, for an open dense set of initial conditions.

In 1999, Christodoulou proved that for spherically symmetric solutions of Einstein’s equation coupled to a massless scalar field, weak cosmic censorship holds generically. While spherical symmetry is a very restrictive assumption, this result is a good example of how, with plenty of work, we can make progress in rigorously settling the questions raised by general relativity.

Indeed, Christodoulou has been a leader in this area. For example, the vacuum Einstein equations have solutions describing gravitational waves, much as the vacuum Maxwell equations have solutions describing electromagnetic waves. However, gravitational waves can actually form black holes when they collide. This raises the question of the stability of Minkowski spacetime. Must sufficiently small perturbations of the Minkowski metric go away in the form of gravitational radiation, or can tiny wrinkles in the fabric of spacetime somehow amplify themselves and cause trouble—perhaps even a singularity? In 1993, together with Klainerman, Christodoulou proved that Minkowski spacetime is indeed stable. Their proof fills a 514-page book.

In 2008, Christodoulou completed an even longer rigorous study of the formation of black holes. This can be seen as a vastly more detailed look at questions which Penrose’s original singularity theorem addressed in a general, preliminary way. Nonetheless, there is much left to be done to understand the behavior of singularities in general relativity.


In this series of posts, we’ve seen that in every major theory of physics, challenging mathematical questions arise from the assumption that spacetime is a continuum. The continuum threatens us with infinities! Do these infinities threaten our ability to extract predictions from these theories—or even our ability to formulate these theories in a precise way?

We can answer these questions, but only with hard work. Is this a sign that we are somehow on the wrong track? Is the continuum as we understand it only an approximation to some deeper model of spacetime? Only time will tell. Nature is providing us with plenty of clues, but it will take patience to read them correctly.

For more

To delve deeper into singularities and cosmic censorship, try this delightful book, which is free online:

• John Earman, Bangs, Crunches, Whimpers and Shrieks: Singularities and Acausalities in Relativistic Spacetimes, Oxford U. Press, Oxford, 1993.

To read this whole series of posts in one place, with lots more references and links, see:

• John Baez, Struggles with the continuum.

by John Baez at September 25, 2016 01:00 AM

September 21, 2016

Symmetrybreaking - Fermilab/SLAC

Small cat, big science

The proposed International Linear Collider has a fuzzy new ally.

Hello Kitty is known throughout Japan as the poster girl (poster cat?) of kawaii, a segment of pop culture built around all things cute.

But recently she took on a new job: representing the proposed International Linear Collider.

At the August International Conference on High Energy Physics in Chicago, ILC boosters passed out folders featuring the white kitty wearing a pair of glasses, a shirt with pens in the pocket and a bow with an L for “Lagrangian,” the name of the long equation in the background. Some of the folders picture the iconic cat sitting on an ILC cryomodule.

Hello Kitty has previously tried activities such as cooking, photography and even scuba diving. This may be her first foray into international research.

Japan is considering hosting the ILC, a proposed accelerator that could mass-produce Higgs bosons and other fundamental particles. Japan’s Advanced Accelerator Association partnered with the company Sanrio to create the special kawaii gear in the hopes of drawing attention to the large-scale project.

The ILC: Science you’ll want to snuggle.

by Ricarda Laasch at September 21, 2016 04:15 PM

Lubos Motl - string vacua and pheno

Nanopoulos' and pals' model is back to conquer the throne
Once upon a time, there was an evil witch-and-bitch named Cernette whose mass was \(750\GeV\) and who wanted to become the queen instead of the beloved king.

Fortunately, that witch-and-bitch has been killed and what we're experiencing is
The Return of the King: No-Scale \({\mathcal F}\)-\(SU(5)\),
Li, Maxin, and Nanopoulos point out. It's great news that the would-be \(750\GeV\) particle has been liquidated. They revisited the predictions of their class of F-theory-based, grand unified, no-scale models and found some consequences that they surprisingly couldn't have told us about in the previous 10 papers and that we should be happy about, anyway.

First, they suddenly claim that the theoretical considerations within their scheme are enough to assert that the mass of the gluino exceeds \(1.9\TeV\),\[

m_{\tilde g} \geq 1.9\TeV.

\] This is an excellent, confirmed prediction of a supersymmetric theory because the LHC experiments also say that with these conventions, the mass of the gluino exceeds \(1.9\TeV\). ;-)

Just to be sure, I have noticed the gradual increase of the masses predicted by their models over the years, so I don't take the newest values too seriously. But I believe that there is still some justification, so the probability could be something like 0.1% that in a year or two, we will consider their model a strong contender that has been partly validated by the experiments.

In the newest paper, they want the Higgs and top mass to be around\[

m_h\approx 125\GeV, \quad m_{\rm top} \approx 174\GeV

\] while the new SUSY-related parameters are\[

\begin{aligned}
\tan\beta &\approx 25\\
M_V^{\rm flippon}&\approx (30-80)\TeV\\
M_{\chi^1_0}&\approx 380\GeV\\
M_{\tilde \tau^\pm} &\approx M_{\chi^1_0}+1 \GeV\\
M_{\tilde t_1} &\approx 1.7\TeV\\
M_{\tilde u_R} &\approx 2.7\TeV\\
M_{\tilde g} &\approx 2.1\TeV
\end{aligned}

\] while the cosmological parameter \(\Omega h^2\approx 0.118\), the muon's anomalous magnetic moment \(\Delta a_\mu\approx 2\times 10^{-10}\), the branching ratio of a bottom decay \(Br(b\to s\gamma)\approx 0.00035\), the muon pair branching ratio for a B-meson \(Br(B^0_s\to \mu^+\mu^-)\approx 3.2\times 10^{-9}\), the spin-independent and spin-dependent cross sections \(\sigma_{SI}\approx (1.0-1.5)\times 10^{-11}\,{\rm pb}\) and \(\sigma_{SD} \approx (4-6)\times 10^{-9}\,{\rm pb}\), and the proton lifetime\[

\tau (p\to e^+ \pi^0) \approx 1.3\times 10^{35}\,{\rm years}.

\] Those are cool, specific predictions that are almost independent of the choice of the point in their parameter space. If one takes those claims seriously, theirs is a highly predictive theory.

But one reason I wrote this blog post was their wonderfully optimistic, fairy-tale-styled rhetoric. For example, the second part of their conclusions says:
While SUSY enthusiasts have endured several setbacks over the prior few years amidst the discouraging results at the LHC in the search for supersymmetry, it is axiomatic that as a matter of course, great triumph emerges from momentary defeat. As the precession of null observations at the LHC has surely dampened the spirits of SUSY proponents, the conclusion of our analysis here indicates that the quest for SUSY may just be getting interesting.
So dear SUSY proponents, just don't despair, return to your work, and get ready for the great victory.

Off-topic: Santa Claus is driving a Škoda and he parks on the roofs whenever he brings gifts to kids in the Chinese countryside. What a happy driver.

by Luboš Motl ( at September 21, 2016 10:54 AM

September 19, 2016

Robert Helling - atdotde

Brute forcing Crazy Game Puzzles
In the 1980s, as a kid I loved my Crazy Turtles Puzzle ("Das verrückte Schildkrötenspiel"). For a number of variations, see here or here.

I had completely forgotten about those, but a few days ago, I saw a self-made reincarnation when staying at a friend's house:

I tried for a few minutes to solve it, unsuccessfully (in case it is not clear: you are supposed to arrange the nine tiles in a square such that they form color-matching arrows wherever they meet).

So I took the picture above with the plan to either try a bit more at home or write a program to solve it. Yesterday, I had about an hour and did the latter. I am a bit proud of the implementation I came up with, and in particular of the fact that I essentially got it right on the first attempt: it found the unique solution the first time I executed it. So, here I share it:


# Edge colors: arrow backs are 1 (rot/red), 2 (gelb/yellow), 3 (gruen/green), 4 (blau/blue);
# the matching arrow points are 8, 7, 6, 5, so a matching pair of edges sums to 9.

# The nine tiles; each 4-digit number lists a tile's edge colors clockwise from the top.
@karten = (7151, 6754, 4382, 2835, 5216, 2615, 2348, 8253, 4786);

foreach $karte (0..8) {
    $farbe[$karte] = [split //, $karten[$karte]];
}

sub ausprobieren {
    my $pos = shift;

    foreach my $karte (0..8) {
        next if $benutzt[$karte];
        foreach my $dreh (0..3) {
            if ($pos % 3) {
                # Not in the left column: the left edge has to match the right
                # edge of the tile placed immediately to the left.
                $suche = 9 - $farbe[$gelegt[$pos - 1]]->[(1 - $drehung[$gelegt[$pos - 1]]) % 4];
                next if $farbe[$karte]->[(3 - $dreh) % 4] != $suche;
            }
            if ($pos >= 3) {
                # Not in the top row: the top edge has to match the bottom
                # edge of the tile placed directly above.
                $suche = 9 - $farbe[$gelegt[$pos - 3]]->[(2 - $drehung[$gelegt[$pos - 3]]) % 4];
                next if $farbe[$karte]->[(4 - $dreh) % 4] != $suche;
            }

            # Place the tile and recurse; undo the move afterwards.
            $benutzt[$karte] = 1;
            $gelegt[$pos] = $karte;
            $drehung[$karte] = $dreh;

            if ($pos == 8) {
                print "Fertig!\n";
                for $l (0..8) {
                    print "$gelegt[$l] $drehung[$gelegt[$l]]\n";
                }
            } else {
                &ausprobieren($pos + 1);
            }
            $benutzt[$karte] = 0;
        }
    }
}

&ausprobieren(0);   # start the search at the top-left position
Sorry for variable names in German, but the idea should be clear. Regarding the implementation: red, yellow, green and blue backs of arrows get numbers 1,2,3,4 respectively and pointy sides of arrows 8,7,6,5 (so matching combinations sum to 9).

It implements depth-first tree search where tile positions (numbered 0 to 8) are tried left to right, top to bottom. So tile $n$ shares a vertical edge with tile $n-1$ unless its number is 0 mod 3 (leftmost column), and it shares a horizontal edge with tile $n-3$ unless $n$ is less than 3, which means it is in the first row.

It tries rotating tiles by 0 to 3 times 90 degrees clockwise, so the edge that has to match a neighboring tile can also be found with mod-4 arithmetic, as sketched below.
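To make the index arithmetic concrete, here is a tiny standalone check (the specific tile and the assumption that the four digits run clockwise starting from the top edge are read off from the script above; the snippet itself is just an illustration):

# Directions: 0 = top, 1 = right, 2 = bottom, 3 = left.
my @edges = split //, "7151";    # the first tile from @karten
my $dreh  = 1;                   # one clockwise quarter turn

for my $dir (0..3) {
    # After $dreh quarter turns, direction $dir shows the edge that
    # originally sat at index ($dir - $dreh) mod 4.
    print "direction $dir shows edge color ", $edges[($dir - $dreh) % 4], "\n";
}

For the first tile (7151) rotated once, this prints colors 1, 7, 1, 5 for top, right, bottom and left, i.e. the whole tile has simply been turned by 90 degrees.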

by Robert Helling ( at September 19, 2016 07:43 PM


