Particle Physics Planet


July 24, 2016

Christian P. Robert - xi'an's og

common derivation for Metropolis–Hastings and other MCMC algorithms

Khoa Tran and Robert Kohn from UNSW just arXived a paper on a comprehensive derivation of a large range of MCMC algorithms, beyond Metropolis-Hastings. The idea is to decompose the MCMC move into

  1. a random completion of the current value θ into V;
  2. a deterministic move T from (θ,V) to (ξ,W), where only ξ matters.

If this sounds like a new version of Peter Green’s completion at the core of his 1995 RJMCMC algorithm, it is because it is indeed essentially the same notion. The resort to this completion allows for a standard form of the Metropolis-Hastings algorithm, which leads to the correct stationary distribution if T is self-inverse. This representation covers Metropolis-Hastings algorithms, Gibbs sampling, Metropolis-within-Gibbs and auxiliary variables methods, slice sampling, recursive proposals, directional sampling, Langevin and Hamiltonian Monte Carlo, NUTS sampling, pseudo-marginal Metropolis-Hastings algorithms, and pseudo-marginal Hamiltonian Monte Carlo, as discussed by the authors. Given this representation of the Markov chain through a random transform, I wonder if Peter Glynn’s trick mentioned in the previous post on retrospective Monte Carlo applies in this generic setting (as it could considerably improve convergence…)
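To make the framework concrete, here is a minimal Python sketch (mine, not the authors’) of one such move: complete θ with V, apply a deterministic self-inverse map T, and accept with the usual Metropolis–Hastings ratio. It assumes a volume-preserving T, so the Jacobian term is omitted; this holds for the shift-and-flip map used below, which recovers plain random-walk Metropolis as a special case.

    import numpy as np

    rng = np.random.default_rng(0)

    def involutive_mh_step(theta, log_target, sample_v, log_q, T):
        # 1. random completion of the current value theta into V
        v = sample_v(theta)
        # 2. deterministic self-inverse move T: (theta, V) -> (xi, W)
        xi, w = T(theta, v)
        # 3. Metropolis-Hastings acceptance (assumes |det Jacobian of T| = 1)
        log_alpha = (log_target(xi) + log_q(w, xi)) - (log_target(theta) + log_q(v, theta))
        return xi if np.log(rng.uniform()) < log_alpha else theta

    # Special case: random-walk Metropolis on a standard normal target.
    log_target = lambda t: -0.5 * t ** 2
    sample_v = lambda t: rng.normal(0.0, 1.0)   # completion V ~ N(0, 1)
    log_q = lambda v, t: -0.5 * v ** 2          # its log-density, up to an additive constant
    T = lambda t, v: (t + v, -v)                # self-inverse: T(T(t, v)) = (t, v)

    chain = [0.0]
    for _ in range(10000):
        chain.append(involutive_mh_step(chain[-1], log_target, sample_v, log_q, T))
    print(np.mean(chain), np.var(chain))        # close to 0 and 1 for the standard normal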


Filed under: Books, pictures, Statistics, Travel, University life Tagged: auxiliary variables, directional sampling, Gibbs sampling, Hamiltonian Monte Carlo, Metropolis-Hastings algorithms, Metropolis-within-Gibbs algorithm, NUTS, pseudo-marginal MCMC, recursive proposals, RJMCMC, slice sampling, Sydney, UNSW

by xi'an at July 24, 2016 10:16 PM

The n-Category Cafe

Topological Crystals (Part 1)


Over on Azimuth I posted an article about crystals:

• John Baez, Diamonds and triamonds, Azimuth, 11 April 2016.

In the comments on that post, a bunch of us worked on some puzzles connected to ‘topological crystallography’—a subject that blends graph theory, topology and mathematical crystallography. You can learn more about that subject here:

• Tosio Sunada, Crystals that nature might miss creating, Notices of the AMS 55 (2008), 208–215.

Greg Egan and I got so interested that we wrote a paper about it!

• John Baez and Greg Egan, Topological crystals.

I’ll explain the basic ideas in a series of posts here.

First, a few personal words.

I feel a bit guilty putting so much work into this paper when I should be developing network theory to the point where it does our planet some good. I seem to need a certain amount of beautiful pure math to stay sane. But this project did at least teach me a lot about the topology of graphs.

For those not in the know, applying homology theory to graphs might sound fancy and interesting. For people who have studied a reasonable amount of topology, it probably sounds easy and boring. The first homology of a graph of genus g is a free abelian group on g generators: it’s a complete invariant of connected graphs up to homotopy equivalence. Case closed!

But there’s actually more to it, because studying graphs up to homotopy equivalence kills most of the fun. When we’re studying networks in real life we need a more refined outlook on graphs. So some aspects of this project might pay off, someday, in ways that have nothing to do with crystallography. But right now I’ll just talk about it as a fun self-contained set of puzzles.

I’ll start by quickly sketching how to construct topological crystals, and illustrate it with the example of graphene, a 2-dimensional form of carbon:

I’ll precisely state our biggest result, which says when the construction gives a crystal where the atoms don’t bump into each other and the bonds between atoms don’t cross each other. Later I may come back and add detail, but for now you can find details in our paper.

Constructing topological crystals

The ‘maximal abelian cover’ of a graph plays a key role in Sunada’s work on topological crystallography. Just as the universal cover of a connected graph X has the fundamental group \pi_1(X) as its group of deck transformations, the maximal abelian cover, denoted \overline{X}, has the abelianization of \pi_1(X) as its group of deck transformations. It thus covers every other connected cover of X whose group of deck transformations is abelian. Since the abelianization of \pi_1(X) is the first homology group H_1(X,\mathbb{Z}), there is a close connection between the maximal abelian cover and homology theory.

In our paper, Greg and I prove that for a large class of graphs, the maximal abelian cover can naturally be embedded in the vector space H_1(X,\mathbb{R}). We call this embedded copy of \overline{X} a ‘topological crystal’. The symmetries of the original graph can be lifted to symmetries of its topological crystal, but the topological crystal also has an n-dimensional lattice of translational symmetries. In 2- and 3-dimensional examples, the topological crystal can serve as the blueprint for an actual crystal, with atoms at the vertices and bonds along the edges.

The general construction of topological crystals was developed by Kotani and Sunada, and later by Eon. Sunada uses ‘topological crystal’ for an even more general concept, but we only need a special case.

Here’s how it works. We start with a graph X. This has a space C_0(X,\mathbb{R}) of 0-chains, which are formal linear combinations of vertices, and a space C_1(X,\mathbb{R}) of 1-chains, which are formal linear combinations of edges. There is a boundary operator

\partial \colon C_1(X,\mathbb{R}) \to C_0(X,\mathbb{R})

This is the linear operator sending any edge to the difference of its two endpoints. The kernel of this operator is called the space of 1-cycles, Z_1(X,\mathbb{R}). There is an inner product on the space of 1-chains such that edges form an orthonormal basis. This determines an orthogonal projection

\pi \colon C_1(X,\mathbb{R}) \to Z_1(X,\mathbb{R})

For a graph, Z_1(X,\mathbb{R}) is isomorphic to the first homology group H_1(X,\mathbb{R}). So, to obtain the topological crystal of X, we need only embed its maximal abelian cover \overline{X} in Z_1(X,\mathbb{R}). We do this by embedding \overline{X} in C_1(X,\mathbb{R}) and then projecting it down via \pi.
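Since the construction so far is pure linear algebra, here is a small numpy sketch (my own illustration, not code from the paper) of the two ingredients just described: the boundary matrix of a graph with chosen edge orientations, and the orthogonal projection of 1-chains onto the cycle space Z_1(X,\mathbb{R}). The example graph is the one used for graphene below: two vertices joined by three edges.

    import numpy as np

    def boundary_matrix(n_vertices, edges):
        # Boundary operator C_1 -> C_0: each oriented edge (s, t) is sent to t - s,
        # with the edges taken as an orthonormal basis of the 1-chains.
        D = np.zeros((n_vertices, len(edges)))
        for j, (s, t) in enumerate(edges):
            D[s, j] -= 1.0
            D[t, j] += 1.0
        return D

    def cycle_projection(D, tol=1e-10):
        # Orthogonal projection of C_1 onto Z_1 = ker(boundary), built from an
        # orthonormal basis of the kernel obtained via the SVD.
        _, s, vt = np.linalg.svd(D)
        rank = int(np.sum(s > tol))
        Z = vt[rank:].T                  # columns: orthonormal basis of the cycle space
        return Z @ Z.T

    edges = [(0, 1), (0, 1), (0, 1)]     # two vertices joined by three edges
    P = cycle_projection(boundary_matrix(2, edges))
    print(np.linalg.matrix_rank(P))      # 2: the cycle space is a plane, as in the example below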

To accomplish this, we need to fix a basepoint for X. Each path \gamma in X starting at this basepoint determines a 1-chain c_\gamma. These 1-chains correspond to the vertices of \overline{X}. The graph \overline{X} has an edge from c_\gamma to c_{\gamma'} whenever the path \gamma' is obtained by adding an extra edge to \gamma. This edge is a straight line segment from the point c_\gamma to the point c_{\gamma'}.

The hard part is checking that the projection \pi maps this copy of \overline{X} into Z_1(X,\mathbb{R}) in a one-to-one manner. In Theorems 6 and 7 of our paper we prove that this happens precisely when the graph X has no ‘bridges’: that is, edges whose removal would disconnect X.

Kotani and Sunada noted that this condition is necessary. That’s actually pretty easy to see. The challenge was to show that it’s sufficient! For this, our main technical tool is Lemma 5, which for any path \gamma decomposes the 1-chain c_\gamma into manageable pieces.

We call the resulting copy of \overline{X} embedded in Z_1(X,\mathbb{R}) a topological crystal.

Let’s see how it works in an example!

Take X to be this graph:

Since X has 3 edges, the space of 1-chains is 3-dimensional. Since X has 2 holes, the space of 1-cycles is a 2-dimensional plane in this 3-dimensional space. If we take paths \gamma in X starting at the red vertex, form the 1-chains c_\gamma, and project them down to this plane, we obtain the following picture:

Here the 1-chains c_\gamma are the white and red dots. These are the vertices of \overline{X}, while the line segments between them are the edges of \overline{X}. Projecting these vertices and edges onto the plane of 1-cycles, we obtain the topological crystal for X. The blue dots come from projecting the white dots onto the plane of 1-cycles, while the red dots already lie on this plane. The resulting topological crystal provides the pattern for graphene:

That’s all there is to the basic idea! But there’s a lot more to say about the mathematics it leads to, and a lot of fun examples to look at: diamonds, triamonds, hyperquartz and more.

by john (baez@math.ucr.edu) at July 24, 2016 02:39 PM

Tommaso Dorigo - Scientificblogging

The Daily Physics Problem - 2
As explained in the previous installment of this series, these questions are a warm-up for my younger colleagues, who will in two months have to pass a tough exam to become INFN researchers.
By the way, when I wrote the first question yesterday I thought I would not need to explain it in detail, but it just occurred to me that a disclaimer would be useful. Here it is:

read more

by Tommaso Dorigo at July 24, 2016 12:46 PM

July 23, 2016

Christian P. Robert - xi'an's og

on Monte Rosa [a failed attempt at speed climbing]

With my daughter Rachel and her friend Clément, we tried last week to bag a few summits in the Monte Rosa massif, which stands between Italy (Aosta) and Switzerland (Zermatt). I wanted to take advantage of the Bastille Day break and we drove from Paris to Aosta in the very early morning, stopping in Chamonix to rent shoes and crampons, and meeting with our guide Abele Blanc at noon, before going together to the hut Rifugio Città di Mantova. At 3500m. Our goal was to spend the night there and climb to Punta Gnifetti (Rifugio Margherita) and Zumstein the next morning. Before heading back to Paris in the evening. However, it did not work out that way as I got a slight bout of mountain sickness that left me migrainous, nauseous, and having a pretty bad night, despite great conditions at the hut… So (despite my intense training of the previous weeks!) I did not feel that great when we left the hut at 5am. The weather was fine if cold and windy, but after two hours of moderate climbing in a fairly pleasant crispy snow of a glacier, Rachel was too out of breath to continue and Abele realised my nose had [truly] frozen (I could not feel anything!) and took us down before continuing with Clément to both peaks. This was quite a disappointment as we had planned this trip over several months, but it was clearly for the best as my fingers were definitely close to frozen (with my worst case ever of screamin’ barfies on the way down!). And we thus spent the rest of the morning waiting for our friends, warming up with tea in the sunshine. Upon reflection, planning one extra day of acclimatisation to altitude and cold would have been more reasonable and keeping handwarmers in our backpacks as well… In any case, Clément made it to the top with Abele and we got a good altitude training for the incoming San Francisco half-marathon. Plus an epic hike the next day around Cogne.


Filed under: Kids, Mountains, pictures, Running, Travel Tagged: Abele Blanc, Alps, Aosta, Chamonix, Italy, Mont Blanc, Monte Rosa, Punta Gnifetti, Rifugio Città di Mantova, Rifugio Margherita, screaming barfies, sunrise

by xi'an at July 23, 2016 10:16 PM

Peter Coles - In the Dark

Cinque Torri

Today I was mainly occupied with a hike to the Cinque Torri,  a group of five towering peaks about 12 km from Cortina d’Ampezzo. It wasn’t too strenuous, although the recent rains had made some of the trail a bit slippery.

The route we followed included a visit to trenches, gun emplacements and other fortifications built during the First World War. There was heavy fighting in this area, because it was the border between Austria and Italy at that time.
It must have been very grim for the soldiers dug in above the tree line, especially during the winter.

Anyway here’s a quick snap of me about to pass through a gap in one of the rocky structures at the peak, just before we started to descend.



by telescoper at July 23, 2016 09:03 PM

Tommaso Dorigo - Scientificblogging

The Daily Physics Problem - 1
Today I wish to start a series of posts that are supposed to help my younger colleagues who will, in two months from now, compete for a position as INFN research scientists. 
The INFN has opened 73 new positions and the selection includes two written exams besides an evaluation of titles and an oral colloquium. The rules also say that the candidates will have to pass the written exams with a score of at least 140/200 on each, in order to access the oral colloquium. Of course, no information is given on how the tests will be graded, so 140 over 200 does not really mean much at this point.

read more

by Tommaso Dorigo at July 23, 2016 04:31 PM

July 22, 2016

Peter Coles - In the Dark

Cortina d’Ampezzo

So here I am, then, in a small hotel just outside Cortina d’Ampezzo in the Dolomites North of Venice.


The occasion for this trip is provided by the recent 60th birthday of John Peacock and an informal workshop organised by some friends. I was up at crazy o’clock this morning to get the plane to Venice and we’ve been dodging  thunderstorms all afternoon but I’m sure it will be a nice weekend.

Until later!


by telescoper at July 22, 2016 05:38 PM

astrobites - astro-ph reader's digest

UR #19: Outburst from an X-ray Binary

The undergrad research series is where we feature the research that you’re doing. If you’ve missed the previous installments, you can find them under the “Undergraduate Research” category here.

Are you doing an REU this summer? Were you working on an astro research project during this past school year? If you, too, have been working on a project that you want to share, we want to hear from you! Think you’re up to the challenge of describing your research carefully and clearly to a broad audience, in only one paragraph? Then send us a summary of it!

You can share what you’re doing by clicking here and using the form provided to submit a brief (fewer than 200 words) write-up of your work. The target audience is one familiar with astrophysics but not necessarily your specific subfield, so write clearly and try to avoid jargon. Feel free to also include either a visual regarding your research or else a photo of yourself.

We look forward to hearing from you!

************

Cormac Larkin
University College Cork

Cormac is entering his final year of high school in Cork, in the south of Ireland. Last summer he completed an internship in the physics department at University College Cork, where he worked on the following research project. Cormac is now the Managing Editor of the Young Scientists Journal and a Research Collaborator with Armagh Observatory working on data mining in the Small Magellanic Cloud in the search for new O-stars.

V-Band Photometry in V404 Cygni

V404 Cygni is a low-mass X-ray binary in the constellation Cygnus. The two stars comprising this system are an accretor (a black hole candidate or neutron star) and a donor star (a low-mass late type star). The accretor grows by accumulating matter from the donor star. Periodic outbursts of X-rays occur as mass is transferred from the donor to the accretor. The system underwent a period of outburst this summer, beginning on June 15th 2015. I performed V band photometry on the system in August to attempt to ascertain whether the system had returned to quiescence or not. I used the McDonald 1m telescope in Texas, owned and operated by Las Cumbres Observatory Global Telescope Network. My observation time was awarded to me by the Faulkes Telescope Network. Using the Aperture Photometry Tool, I found the V magnitude on August 12th to be 17.24, which was lower (and thus brighter) than the quiescent V magnitude averaging 18.3-18.4 but higher (and thus dimmer) than the peak V magnitude of 12.1. From the data I obtained, the system appeared to be still active, but was dimmer than when at peak activity. From this, I inferred that activity in V404 Cygni was dissipating but had not yet returned to quiescent levels. This work was presented in poster format at both the Irish National Astronomy Meeting 2015 and at the Young Scientists Journal Conference 2015, where it came in 3rd place overall.
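As a quick sanity check on the ‘lower magnitude means brighter’ statements above, the quoted V magnitudes can be converted into flux ratios with the standard relation F1/F2 = 10^(−0.4 (m1 − m2)). The snippet below is my own arithmetic, using the midpoint of the quoted quiescent range; it shows the August measurement sitting a few times above quiescence but roughly a hundred times below the outburst peak.

    # Convert the quoted V magnitudes into flux ratios.
    def flux_ratio(m1, m2):
        return 10 ** (-0.4 * (m1 - m2))

    m_obs, m_quiescent, m_peak = 17.24, 18.35, 12.1   # 18.35 = midpoint of the 18.3-18.4 range
    print(flux_ratio(m_obs, m_quiescent))  # ~2.8: still a few times brighter than quiescence
    print(flux_ratio(m_obs, m_peak))       # ~0.009: roughly a hundred times fainter than the peak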


The combined 5×60 second image used to measure the magnitude of V404 Cygni on August 12th 2015.

by Astrobites at July 22, 2016 03:07 PM

John Baez - Azimuth

Topological Crystals (Part 1)

A while back, we started talking about crystals:

• John Baez, Diamonds and triamonds, Azimuth, 11 April 2016.

In the comments on that post, a bunch of us worked on some puzzles connected to ‘topological crystallography’—a subject that blends graph theory, topology and mathematical crystallography. You can learn more about that subject here:

• Tosio Sunada, Crystals that nature might miss creating, Notices of the AMS 55 (2008), 208–215.

Greg Egan and I got so interested that we wrote a paper about it!

• John Baez and Greg Egan, Topological crystals.

I’ll explain the basic ideas in a series of posts here.

First, a few personal words.

I feel a bit guilty putting so much work into this paper when I should be developing network theory to the point where it does our planet some good. I seem to need a certain amount of beautiful pure math to stay sane. But this project did at least teach me a lot about the topology of graphs.

For those not in the know, applying homology theory to graphs might sound fancy and interesting. For people who have studied a reasonable amount of topology, it probably sounds easy and boring. The first homology of a graph of genus g is a free abelian group on g generators: it’s a complete invariant of connected graphs up to homotopy equivalence. Case closed!

But there’s actually more to it, because studying graphs up to homotopy equivalence kills most of the fun. When we’re studying networks in real life we need a more refined outlook on graphs. So some aspects of this project might pay off, someday, in ways that have nothing to do with crystallography. But right now I’ll just talk about it as a fun self-contained set of puzzles.

I’ll start by quickly sketching how to construct topological crystals, and illustrate it with the example of graphene, a 2-dimensional form of carbon:

I’ll precisely state our biggest result, which says when this construction gives a crystal where the atoms don’t bump into each other and the bonds between atoms don’t cross each other. Later I may come back and add detail, but for now you can find details in our paper.

Constructing topological crystals

The ‘maximal abelian cover’ of a graph plays a key role in Sunada’s work on topological crystallography. Just as the universal cover of a connected graph X has the fundamental group \pi_1(X) as its group of deck transformations, the maximal abelian cover, denoted \overline{X}, has the abelianization of \pi_1(X) as its group of deck transformations. It thus covers every other connected cover of X whose group of deck transformations is abelian. Since the abelianization of \pi_1(X) is the first homology group H_1(X,\mathbb{Z}), there is a close connection between the maximal abelian cover and homology theory.

In our paper, Greg and I prove that for a large class of graphs, the maximal abelian cover can naturally be embedded in the vector space H_1(X,\mathbb{R}). We call this embedded copy of \overline{X} a ‘topological crystal’. The symmetries of the original graph can be lifted to symmetries of its topological crystal, but the topological crystal also has an n-dimensional lattice of translational symmetries. In 2- and 3-dimensional examples, the topological crystal can serve as the blueprint for an actual crystal, with atoms at the vertices and bonds along the edges.

The general construction of topological crystals was developed by Kotani and Sunada, and later by Eon. Sunada uses ‘topological crystal’ for an even more general concept, but we only need a special case.

Here’s how it works. We start with a graph X. This has a space C_0(X,\mathbb{R}) of 0-chains, which are formal linear combinations of vertices, and a space C_1(X,\mathbb{R}) of 1-chains, which are formal linear combinations of edges. There is a boundary operator

\partial \colon C_1(X,\mathbb{R}) \to C_0(X,\mathbb{R})

This is the linear operator sending any edge to the difference of its two endpoints. The kernel of this operator is called the space of 1-cycles, Z_1(X,\mathbb{R}). There is an inner product on the space of 1-chains such that edges form an orthonormal basis. This determines an orthogonal projection

\pi \colon C_1(X,\mathbb{R}) \to Z_1(X,\mathbb{R})

For a graph, Z_1(X,\mathbb{R}) is isomorphic to the first homology group H_1(X,\mathbb{R}). So, to obtain the topological crystal of X, we need only embed its maximal abelian cover \overline{X} in Z_1(X,\mathbb{R}). We do this by embedding \overline{X} in C_1(X,\mathbb{R}) and then projecting it down via \pi.

To accomplish this, we need to fix a basepoint for X. Each path \gamma in X starting at this basepoint determines a 1-chain c_\gamma. These 1-chains correspond to the vertices of \overline{X}. The graph \overline{X} has an edge from c_\gamma to c_{\gamma'} whenever the path \gamma' is obtained by adding an extra edge to \gamma. This edge is a straight line segment from the point c_\gamma to the point c_{\gamma'}.

The hard part is checking that the projection \pi maps this copy of \overline{X} into Z_1(X,\mathbb{R}) in a one-to-one manner. In Theorems 6 and 7 of our paper we prove that this happens precisely when the graph X has no ‘bridges’: that is, edges whose removal would disconnect X.

Kotani and Sunada noted that this condition is necessary. That’s actually pretty easy to see. The challenge was to show that it’s sufficient! For this, our main technical tool is Lemma 5, which for any path \gamma decomposes the 1-chain c_\gamma into manageable pieces.

We call the resulting copy of \overline{X} embedded in Z_1(X,\mathbb{R}) a topological crystal.

Let’s see how it works in an example!

Take X to be this graph:

Since X has 3 edges, the space of 1-chains is 3-dimensional. Since X has 2 holes, the space of 1-cycles is a 2-dimensional plane in this 3-dimensional space. If we consider paths \gamma in X starting at the red vertex, form the 1-chains c_\gamma, and project them down to this plane, we obtain the following picture:

Here the 1-chains c_\gamma are the white and red dots. These are the vertices of \overline{X}, while the line segments between them are the edges of \overline{X}. Projecting these vertices and edges onto the plane of 1-cycles, we obtain the topological crystal for X. The blue dots come from projecting the white dots onto the plane of 1-cycles, while the red dots already lie on this plane. The resulting topological crystal provides the pattern for graphene:
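For readers who like to experiment, here is a short numpy sketch (mine, not code from the paper) that carries out this recipe numerically for the graph above: enumerate edge paths from the red basepoint, record their 1-chains in R^3, and project onto the plane of 1-cycles. Plotting the projected points reproduces the honeycomb pattern of graphene just described.

    import numpy as np

    # Orient all three edges of the graph from the red basepoint (vertex 0) to vertex 1.
    EDGES = [(0, 1), (0, 1), (0, 1)]

    def cover_vertices(depth):
        # 1-chains c_gamma of all edge paths of length <= depth starting at the basepoint:
        # these are the vertices of the maximal abelian cover, sitting inside the
        # 3-dimensional space of 1-chains.
        frontier = {(0.0, 0.0, 0.0): 0}          # chain (as a tuple) -> endpoint vertex
        seen = dict(frontier)
        for _ in range(depth):
            new_frontier = {}
            for chain, v in frontier.items():
                for j, (s, t) in enumerate(EDGES):
                    if v == s:                   # traverse edge j forwards
                        c, nxt = list(chain), t
                        c[j] += 1.0
                    elif v == t:                 # traverse edge j backwards
                        c, nxt = list(chain), s
                        c[j] -= 1.0
                    else:
                        continue
                    key = tuple(c)
                    if key not in seen:
                        seen[key] = nxt
                        new_frontier[key] = nxt
            frontier = new_frontier
        return np.array(list(seen))

    def project_to_cycles(chains):
        # Orthogonal projection onto the plane of 1-cycles {x : x1 + x2 + x3 = 0},
        # the kernel of the boundary operator for this graph.
        n = np.ones(3) / np.sqrt(3.0)
        return chains - np.outer(chains @ n, n)

    points = project_to_cycles(cover_vertices(depth=6))
    print(points.shape)   # these points lie in the cycle plane and trace out the honeycomb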

That’s all there is to the basic idea! But there’s a lot more to say about it, and a lot of fun examples to look at: diamonds, triamonds, hyperquartz and more.


by John Baez at July 22, 2016 08:08 AM

July 21, 2016

Emily Lakdawalla - The Planetary Society Blog

The Planetary Society at San Diego Comic-Con
Whether or not you're attending San Diego Comic-Con, you can enjoy a discussion panel with Emily Lakdawalla and five science fiction authors about the future of science fiction in the context of today's amazing scientific advances.

July 21, 2016 11:04 PM

Peter Coles - In the Dark

Graduation and Beyond

I’ve found a few pictures of this week’s  graduation ceremony for the School of Mathematical and Physical Sciences at the University of Sussex, at which I had the pleasure of presenting the graduands. These are taken without permission from facebook posts!

Graduation ceremonies are funny things. With all their costumes and weird traditions, they even seem a bit absurd. On the other hand, even in these modern times, we live with all kinds of rituals and I don’t see why we shouldn’t celebrate academic achievement in this way. I love graduation ceremonies, actually. As the graduands go across the stage you realize that every one of them has a unique story to tell and a whole universe of possibilities in front of them. How their lives will unfold no-one can tell, but it’s a privilege to be there for one important milestone on their journey. Getting to read their names out is quite stressful – it may not seem like it, but I do spend quite a lot of time fretting about the correct pronunciation of the names. It’s also a bit strange in some cases finally to put a name to a face that I’ve seen around the place regularly, just before they leave the University for good. I always find this a bittersweet occasion. There’s joy and celebration, of course, but tempered by the realisation that many of the young people who you’ve seen around for three or four years, and whose faces you have grown accustomed to, will disappear into the big wide world never to be seen again. On the other hand, this year a large number of MPS graduates are going on to do PhDs – including two who are moving to Cardiff! – so they won’t all vanish without trace!



That’s me in the front row just to the left of the Mayor, in case you didn’t realise. It was very hot with all that graduation clobber on – in fact it was over 30 degrees. Waiting for the official photographs outside in the gardens was a rather sweaty experience.


Graduation of course isn’t just about dressing up. Nor is it only about recognising academic achievement. It’s also a rite of passage on the way to adulthood and independence, so the presence of the parents at the ceremony adds another emotional dimension to the goings-on. Although everyone is rightly proud of the achievement – either their own in the case of the graduands or that of others in the case of the guests – there’s also a bit of sadness to go with the goodbyes. It always seems that as a lecturer you are only just getting to know students by the time they graduate, but that’s enough to miss them when they go.

Anyway, all this is a roundabout way of saying congratulations once more to everyone who graduated on Tuesday, and I wish you all the very best for the future!


by telescoper at July 21, 2016 03:56 PM

astrobites - astro-ph reader's digest

Mass Loss in Dying Stars

Title: Pulsation-Triggered Mass Loss From AGB Stars: The 60-Day Critical Period

Authors: Iain McDonald and Albert Zijlstra

First Author’s Institution: Jodrell Bank Centre for Astrophysics

Status: Accepted to ApJ Letters

Background

Perhaps you’ve heard that four billion years from now, the Sun will grow into a red giant with a radius the size of Earth’s orbit before eventually shrinking into a white dwarf about the size of Earth itself. Besides being very small, the resulting white dwarf will probably only have half of the original mass of the Sun. Where does that lost mass go?


Figure 1: An HR Diagram showing the main sequence, red giant branch, horizontal branch, and asymptotic giant branch. The horizontal axis indicates the temperature, while the vertical axis indicates the luminosity. The arrow traces out the path the star would take after leaving the main sequence. From http://www.astronomy.ohio-state.edu/~pogge/ .

During a star’s post-main-sequence (MS) evolution, it will lose much of its starting mass through stellar winds. Currently, the Sun is constantly losing mass through solar winds—material that is being ejected from its surface—but when the Sun leaves the MS and reaches the red giant branch (RGB), these solar winds will become even stronger. After the end of the RGB phase, the Sun will continue to evolve until it reaches the asymptotic giant branch (AGB)—so named because it will then asymptotically approach the same location on the Hertzsprung-Russell diagram that it does as an RGB star (see Figure 1 for an example). AGB stars have even stronger stellar winds, meaning they are losing mass at an even more rapid rate than RGB stars. It is thought that much of a star’s mass loss happens when it is on the RGB and AGB. In addition, all of this excess material being blown off of the star means that AGB stars are often surrounded by a lot of dust.

Exactly what really drives this process, however, is not something that we understand very well. Today’s astrobite discusses some of the possible mechanisms for stellar mass loss in AGB stars, particularly the role that pulsation plays in mass loss.

Stars can pulsate in a variety of different pulsational modes. The fundamental mode is probably what you imagine when you think of stellar pulsation—all of the star is moving radially in the same direction. However, if the star has radial nodes, different parts of the star move in different directions at the same time (sort of like the nodes of a pipe). We call these pulsational modes overtone modes, and the type of overtone mode (first, second, third, etc.) tells you the number of nodes that exist in the star.

Mass Loss above the 60-Day Critical Pulsational Period


Figure 2: Figure 1 from the paper, which shows the dust excess (given by K-[22] color) on the vertical axis plotted against period in days on the horizontal axis. The dotted horizontal line marks the authors’ criterion for ‘substantial dust excess’. The red circles show period data taken from Tabur (2009), the green squares from the International Variable Star Index, and the blue triangles from the General Catalogue of Variable Stars. Smaller light blue triangles indicate the stars for which they had GCVS data, but could not detect with Hipparcos. Starting at a period of 60 days, there is an increased number of stars with greater dust excess than their criterion. There is another increase at about 300 days.

Most previous studies of the effects of pulsation on mass loss have focused on stars with pulsational periods greater than 300 days, because both observation and theory have shown that to be when stars have the greatest dust production and highest mass-loss rate. However, a less-studied 60-day ‘critical period’ in the increase of dust production has also been noted.

Mass loss in RGB and AGB stars seems to increase at a period of 60 days. Both RGB stars and AGB stars can pulsate (in fact, there is evidence that all stars pulsate…if only we could study them well enough to see it), but the authors find that despite inhabiting roughly the same area on the HR diagram, the 60-day period stars with strong mass loss appear to only be AGB stars and not RGB stars. This 60-day period also happens to correspond with roughly the point when AGB stars transition from second and third overtone pulsation to the first overtone pulsation mode. Additional nodes will also result in lower pulsational amplitude (smaller change in brightness and radius over one period) for the star, leading AGB stars to have bigger amplitudes at this point. RGB stars seem to pulse only in the second and third overtone modes, which most likely explains why they produce so much less dust and experience less mass loss at the same period as their AGB star counterparts.


Figure 3: This is part of Figure 2 from the paper, showing amplitude in the V-band plotted against period. In both subplots, the darker colored circles are stars with substantial dust excess, and the lighter colored circles are stars without substantial dust excess. This seems to suggest that greater dust excess corresponds with greater amplitude. Greater amplitude also usually indicates fewer radial nodes. The 60 and 300-day increase in dust productions are also visible in both plots.

The relationship between pulsation period and infrared excess, which the authors use as a proxy for the amount of dust the star is producing, is shown in Figure 2. From this figure, we can see that at periods longer than 60 days, there appear to be more stars that are producing dust above their criterion for substantial dust excess. Figure 3 shows period-amplitude diagrams, where the pulsational amplitude is plotted against the pulsational period (the amplitude being suggestive of the mode of pulsation). From this diagram, we can see that the stars with less dust production appear to also have lower amplitudes of pulsation. Together, these support the hypothesis that the pulsational mode plays a critical role in producing dust and driving mass loss. These results also confirm the increase in mass loss at 300 days, which roughly corresponds with stars transitioning from the first overtone pulsation to the fundamental mode.

Conclusion

So what’s next? Well, as you might expect, the follow-up to science is usually…more science! The authors point out that further study will be necessary in order to get conclusive evidence for exactly what role this critical period serves and how pulsational mode can affect it. Is it really a change in the stellar mass-loss rate, or is the stellar wind pre-existing, with the 60-day period just coinciding with an increase in dust condensation? Similar studies focusing on stars with different metallicities will also be a good check to see whether these critical periods are universal.

by Caroline Huang at July 21, 2016 01:08 PM

Symmetrybreaking - Fermilab/SLAC

Dark matter evades most sensitive detector

In its final run, the LUX experiment increased its sensitivity four-fold, but dark matter remains elusive. 

After completing its final run, scientists on the Large Underground Xenon (LUX) experiment announced they have found no trace of dark matter particles.

The new data, which were collected over more than 300 days from October 2014 to May 2016, improved the experiment’s previous sensitivity four-fold.

“We built an experiment that has delivered world-leading sensitivity in multiple new results over the last three years,” says Brown University’s Rick Gaitskell, co-spokesperson for the LUX collaboration. “We gave dark matter every opportunity to show up in our experiment, but it chose not to.”

Although the LUX scientists haven’t found WIMPs, their results allow them to exclude many theoretical models for what these particles could have been, narrowing down future dark matter searches with other experiments.

“I’m very proud of what we’ve accomplished,” says LUX co-founder Tom Shutt from the Kavli Institute for Particle Astrophysics and Cosmology, a joint institute of Stanford University and the Department of Energy’s SLAC National Accelerator Laboratory. “The experiment performed even better than initially planned, and we set a new standard as to how well we can take measurements, calibrate the detector and determine its background signals.”

Scientists have yet to directly detect dark matter, but they have seen indirect evidence of its existence in astronomical studies.

Located one mile underground at the Sanford Underground Research Facility in South Dakota, LUX had been searching since 2012 for what are called weakly interacting massive particles, or WIMPs. These hypothetical particles are top contenders to be the building blocks of dark matter, but their existence has yet to be demonstrated.

WIMPs are believed to barely interact with normal matter other than through gravity. However, researchers had hoped to detect their rare collisions with LUX’s detector material—a third of a ton of liquid xenon.

With the latest gain in sensitivity, LUX has “enabled us to probe dark matter candidates that would produce signals of only a few events per century in a kilogram of xenon,” says Aaron Manalaysay, the analysis working group coordinator of the LUX experiment from the University of California, Davis. Manalaysay presented the new results today at IDM2016, an international dark matter conference in Sheffield in the UK.

After LUX was first proposed in 2007, it became an R&D activity with limited funding and only a handful of participating groups. Over the years it grew from a detector that included parts bought for a few bucks on eBay into a major project involving researchers from 20 universities and national labs in the US, the UK and Portugal.

Over the next months, the LUX experiment will be decommissioned to make room for its successor experiment. The next-generation LUX-ZEPLIN (LZ) detector will use 10 tons of liquid xenon and will be 100 times more sensitive to WIMPs.

“LZ is based on lessons learned from LUX,” Shutt says. “It has been a great advantage to have LUX collect data while designing the new experiment, and some of LZ’s new features are enabled through our experience with LUX.”

Once LZ turns on in 2020, researchers will have another big shot at finding mysterious WIMPs.

by Manuel Gnida at July 21, 2016 01:00 PM

Tommaso Dorigo - Scientificblogging

Book News And A Clip
My book "Anomaly! - Collider Physics and the Quest for New Phenomena at Fermilab" is slowly getting its finishing touches, as the second round of proofreading draws to a close. The book is scheduled to appear in bookstores on November 5th, and it makes sense to start planning some events for its presentation.
One such event will take place at the CERN library on November 29th, at 4PM. I am told that CERN already ordered the book to sell it in its bookshop, so it will be good to present the work to the community there - after all, the book is for everybody but I expect that it can be of higher interest to scientists and people in some way connected to research in High-Energy physics.

read more

by Tommaso Dorigo at July 21, 2016 12:58 PM

Christian P. Robert - xi'an's og

freedom of speech in Turkey

“EUA condemns strongly and unconditionally this action against universities and university staff, and expresses its heartfelt support for the higher education community in Turkey at this time.”

Following the failed attempt at a military coup in Turkey last week, Erdoğan’s government has sacked a huge number of public workers, including all Deans of Turkey’s universities and 15,200 education staff so far. Plus barring all academics from travelling abroad. Although Erdoğan’s government has been democratically elected and while the Turkish people’s actions against the military coup led it to fail, the current purge of the public sector does not proceed from democratic principles and the current Turkish constitution and laws. Further, it sounds like the crackdown is aimed at all forms of opposition rather than at those responsible for the coup, as illustrated by the closure of websites like WikiLeaks, journals and other media.


Filed under: Travel, University life Tagged: Amnesty International, EUA, freedom of expression, military coup, Turkey, WikiLeaks

by xi'an at July 21, 2016 12:18 PM

Peter Coles - In the Dark

Minor Swing (for the National Day of Belgium)!

Not far from the hotel in which I stayed during my visit to Ghent last week is a small but pleasant jazz bar called Minor Swing. I mentioned to some colleagues as we passed by the place that it was clearly named after the tune by Django Reinhardt (who was born in Belgium). In fact it was something of a signature tune for him. Anyway, Radio 3 reminded me this morning that today (21st July)  is Belgian National Day so I thought I’d mark the occasion on this blog by posting a version of Minor Swing that demonstrates Django’s superlative gift for melodic improvisation, together with violinist Stephane Grappelli and the Quintet of the Hot Club of France.


by telescoper at July 21, 2016 10:49 AM

July 20, 2016

Peter Coles - In the Dark

Let’s talk about the Black Bird

For those of you who haven’t seen the Maltese Falcon, here’s my favourite scene from the film. Everything about this is just right: perfect dialogue (from the novel by Dashiell Hammett, adapted by director John Huston), perfect acting (Humphrey Bogart and Sidney Greenstreet), and perfect lighting and camera work (credit the great cinematographer, Arthur Edeson). This film is 75 years old this year but I don’t think it has dated at all!

 

 


by telescoper at July 20, 2016 05:48 PM

Emily Lakdawalla - The Planetary Society Blog

Multimedia recap: Two launches, a landing, a docking, and a berthing
Four days of cargo craft mania came to a close at the International Space Station this morning, as astronauts Kate Rubins and Jeff Williams snagged an approaching SpaceX Dragon vehicle and berthed it to the laboratory's Harmony module.

July 20, 2016 05:35 PM

astrobites - astro-ph reader's digest

A Planet Living on the Edge

Title:  Direct Imaging Discovery of a Jovian Exoplanet Within a Triple Star System

Authors: Kevin Wagner, Dániel Apai, Markus Kasper, Kaitlin Kratter, Melissa McClure, Massimo Robberto, and Jean-Luc Beuzit

First Author’s Institution: University of Arizona

Status: Published in Science

There’s a tug-of-war in the HD 131399 system.  A planet, HD 131399Ab, is being pulled in two directions.  On one side is the massive star HD 131399A.  On the other is a pair of smaller stars HD 131399B and HD 131399C.  The more massive HD 131399A is winning, but the battle is the most evenly matched that has ever been observed.  The planet’s orbit is just barely stable.  Orbiting far away from its primary host, it could be sent crashing inwards or tossed out of the system altogether.

The System:

HD 131399 is a triple star system.  The most massive star in the system, HD 131399A, is a hot A1 star about 1.8 times the mass of the Sun.  About 350 AU away from it (1 AU = the distance from Earth to the Sun) is a pair of smaller stars, one the mass of our Sun and the other about 40% less massive than the Sun.  The primary star and the pair of smaller stars are gravitationally bound and orbit each other once every 3500 years or so.  The system is located in the Upper Centaurus-Lupus association, which implies an age of the system of 16 Myr (million years).
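A quick consistency check on the quoted ~3500-year orbit, using Kepler’s third law in solar units (P in years, a in AU, masses in solar masses) with the approximate masses given above:

    # Kepler's third law, P^2 = a^3 / M_total, in years, AU and solar masses.
    a_AU = 350.0                      # separation of the primary from the BC pair
    m_total = 1.8 + 1.0 + 0.6         # approximate masses quoted in the text
    period_yr = (a_AU ** 3 / m_total) ** 0.5
    print(round(period_yr))           # ~3500 years, consistent with the quoted period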

Between them all, about 82 AU away from the primary star, is a gas giant planet four times the mass of Jupiter discovered via direct imaging using near-infrared observations with the Very Large Telescope.   Figure 1 shows a schematic of the system.  This planet is only detectable because of the extremely young age of the system.  The light seen from the planet isn’t reflected from its suns, but rather, it comes from the planet itself as it cools from its initial, hot formation stage.  Its internal heat means that the gas giant’s “surface” is a blistering 850 K.  Its brightness and high temperature also allowed the authors to take a spectrum of HD 131399Ab, for which they found an atmosphere filled with methane and water.


Figure 1: Left: one possible orbital configuration for the HD 131399 system with ‘A’ as the primary and ‘B’ and ‘C’ as the distant stellar pair. Right: a comparison to the solar system.

Orbital Characterization:

Finding a planet in a multiple star system in itself isn’t too unusual.  There are several known examples (also see this).   This system, though, is a bit unusual.  HD 131399Ab is the widest exoplanet ever discovered in a triple star system.  Solar systems with more than one body are frequently chaotic.  Even our own solar system could lose Mercury in a few billion years.  The existence of multiple massive bodies such as stars only increases potential instability.  For a hypothetical planet in close orbit around the primary star, HD 131399A, the effects of the distant pair of stars would be negligible.  However, as you move the planet farther and farther away from the primary, the gravitational perturbations from the distant pair of stars grows larger.

If the semi-major axis of the planet’s orbit is greater than about 1/3 the semi-major axis of the stellar system, the planet’s orbit is likely unstable.  HD 131399Ab is the closest known planet to this instability criterion.  The ratio for HD 131399Ab is in the range 0.14-0.38 (see Figure 2), which means that there is a possibility that the planet could be on an unstable orbit.  The authors test the stability of the system under a range of possible orbital configurations and find that, although it remains possible, an unstable orbit is an unlikely situation despite the young age of the system.
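Plugging the separations quoted above into this criterion shows how close HD 131399Ab sits to the boundary; the 0.14-0.38 range quoted in the paper reflects the uncertainties in both orbits, which this nominal estimate ignores.

    a_planet = 82.0                   # AU, planet's separation from the primary
    a_stars = 350.0                   # AU, separation of the primary from the BC pair
    ratio = a_planet / a_stars
    print(round(ratio, 2))            # ~0.23, inside the quoted 0.14-0.38 range
    print(ratio > 1.0 / 3.0)          # False: the nominal value sits below the ~1/3 criterion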

Wagner et al. (2016).

Figure 2: The ratio of planet semi-major axis to the semi-major axis of the stars for a variety of planets. If the ratio is greater than ~1/3, the system is likely to be unstable. HD 131399Ab is the closest known planet to the instability criterion with a ratio between 0.14 and 0.38.

Because the distant pair of stars likely inhibited planet formation at the planet’s current distance, it is unlikely to have formed there.  They outline three scenarios for the planet’s formation.  Scenario A:  the planet formed close to the primary and was scattered outwards.  This requires a highly eccentric orbit and another massive planet on a short-period orbit around the primary.  Scenario B: the planet formed as a circumbinary planet around the distant pair and was scattered onto its current orbit around the more massive star. This also requires a highly eccentric orbit.  Scenario C:  the planet formed anywhere in the system, but then the stellar system underwent significant orbital evolution to its present-day configuration.

Summary:

HD 131399Ab is one of the lowest mass and coldest planets discovered via direct imaging, and it lives in a young, dynamically active triple star system.  It has the widest known orbit of any planet in a triple star system, its orbit skimming the instability criterion.  The existence of this planet demonstrates the ability for planets to live on the edge.

by Joseph Schmitt at July 20, 2016 05:09 PM

July 19, 2016

Lubos Motl - string vacua and pheno

CMS in the \(ZZ\) channel: 3-4 sigma evidence in favor of a \(650\GeV\) boson
Today, the CMS collaboration has revealed one of the strongest deviations from the Standard Model in quite some time in the paper
Search for diboson resonances in the semileptonic \(X \to ZV \to \ell^+\ell^- q\bar q\) final state at \(\sqrt{s} = 13\TeV\) with CMS
On page 21, Figure 12, you see the Brazilian charts.




In the channel where a resonance decays to a \(ZZ\) pair and one \(Z\) decays to a quark-antiquark pair and the other \(Z\)-boson to a lepton-antilepton pair (semileptonic decays), CMS folks used two different methods to search for low-mass and high-mass particles.




In the high-mass search – which contributed to the bottom part of Figure 12 – they saw a locally 2-sigma excess indicating a resonance around \(1000\GeV\), possibly compatible with the new \(\gamma\gamma\) resonance near \(975\GeV\) that appeared in some new rumors about the 2016 data.

More impressively, the low-mass search revealed a locally 3.4 or 3.9 sigma excess in the search for a Randall-Sundrum or "bulk graviton" (I won't explain the differences between the two models because I don't know the details and I don't think one should take a particular interpretation too seriously) of mass \(650\GeV\). The "bulk graviton" excess is the stronger one.

Even when the look-elsewhere effect (over the \(550-1400\GeV\) range) is taken into account, as conclusions on Page 22 point out, the deviation is still 2.9 or 3.5 sigma, respectively. That's pretty strong.
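For readers who prefer p-values to sigmas, the quoted local and global significances translate as follows under the usual one-sided Gaussian convention (my conversion, not a number taken from the CMS paper):

    from scipy.stats import norm

    # One-sided Gaussian tail probability corresponding to a quoted significance.
    for sigma in (2.9, 3.4, 3.5, 3.9):
        print(f"{sigma} sigma -> p = {norm.sf(sigma):.1e}")
    # 2.9 sigma ~ 1.9e-3, 3.4 sigma ~ 3.4e-4, 3.5 sigma ~ 2.3e-4, 3.9 sigma ~ 4.8e-5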

We may now wait to see whether ATLAS sees something similar. (Update: in the comments, a paper with a disappointing 2-sigma deficit at that place is shown instead.) CMS has only used 2.7 inverse femtobarns of the 2015 data in this analysis. Obviously, if the excess at \(650\GeV\) were real, the particle could already be safely discovered in the data that CMS has already collected in 2015-2016 – about 18 inverse femtobarns.

Concerning the \(650\GeV\) mass, in 2014, CMS also saw a 2.5-sigma hint of a leptoquark of that mass (more on those particles). Note that the leptoquarks carry very different charges (quantum numbers) than the "bulk graviton".

The previous CMS papers on the same channel but based on the 2012 data were these two: low-mass, high-mass strategies. I think that there was no evidence in favor of a similar hypothesis in those older papers. That also seems to be true for the analogous ATLAS paper using the 2012 data.

by Luboš Motl (noreply@blogger.com) at July 19, 2016 03:16 PM

astrobites - astro-ph reader's digest

How black holes and mergers can kill a galaxy

Title: How to quench a galaxy
Authors: Andrew Pontzen, Michael Tremmel, Nina Roth et al.
First Author’s Institution: Department of Physics and Astronomy, University College London
Paper Status: Submitted to MNRAS

Galaxies are stellar neighborhoods of young and old stars. However, these galactic enclaves are only for exclusive members based on age. Some are spattered with young stellar populations and bursting with active star formation; these galaxies are typically blue in color, such as our own Milky Way.  Some are run by retired veteran stars; these galaxies are “red and dead”, as star formation has been shut off — or quenched, if you want to be fancy. What causes star formation to shut off in these red and dead galaxies has been a long-standing cosmic riddle. Since the building blocks of stars are cold molecular gas, some mechanisms are thought to drive these gases out of the galactic vicinity.

How about winds from supernovae (SNe) or massive stars, collectively known as stellar feedback? Various research (see this bite, for instance) has shown that stellar feedback may help regulate star formation in low-mass (< 10¹² Msun) and low-luminosity galaxies. These galactic winds are powerful enough to drive materials out of these pee-wee galaxies. However, stellar feedback becomes increasingly ineffective in higher-mass (> 10¹² Msun) galaxies that have more gravity to hold on to their gases. Galaxy mergers can lend a helping hand, by stripping an infalling galaxy of its gas supply and inducing intense starbursts (thus consuming gas and causing stellar feedback). However, merger+stellar feedback quenches a galaxy much more slowly than observed, a hint that another feedback is in action.

Active galactic nuclei, which are black holes (BH) at the centers of galaxies activated by accretion of matter, can drive rapid outflows such that their immediate environments are too hot or too devoid of gas to form stars. This is AGN feedback. The authors of this paper investigated how mergers and AGN feedback cooperate to quench star formation, by simulating a high-mass  (10¹² Msun)  redshift z=2 galaxy in three different merger scenarios: enhanced merger, suppressed merger, and the original “reference” merger. Enhanced merger is achieved by increasing the mass of the infalling object while suppressed merger is achieved through a series of small accretion events. The authors used a method to fix the local environment and arrive at the same final galaxy mass despite the different merger histories, thereby isolating the specific role of AGN and merger in the quenching process.

Figure 1 shows the simulated galaxy at z=2.3 for the BH+SNe and SNe-only cases in the three different merger scenarios. For the SNe-only case in all three merger scenarios, the simulated galaxy appears blue with a central bar/bulge, while the galaxy appears more quiescent, red, and elliptical for the BH+SNe case. These galaxy portraits suggest that star formation has ceased in the reference and enhanced merger scenarios for the BH+SNe case while star formation is still actively underway for the SNe-only case regardless of merger scenario. This is more concretely shown in Figure 2, which tracks the galaxy’s specific star formation rate (=star formation rate/stellar mass) over time. The galaxy is said to be quenched when its specific star formation rate falls below the horizontal line. The top panel shows that the enhanced-merger scenario (red) quenches permanently, the reference merger  (black) quenches temporarily, and the suppressed-merger (blue) never quenches for the BH+SNe case, alluding to the importance of mergers in the quenching process.
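As a concrete reading of the quenching criterion used in Figure 2 (see its caption below), a galaxy counts as quenched once its specific star formation rate falls below roughly 2×10⁻¹⁰ per year. The toy check below uses purely illustrative numbers of my own, not values from the paper.

    # Quenching criterion from the Figure 2 caption: sSFR = SFR / M_star below ~2e-10 per year.
    QUENCH_THRESHOLD = 2e-10                       # yr^-1

    def is_quenched(sfr_msun_per_yr, stellar_mass_msun):
        return sfr_msun_per_yr / stellar_mass_msun < QUENCH_THRESHOLD

    # Illustrative numbers only:
    print(is_quenched(10.0, 1e10))   # False: sSFR = 1e-9 per year, still star-forming
    print(is_quenched(0.1, 1e10))    # True:  sSFR = 1e-11 per year, quenched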

fig1

Fig. 1 – The simulated galaxy in IVU wavelengths at z=2.3. Each column corresponds to one of the three merger scenarios: suppressed, reference, and enhanced. The top row is for SNe-only, while the bottom row is for BH+SNe. Notice the difference in appearance between the BH+SNe reference and enhanced merger panels and all the other panels. [Adapted from Figure 1 of the paper]

fig2

Fig. 2 – Specific star formation rate (sSFR) as a function of redshift and time. The top panel is for BH+SNe, while the middle panel is for SNe-only. The bottom panel is the sSFR ratio between the BH+SNe and SNe-only simulations. The black line is the reference merger, the blue line the suppressed merger, and the red line the enhanced merger scenario. The gray band marks when the galaxy is on the star-forming main sequence. When the galaxy’s specific star formation rate drops below the gray line labeled “UVJ-quenched” (i.e. 2×10⁻¹⁰ per year), the galaxy is defined as quenched. [Figure 3 of the paper]

Mergers are AGNs’ wing-men, so to speak. They initiate the quenching process by disrupting the gaseous disk around the AGN. With no disk to confine the AGN’s rapid outflows and shield the star-forming regions, AGN feedback becomes more effective, pushing the galaxy into a long-term quiescent state. In the suppressed merger scenario, the gaseous disk surrounding the AGN limits its effect by directing the outflow into a funnel perpendicular to the disk, leaving star formation uninterrupted. The presence of the AGN is also crucial in maintaining the galaxy’s quenched state: when the authors manually turned off the AGN in the enhanced-merger BH+SNe case, at the moment the galaxy first quenched, star formation quickly re-established itself, as shown in Figure 3.

It appears that mergers and AGN feedback work together synergistically to quench a high-mass galaxy. While AGN feedback is essential, stellar feedback is negligible, as outflows from supernova-driven winds struggle to escape the galaxy as mass increases. Alas, it all comes down to a race against gravity…! For more ways to quench your star-formation thirst, check out these related astrobites (1, 2, 3).

fig3

Fig. 3 – Specific star formation rate (sSFR) as a function of time/redshift when the AGN is turned off at z=3 (black line). Compared to when the AGN is still “alive” (red line), the death of the AGN results in the galaxy restarting its star formation and eventually joining the main-sequence (gray band). [Figure 7 of the paper]

by Suk Sien Tie at July 19, 2016 01:40 PM

July 18, 2016

astrobites - astro-ph reader's digest

Deaths of the Smallest Galaxies: Gas Stripping?

Title:  Under Pressure: Quenching Star-Formation in Low-Mass Satellite Galaxies via Stripping
Authors:  S. P. Fillingham, M. C. Cooper, A. B. Pace, M. Boylan-Kolchin, J. S. Bullock, S. Garrison-Kimmel, C. Wheeler
First Author’s Institution:  Center for Cosmology, Department of Physics & Astronomy, University of California, Irvine, CA
Status:  Submitted to MNRAS

 

 

It’s tough to be a little galaxy.  Even back in the early universe, when galaxies went berserk and formed a prodigious number of stars—enough so their light ionized all the hydrogen gas in the universe—your gas could have been heated up, preventing you from forming stars.  Should you survive an early death, you still risk passing into the vicinity of a massive galaxy, where you could readily be shredded into oblivion, with nothing but a trail of stars left to mark the path you took as you were torn apart.  Long before your dramatic exit, you’d find yourself shining less and less brightly—the massive galaxy you orbit will starve and strip you of the gas you need to form new stars and stay bright.  And as if these things weren’t bad enough, you’re often called one of the “vermin” of the universe because there are so many galaxies like you.

The less mass you have, the more susceptible you are to death.  Among the dwarf galaxies orbiting the Milky Way—which has a mass that’s 10¹² times the mass of the sun, or 10¹² M⊙ for short—those that are 10⁸ M⊙ or smaller tend to have no gas and form no new stars, a type of galaxy death astronomers call quenching.  The authors of today’s paper tackle this problem head on, looking at one possible cause of death: stripping.  The gas within you can be blasted out by the hot wind you’d encounter when you enter the Milky Way, a process that’s called ram-pressure stripping.  In addition, the movement of the fast-moving hot, diffuse gas of the Milky Way against the relatively slow-moving, cool, dense gas within you can cause your gas to billow and plume out due to gas instabilities, causing you to lose even more gas—a process called viscous stripping.
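Before looking at the authors' results, it helps to see the ram-pressure condition in numbers. Below is a rough, hedged sketch based on the classic Gunn & Gott criterion, not the authors' actual calculation; every input (halo density, infall speed, surface densities) is an assumed, order-of-magnitude value for a small dwarf moving through the Milky Way's hot halo.

```python
# A rough, illustrative estimate of ram-pressure stripping using the classic
# Gunn & Gott criterion -- NOT the authors' calculation. Gas is stripped where
# the ram pressure of the host's hot halo exceeds the dwarf's gravitational
# restoring force per unit area. All numbers below are assumed, order-of-
# magnitude values for a small dwarf passing through the Milky Way's halo.
import math

G = 6.674e-11                 # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30              # solar mass, kg
pc = 3.086e16                 # parsec, m
m_p = 1.67e-27                # proton mass, kg

rho_halo = 3e-4 * m_p * 1e6   # hot halo density ~3e-4 particles/cm^3, in kg/m^3
v_infall = 300e3              # infall speed ~300 km/s, in m/s
Sigma_star = 1 * M_sun / pc**2   # stellar surface density ~1 Msun/pc^2
Sigma_gas = 1 * M_sun / pc**2    # gas surface density ~1 Msun/pc^2

ram_pressure = rho_halo * v_infall**2                     # P_ram = rho * v^2
restoring = 2 * math.pi * G * Sigma_star * Sigma_gas      # restoring force per unit area

print(f"ram pressure        ~ {ram_pressure:.1e} Pa")
print(f"restoring pressure  ~ {restoring:.1e} Pa")
print("gas stripped?", ram_pressure > restoring)          # True for these numbers
```

For a heftier dwarf with, say, ten times these surface densities, the restoring term wins and the gas stays put, which is the qualitative origin of the mass cutoff discussed below.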

How close to death can stripping push a little galaxy?  The authors took real galaxies that have been observed far, far away from any massive galaxy, out in what we call the “field,” and whose gas distributions have been measured.  They then calculated what would happen as these dwarfs fell into the Milky Way.  First, they estimated the amount of gas that would be stripped via ram pressure.  They then calculated the amount of gas that would be lost due to viscous stripping in a billion years, the time it appears to take to quench the Milky Way’s dwarf galaxies.

They found that ram pressure stripping was effective for galaxies less massive than about 10⁸ M⊙, but failed to remove the gas in more massive dwarfs—reproducing the cutoff seen in the Milky Way’s dwarfs.  Those below this cutoff lost as much as 40% of their gas on average.  When they took into account viscous stripping, they found that the gas lost in the least massive dwarfs could rise to about 70%, almost double the amount removed just by ram pressure.  The most massive dwarfs were not immune to viscous stripping, and lost up to 20% of their gas.

In reality, many of the dwarfs in the Milky Way are completely stripped of their gas.  Thus, while the work in today’s paper shows that stripping can remove much of a dwarf’s gas, it might not be the only gas-removal process at work.  Also, their calculations were fairly rudimentary: they assumed that the Milky Way’s mass distribution was perfectly spherical.  Its mass could be rather clumpy, which can increase the amount of gas that’s stripped.  It’s possible, too, that the dwarf galaxies are less dense than we think they are.  There’s evidence that dwarf galaxies have cores of constant density, in contrast to the centrally concentrated profiles the authors assumed, which would make it easier to strip a dwarf galaxy’s gas.  Only future calculations will tell whether stripping can solve the dwarf galaxy quenching mystery!

 

 NGC_4402_Hubble_heic0911c

 

Featured image:  NGC 4402, a spiral galaxy in the Virgo cluster with evidence of ram pressure stripping.

by Stacy Kim at July 18, 2016 05:18 PM

Symmetrybreaking - Fermilab/SLAC

Pokémon Go shakes up the lab routine

At Fermilab and CERN, students, lab employees and visitors alike are on the hunt for virtual creatures.

At Fermi National Accelerator Laboratory near Chicago, the normal motions of people going about their days have shifted.

People who parked their cars in the same spot for years have moved. People are rerouting their paths through the buildings of the laboratory campus and striking off to explore new locations. They can be seen on lunch breaks hovering around lab landmarks, alone or in small clumps, flicking their fingers across their smartphones.

The augmented reality phenomenon of Pokémon Go has made its way into the world of high-energy particle physics. Based on the Nintendo franchise that launched in the ’90s, Pokémon Go sends players exploring their surrounding areas in the real world, trying to catch as many of the virtual creatures as possible.

Not only is the game affecting the movements of lab regulars, it’s also brought new people to the site, says Beau Harrison, an accelerator operator and a member of the game’s blue team. “People were coming on their bicycles to get their Pokémon here.”

At Fermilab, the three teams of the Pokémon universe—red, yellow and blue—compete for command of Fermilab’s several virtual gyms, places where people battle their Pokémon to boost their strength or simply to display team dominance.

“It’s kind of fun playing with everyone here,” says Bobby Santucci, another operator at the lab, who is on team red. “It’s not so much about the game. It’s more like messing with each other.”

In the few days the game has been out, the gyms at Fermilab have repeatedly tossed out one team for another: blue, then red, then blue, then red, then briefly yellow, then blue and then red again.

The game was not released in many European countries until the past weekend. But Elizabeth Kennedy, a graduate student from UC Riverside who is working at CERN, says that even before that you could identify Pokémon Go players among the people at the laboratory on the border of Switzerland and France, based on the routes they walked.

“The Americans are all playing,” she says. “It’s easy to tell who else is playing when you see other people congregating around places.”

The majority of the players at Fermilab seem to be college students and younger employees, but players of all ages can be spotted roaming the labs.

Bonnie King, a system administrator at the lab and a member of team blue, says that on one of her Pokémon-steered nature walks at Fermilab, she encountered a group of preteens. She had never been on that particular trail, and she wondered whether this was a first for the visitors, too. They noticed her playing and asked her if she was taking the gym there.

“Yeah, I am,” she replied, rising to the challenge.

King dropped off her top contender, a drooling, fungal-looking blue monster called Gloom, to help team blue keep its position of power. But eventually the red team toppled blue to reclaim the gym.

The battle for Fermilab rages on.

by Molly Olmstead at July 18, 2016 04:03 PM

Sean Carroll - Preposterous Universe

Space Emerging from Quantum Mechanics

The other day I was amused to find a quote from Einstein, in 1936, about how hard it would be to quantize gravity: “like an attempt to breathe in empty space.” Eight decades later, I think we can still agree that it’s hard.

So here is a possibility worth considering: rather than quantizing gravity, maybe we should try to gravitize quantum mechanics. Or, more accurately but less evocatively, “find gravity inside quantum mechanics.” Rather than starting with some essentially classical view of gravity and “quantizing” it, we might imagine starting with a quantum view of reality from the start, and find the ordinary three-dimensional space in which we live somehow emerging from quantum information. That’s the project that ChunJun (Charles) Cao, Spyridon (Spiros) Michalakis, and I take a few tentative steps toward in a new paper.

We human beings, even those who have been studying quantum mechanics for a long time, still think in terms of classical concepts. Positions, momenta, particles, fields, space itself. Quantum mechanics tells a different story. The quantum state of the universe is not a collection of things distributed through space, but something called a wave function. The wave function gives us a way of calculating the outcomes of measurements: whenever we measure an observable quantity like the position or momentum or spin of a particle, the wave function has a value for every possible outcome, and the probability of obtaining that outcome is given by the wave function squared. Indeed, that’s typically how we construct wave functions in practice. Start with some classical-sounding notion like “the position of a particle” or “the amplitude of a field,” and to each possible value we attach a complex number. That complex number, squared, gives us the probability of observing the system with that observed value.
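For concreteness, here is a tiny toy version of that recipe (my own example, not from the post): attach a complex amplitude to each possible outcome, normalize, and read off probabilities as squared magnitudes.

```python
# Toy version of the recipe described above: one complex amplitude per possible
# outcome; probabilities are the squared magnitudes of the normalized amplitudes.
import numpy as np

amplitudes = np.array([1 + 1j, 2 - 1j, 0.5j])            # made-up amplitudes for three outcomes
amplitudes = amplitudes / np.linalg.norm(amplitudes)     # normalize the state
probabilities = np.abs(amplitudes) ** 2
print(probabilities, probabilities.sum())                # three probabilities summing to 1
```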

Mathematically, wave functions are elements of a mathematical structure called Hilbert space. That means they are vectors — we can add quantum states together (the origin of superpositions in quantum mechanics) and calculate the angle (“dot product”) between them. (We’re skipping over some technicalities here, especially regarding complex numbers — see e.g. The Theoretical Minimum for more.) The word “space” in “Hilbert space” doesn’t mean the good old three-dimensional space we walk through every day, or even the four-dimensional spacetime of relativity. It’s just math-speak for “a collection of things,” in this case “possible quantum states of the universe.”

Hilbert space is quite an abstract thing, which can seem at times pretty removed from the tangible phenomena of our everyday lives. This leads some people to wonder whether we need to supplement ordinary quantum mechanics by additional new variables, or alternatively to imagine that wave functions reflect our knowledge of the world, rather than being representations of reality. For purposes of this post I’ll take the straightforward view that quantum mechanics says that the real world is best described by a wave function, an element of Hilbert space, evolving through time. (Of course time could be emergent too … something for another day.)

Here’s the thing: we can construct a Hilbert space by starting with a classical idea like “all possible positions of a particle” and attaching a complex number to each value, obtaining a wave function. All the conceivable wave functions of that form constitute the Hilbert space we’re interested in. But we don’t have to do it that way. As Einstein might have said, God doesn’t do it that way. Once we make wave functions by quantizing some classical system, we have states that live in Hilbert space. At this point it essentially doesn’t matter where we came from; now we’re in Hilbert space and we’ve left our classical starting point behind. Indeed, it’s well-known that very different classical theories lead to the same theory when we quantize them, and likewise some quantum theories don’t have classical predecessors at all.

The real world simply is quantum-mechanical from the start; it’s not a quantization of some classical system. The universe is described by an element of Hilbert space. All of our usual classical notions should be derived from that, not the other way around. Even space itself. We think of the space through which we move as one of the most basic and irreducible constituents of the real world, but it might be better thought of as an approximate notion that emerges at large distances and low energies.

So here is the task we set for ourselves: start with a quantum state in Hilbert space. Not a random or generic state, admittedly; a particular kind of state. Divide Hilbert space up into pieces — technically, factors that we multiply together to make the whole space. Use quantum information — in particular, the amount of entanglement between different parts of the state, as measured by the mutual information — to define a “distance” between them. Parts that are highly entangled are considered to be nearby, while unentangled parts are far away. This gives us a graph, in which vertices are the different parts of Hilbert space, and the edges are weighted by the emergent distance between them.

rc-graph
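Here is a minimal sketch of what that pipeline could look like in practice, assuming one already has a matrix of mutual informations between the factors. The map from mutual information to distance and the toy exponential "area-law-like" ansatz are my own illustrative choices; the dimension-extraction step is the classical multidimensional scaling mentioned in the abstract quoted below.

```python
# Illustrative sketch (not the authors' code): turn a mutual-information matrix
# between factors of Hilbert space into distances, then use classical
# multidimensional scaling (MDS) to ask what spatial dimension fits best.
import numpy as np

def distances_from_mutual_information(I, scale=1.0):
    """Map large mutual information to short distance; the particular monotone
    function used here is a modeling assumption."""
    eps = 1e-12
    D = -scale * np.log((I + eps) / (I.max() + eps))
    np.fill_diagonal(D, 0.0)
    return D

def classical_mds_eigenvalues(D):
    """Classical MDS: double-center the squared distances and diagonalize.
    The number of significantly positive eigenvalues estimates the dimension."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # Gram matrix of embedded points
    return np.sort(np.linalg.eigvalsh(B))[::-1]

# Toy state: factors laid out on a 2D grid, with mutual information decaying
# with grid separation (an assumed, area-law-like ansatz).
coords = np.array([(x, y) for x in range(6) for y in range(6)], dtype=float)
sep = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
I = np.exp(-sep)                               # assumed decay of entanglement
eigs = classical_mds_eigenvalues(distances_from_mutual_information(I))
print(np.round(eigs[:5], 3))                   # two dominant eigenvalues -> emergent 2D
```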

We can then ask two questions:

  1. When we zoom out, does the graph take on the geometry of a smooth, flat space with a fixed number of dimensions? (Answer: yes, when we put in the right kind of state to start with.)
  2. If we perturb the state a little bit, how does the emergent geometry change? (Answer: space curves in response to emergent mass/energy, in a way reminiscent of Einstein’s equation in general relativity.)

It’s that last bit that is most exciting, but also most speculative. The claim, in its most dramatic-sounding form, is that gravity (spacetime curvature caused by energy/momentum) isn’t hard to obtain in quantum mechanics — it’s automatic! Or at least, the most natural thing to expect. If geometry is defined by entanglement and quantum information, then perturbing the state (e.g. by adding energy) naturally changes that geometry. And if the model matches onto an emergent field theory at large distances, the most natural relationship between energy and curvature is given by Einstein’s equation. The optimistic view is that gravity just pops out effortlessly in the classical limit of an appropriate quantum system. But the devil is in the details, and there’s a long way to go before we can declare victory.

Here’s the abstract for our paper:

Space from Hilbert Space: Recovering Geometry from Bulk Entanglement
ChunJun Cao, Sean M. Carroll, Spyridon Michalakis

We examine how to construct a spatial manifold and its geometry from the entanglement structure of an abstract quantum state in Hilbert space. Given a decomposition of Hilbert space H into a tensor product of factors, we consider a class of “redundancy-constrained states” in H that generalize the area-law behavior for entanglement entropy usually found in condensed-matter systems with gapped local Hamiltonians. Using mutual information to define a distance measure on the graph, we employ classical multidimensional scaling to extract the best-fit spatial dimensionality of the emergent geometry. We then show that entanglement perturbations on such emergent geometries naturally give rise to local modifications of spatial curvature which obey a (spatial) analog of Einstein’s equation. The Hilbert space corresponding to a region of flat space is finite-dimensional and scales as the volume, though the entropy (and the maximum change thereof) scales like the area of the boundary. A version of the ER=EPR conjecture is recovered, in that perturbations that entangle distant parts of the emergent geometry generate a configuration that may be considered as a highly quantum wormhole.

Like almost any physics paper, we’re building on ideas that have come before. The idea that spacetime geometry is related to entanglement has become increasingly popular, although it’s mostly been explored in the holographic context of the AdS/CFT correspondence; here we’re working directly in the “bulk” region of space, not appealing to a faraway boundary. A related notion is the ER=EPR conjecture of Maldacena and Susskind, relating entanglement to wormholes. In some sense, we’re making this proposal a bit more specific, by giving a formula for distance as a function of entanglement. The relationship of geometry to energy comes from something called the Entanglement First Law, articulated by Faulkner et al., and used by Ted Jacobson in a version of entropic gravity. But as far as we know we’re the first to start directly from Hilbert space, rather than assuming classical variables, a boundary, or a background spacetime. (There’s an enormous amount of work that has been done in closely related areas, obviously, so I’d love to hear about anything in particular that we should know about.)

We’re quick to admit that what we’ve done here is extremely preliminary and conjectural. We don’t have a full theory of anything, and even what we do have involves a great deal of speculating and not yet enough rigorous calculating.

Most importantly, we’ve assumed that parts of Hilbert space that are highly entangled are also “nearby,” but we haven’t actually derived that fact. It’s certainly what should happen, according to our current understanding of quantum field theory. It might seem like entangled particles can be as far apart as you like, but the contribution of particles to the overall entanglement is almost completely negligible — it’s the quantum vacuum itself that carries almost all of the entanglement, and that’s how we derive our geometry.

But it remains to be seen whether this notion really matches what we think of as “distance.” To do that, it’s not sufficient to talk about space, we also need to talk about time, and how states evolve. That’s an obvious next step, but one we’ve just begun to think about. It raises a variety of intimidating questions. What is the appropriate Hamiltonian that actually generates time evolution? Is time fundamental and continuous, or emergent and discrete? Can we derive an emergent theory that includes not only curved space and time, but other quantum fields? Will those fields satisfy the relativistic condition of being invariant under Lorentz transformations? Will gravity, in particular, have propagating degrees of freedom corresponding to spin-2 gravitons? (And only one kind of graviton, coupled universally to energy-momentum?) Full employment for the immediate future.

Perhaps the most interesting and provocative feature of what we’ve done is that we start from an assumption that the degrees of freedom corresponding to any particular region of space are described by a finite-dimensional Hilbert space. In some sense this is natural, as it follows from the Bekenstein bound (on the total entropy that can fit in a region) or the holographic principle (which limits degrees of freedom by the area of the boundary of their region). But on the other hand, it’s completely contrary to what we’re used to thinking about from quantum field theory, which generally assumes that the number of degrees of freedom in any region of space is infinitely big, corresponding to an infinite-dimensional Hilbert space. (By itself that’s not so worrisome; a single simple harmonic oscillator is described by an infinite-dimensional Hilbert space, just because its energy can be arbitrarily large.) People like Jacobson and Seth Lloyd have argued, on pretty general grounds, that any theory with gravity will locally be described by finite-dimensional Hilbert spaces.

That’s a big deal, if true, and I don’t think we physicists have really absorbed the consequences of the idea as yet. Field theory is embedded in how we think about the world; all of the notorious infinities of particle physics that we work so hard to renormalize away owe their existence to the fact that there are an infinite number of degrees of freedom. A finite-dimensional Hilbert space describes a very different world indeed. In many ways, it’s a much simpler world — one that should be easier to understand. We shall see.

Part of me thinks that a picture along these lines — geometry emerging from quantum information, obeying a version of Einstein’s equation in the classical limit — pretty much has to be true, if you believe (1) regions of space have a finite number of degrees of freedom, and (2) the world is described by a wave function in Hilbert space. Those are fairly reasonable postulates, all by themselves, but of course there could be any number of twists and turns to get where we want to go, if indeed it’s possible. Personally I think the prospects are exciting, and I’m eager to see where these ideas lead us.

by Sean Carroll at July 18, 2016 03:42 PM

John Baez - Azimuth

Frigatebirds

 

Frigatebirds are amazing!

They have the largest ratio of wing area to body weight of any bird. This lets them fly very long distances while only rarely flapping their wings. They often stay in the air for weeks at a time. And one bird being tracked by satellite in the Indian Ocean stayed aloft for two months.

Surprisingly for sea birds, they don’t go into the water. Their feathers aren’t waterproof. They are true creatures of the air. They snatch fish from the ocean surface using their long, hooked bills—and they often eat flying fish! They clean themselves in flight by flying low and wetting themselves at the water’s surface before preening.

They live a long time: often over 35 years.

But here’s the cool new discovery:

Since the frigatebird spends most of its life at sea, its habits outside of when it breeds on land aren’t well-known—until researchers started tracking them around the Indian Ocean. What the researchers discovered is that the birds’ flying ability almost defies belief.

Ornithologist Henri Weimerskirch put satellite tags on a couple of dozen frigatebirds, as well as instruments that measured body functions such as heart rate. When the data started to come in, he could hardly believe how high the birds flew.

“First, we found, ‘Whoa, 1,500 meters. Wow. Excellent, fantastique,’ ” says Weimerskirch, who is with the National Center for Scientific Research in Paris. “And after 2,000, after 3,000, after 4,000 meters — OK, at this altitude they are in freezing conditions, especially surprising for a tropical bird.”

Four thousand meters is more than 12,000 feet, or as high as parts of the Rocky Mountains. “There is no other bird flying so high relative to the sea surface,” he says.

Weimerskirch says that kind of flying should take a huge amount of energy. But the instruments monitoring the birds’ heartbeats showed that the birds weren’t even working up a sweat. (They wouldn’t, actually, since birds don’t sweat, but their heart rate wasn’t going up.)

How did they do it? By flying into a cloud.

“It’s the only bird that is known to intentionally enter into a cloud,” Weimerskirch says. And not just any cloud—a fluffy, white cumulus cloud. Over the ocean, these clouds tend to form in places where warm air rises from the sea surface. The birds hitch a ride on the updraft, all the way up to the top of the cloud.

[…]

“Absolutely incredible,” says Curtis Deutsch, an oceanographer at the University of Washington. “They’re doing it right through these cumulus clouds. You know, if you’ve ever been on an airplane, flying through turbulence, you know it can be a little bit nerve-wracking.”

One of the tagged birds soared 40 miles without a wing-flap. Several covered more than 300 miles a day on average, and flew continuously for weeks.

• Christopher Joyce, Nonstop flight: how the frigatebird can soar for weeks without stopping, All Things Considered, National Public Radio, 30 June 2016.

Frigatebirds aren’t admirable in every way. They’re kleptoparasites—now there’s a word you don’t hear every day! That’s a name for animals that steal food:

Frigatebirds will rob other seabirds such as boobies, particularly the red-footed booby, tropicbirds, shearwaters, petrels, terns, gulls and even ospreys of their catch, using their speed and maneuverability to outrun and harass their victims until they regurgitate their stomach contents. They may either assail their targets after they have caught their food or circle high over seabird colonies waiting for parent birds to return laden with food.

Frigatebird, Wikipedia.


by John Baez at July 18, 2016 01:16 PM

Emily Lakdawalla - The Planetary Society Blog

Horizon Goal: A new reporting series on NASA’s Journey to Mars
We're embarking on a multi-part series with the Huffington Post about the world's largest human spaceflight program. In part 1, we look at how the Columbia accident prompted NASA and the George W. Bush administration to create a new vision for space exploration.

July 18, 2016 12:03 PM

July 15, 2016

Emily Lakdawalla - The Planetary Society Blog

Listen Up! Microphones to Fly to Mars
The Mars 2020 mission will carry microphones in its EDL package and its SuperCam instrument, which will enable us to finally hear the sounds of Mars. The Planetary Society has been trying to get microphones to Mars for 20 years and is ecstatic that these will fly.

July 15, 2016 11:17 PM

Emily Lakdawalla - The Planetary Society Blog

Mars 2020 rover rolls into final design and fabrication phase
NASA's next Mars rover is rolling off the drawing board and into its final design and fabrication phase, the agency announced today, during a televised event at the Jet Propulsion Laboratory that highlighted some of the mission's technology.

July 15, 2016 11:16 PM

Symmetrybreaking - Fermilab/SLAC

The science of proton packs

Ghostbusters advisor James Maxwell explains the science of bustin'.

There's a new proton pack in town.

During the development of the new Ghostbusters film, released today, science advisor James Maxwell took on the question: "How would a proton pack work, with as few huge leaps of miraculous science as possible?"

As he explains in this video, he helped redesign the movie's famous ghost-catching tool to bring it more in line with modern particle accelerators such as the Large Hadron Collider.

"Particle accelerators are real. Superconducting magnets are real," he says. "The big leaps of faith are actually doing it in the space that's allowed."

Video of VayXii8HtyE

Video by Sony Pictures Entertainment

by Kathryn Jepsen at July 15, 2016 10:18 PM

Symmetrybreaking - Fermilab/SLAC

Who you gonna call? MIT physicists!

As science advisors, physicists Lindley Winslow and Janet Conrad gave the Ghostbusters crew a taste of life in the lab.

Tonight, two MIT scientists are going to the movies. It’s not just because they want to see Kristen Wiig, who plays a particle physicist in the new Ghostbusters film, talk about grand unified theories on the big screen. Lindley Winslow and Janet Conrad served as science advisors on the film, and they can’t wait to see all the nuggets of realism they managed to fit into the set.

The Ghostbusters production team contacted Winslow on the advice of The Big Bang Theory science advisor David Saltzberg, who worked with Winslow at UCLA.

Winslow says she was delighted to help out. As a child, she watched the original 1984 Ghostbusters on repeat with her sister. As an adult, Winslow recognizes that she became a scientist thanks in part to the capable female characters she saw in shows like Star Trek.

She says she’s excited for a reboot that features women getting their hands dirty doing science. “They’re using oscilloscopes and welding things. It’s great!”

The Ghostbusters crew was filming in Boston, and “they wanted to see what a particle physics lab would be like,” Winslow says. She quickly thought through the coolest stuff she had sitting around: “There was a directional neutron detector Janet had. And at the last minute, I remembered that, in the corner of my lab, I had a separate room with a prototype of a polarized Helium-3 source for a potential future electron-ion collider.”

MIT postdoc James Maxwell, now a staff scientist at Jefferson Lab, wound up constructing a replica of the Helium-3 source for the set.

But the production team was interested in more than just the shiny stuff. They wanted to understand the look and feel of a real laboratory. They knew it would be different from the sanitized versions that often appear onscreen.

Winslow obliged. “I take them to this lab, and it’s pretty… it looks like you’ve been in there for 40 years,” she says. “There’s a coat rack with a whole pile of cables hanging on it. They were taking a ton of pictures.”

The team really wanted to get the details right, down to the books on the characters’ shelves, the publications and grant proposals on their desks and the awards on their walls, Winslow says. That’s where Conrad’s contributions came in. Offering to pitch in as Winslow prepared to go out on maternity leave, Conrad rented out her entire office library to the film, and she wrote papers for two characters, Wiig’s particle physicist and a villainous male scientist.

Conrad made Wiig’s character a neutrino physicist. She decided the bad guy would probably be into string theory. There’s just something sinister about the theory’s famous lack of verifiable predictions, Winslow says.

String theorists can also be lovely people, though, Conrad says, and “I wanted to make [the bad guy] as evil as possible.” In the scientific paper she wrote for his desk, “he doesn’t acknowledge anyone. He just says ‘The author is supported by the Royal Society of Fellows,’ and that’s it.”

Also, she wrote for him “an evil letter where he’s turning someone down for tenure.”

Winslow wrote the text for the awards that adorn the characters’ office walls, though both she and Conrad point out that physicists rarely hang their awards at work. “I give mine to my mom, and she hangs them up,” Conrad says.

In their offices, both Winslow and Conrad plan to hang their official Ghostbusters thank-you notes. “And a coat hook,” Winslow says. “I need a coat hook.”

Neither physicist got the chance to see the film before today, and they’re not sure how much of their handiwork will actually make it to the big screen. But Winslow was thrilled to see in a recently released preview one of her proudest contributions: a giant set of equations written on a whiteboard behind Wiig’s character.

The equations are real, representing the Georgi-Glashow model, otherwise known as SU(5), the first theory to try to combine the electroweak and strong forces. The model was ruled out by results from the Super Kamiokande experiment, but Winslow imagines Wiig’s character is using it to introduce her own attempt to unite the fundamental forces.

Winslow says she explained the basics of SU(5) to Ghostbusters director Paul Feig, who was then left to pass along the message to Wiig when Winslow needed to pick up her 3-year-old son from daycare.

As they head to the theaters tonight, Conrad and Winslow say they are excited to see bits of their lives reflected on the Ghostbusters set. They’re even more excited for girls in the audience to see themselves reflected in the tech-savvy, adventurous women in the film.

by Kathryn Jepsen at July 15, 2016 01:00 PM

July 14, 2016

CERN Bulletin

New shuttle stop in front of the Safety Training Centre
Since 4 July, a free shuttle has been running between the Meyrin and Prevessin sites every 45 minutes. You can consult the timetable of Circuit 2, stop “Safety Training Centre” (Bldg. 6959), here.

July 14, 2016 12:27 PM

July 13, 2016

Tommaso Dorigo - Scientificblogging

How Much Light Does A Proton Contain ?
Gavin Salam's talk at the "Altarelli Memorial" session of the ICNFP 2016 conference, which is presently taking place in Kolimbari (Crete), was very interesting, and I wish to report on it here.

read more

by Tommaso Dorigo at July 13, 2016 12:30 PM

John Baez - Azimuth

Operads for “Systems of Systems”

“Systems of systems” is a fashionable buzzword for complicated systems that are themselves made of complicated systems, often of disparate sorts. They’re important in modern engineering, and it takes some thought to keep them from being unmanageable. Biology and ecology are full of systems of systems.

David Spivak has been working a lot on operads as a tool for describing systems of systems. Here’s a nice programmatic talk advocating this approach:

• David Spivak, Operads as a potential foundation for systems of systems.

This was a talk he gave at the Generalized Network Structures and Dynamics Workshop at the Mathematical Biosciences Institute at Ohio State University this spring.

You won’t learn what operads are from this talk—for that, try this:

• Wikipedia, Operad.

But if you know a bit about operads, it may help give you an idea of their flexibility as a formalism for describing ways of sticking together components to form bigger systems!

I’ll probably talk about this kind of thing more pretty soon. So far I’ve been using category theory to study networked systems like electrical circuits, Markov processes and chemical reaction networks. The same ideas handle all these different kinds of systems in a unified way. But I want to push toward biology. Here we need more sophisticated ideas. My philosophy is that while biology seems “messy” to physicists, living systems actually operate at higher levels of abstraction, which call for new mathematics.


by John Baez at July 13, 2016 01:40 AM

July 12, 2016

CERN Bulletin

CERN Bulletin Issue No. 28-29/2016
Link to e-Bulletin Issue No. 28-29/2016. Link to all articles in this issue.

July 12, 2016 02:19 PM

Symmetrybreaking - Fermilab/SLAC

A primer on particle accelerators

What’s the difference between a synchrotron and a cyclotron, anyway?

Research in high-energy physics takes many forms. But most experiments in the field rely on accelerators that create and speed up particles on demand.

What follows is a primer on three different types of particle accelerators: synchrotrons, cyclotrons and linear accelerators, called linacs.

Illustration by Sandbox Studio, Chicago with Jill Preston

Synchrotrons: the heavy lifters

Synchrotrons are the highest-energy particle accelerators in the world. The Large Hadron Collider currently tops the list, with the ability to accelerate particles to an energy of 6.5 trillion electronvolts before colliding them with particles of an equal energy traveling in the opposite direction. 

Synchrotrons typically feature a closed pathway that takes particles around a ring. Other variants are created with straight sections between the curves (similar to a racetrack or in the shape of a triangle or hexagon). Once particles enter the accelerator, they travel around the circular pathway over and over again, always enclosed in a vacuum pipe. 

Radiofrequency cavities at intervals around the ring increase the particles’ speed. Several different types of magnets create electromagnetic fields, which can be used to bend and focus the particle beams. These fields slowly build up as the particles are accelerated. Particles pass around the LHC about 14 million times in the 20 minutes they need to reach their intended energy level.
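The 14-million figure is easy to sanity-check with a back-of-the-envelope sketch, assuming the LHC's roughly 26.7-kilometer circumference and protons moving at essentially the speed of light (both inputs are my assumptions, not numbers from the article):

```python
# Back-of-the-envelope check of "about 14 million times in 20 minutes",
# assuming a ~26.7 km circumference and v ~ c (assumptions, not article figures).
c = 299_792_458.0           # speed of light, m/s
circumference = 26_659.0    # approximate LHC circumference, m
revs_per_second = c / circumference             # ~11,245 turns per second
turns_in_20_minutes = revs_per_second * 20 * 60
print(f"{turns_in_20_minutes:,.0f} turns")      # ~13.5 million, consistent with the text
```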

Researchers send beams of accelerated particles through one another to create collisions in locations surrounded by particle detectors. Relatively few collisions happen each time the beams meet. But because the particles are constantly circulating in a synchrotron, researchers can pass them through one another many times over—creating a large number of collisions over time and more data for observing rare phenomena.

“The LHC detectors ATLAS and CMS reached about 400 million collisions a second last year,” says Mike Lamont, head of LHC operations at CERN. “This is why this design is so useful.”

Synchrotrons’ power makes them especially suited to studying the building blocks of our universe. For example, physicists were able to witness evidence of the Higgs boson among the LHC’s collisions only because the collider could accelerate particles to such a high energy and produce such high collision rates. 

The LHC primarily collides protons with protons but can also accelerate heavy nuclei such as lead. Other synchrotrons can also be customized to accelerate different types of particles. At Brookhaven National Laboratory in New York, the Relativistic Heavy Ion Collider can accelerate everything from protons to uranium nuclei. It keeps the proton beams polarized with the use of specially designed magnets, according to RHIC accelerator physicist Angelika Drees. It can also collide heavy ions such as uranium and gold to create quark-gluon plasma—the high-temperature soup that made up the universe just after the Big Bang.

Illustration by Sandbox Studio, Chicago with Jill Preston

Cyclotrons: the workhorses

Synchrotrons are the descendants of another type of circular accelerator called the cyclotron. Cyclotrons accelerate particles in a spiral pattern, starting at their center.

Like synchrotrons, cyclotrons use a large electromagnet to bend the particles in a circle. However, they use only one magnet, which limits how large they can be. Metal electrodes give the particles a push each time they cross the gap between them, so the particles travel in increasingly large circles, creating a spiral pathway.
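A small worked example of the timing behind that push: in the simplest, non-relativistic picture, the electrodes must reverse polarity at the cyclotron frequency f = qB/(2πm), which conveniently does not depend on the particle's speed or the radius of its orbit. The magnetic field below is an assumed value for illustration.

```python
# Non-relativistic cyclotron frequency f = qB / (2*pi*m): the rate at which the
# electrodes must flip polarity to keep kicking the particle as it spirals out.
# The 1 tesla field is an assumption chosen for illustration.
import math

q = 1.602176634e-19    # proton charge, C
m = 1.67262192e-27     # proton mass, kg
B = 1.0                # assumed magnetic field, tesla

f = q * B / (2 * math.pi * m)
print(f"cyclotron frequency ~ {f / 1e6:.1f} MHz")   # ~15.2 MHz for a proton in 1 T
```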

Cyclotrons are often used to create large amounts of specific types of particles, such as muons or neutrons. They are also popular for medical research because they have the right energy range and intensity to produce medical isotopes. 

The world’s largest cyclotron is located at the TRIUMF laboratory in Vancouver, Canada. At the TRIUMF cyclotron, physicists regularly accelerate particles to 520 million electronvolts. They can draw particles from different parts of their accelerator for experiments that require particles at different energies. This makes it an especially adaptable type of accelerator, says physicist Ewart Blackmore, who helped to design and build the TRIUMF accelerator.

“We certainly make use of that facility every day when we’re running, when we’re typically producing a low-energy but high-current beam for medical isotope production,” Blackmore says. “We’re extracting at fixed energies down one beam for producing pions and muons for research, and on another beam line we’re extracting beams of radioactive nuclei to study their properties.”

Illustration by Sandbox Studio, Chicago with Jill Preston

Linacs: straight and to the point

For physics experiments or applications that require a steady, intense beam of particles, linear accelerators are a favored design. SLAC National Accelerator Laboratory hosts the longest linac in the world, which measures 2 miles long and at one point could accelerate particles up to 50 billion electronvolts. Fermi National Accelerator Laboratory uses a shorter linac to speed up protons before sending them into a different accelerator, eventually running the particles into a fixed target to create the world’s most intense neutrino beam.

While circular accelerators may require many turns to accelerate particles to the desired energy, linacs get particles up to speed in short order. Particles start at one end at a low energy, and electromagnetic fields in the linac accelerate them down its length. When particles travel along a curved path, they release energy in the form of radiation; traveling in a straight line means they keep that energy for themselves. A series of radiofrequency cavities in SLAC’s linac pushes particles along on the crest of electromagnetic waves, accelerating them down the length of the machine.
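To see why that radiation penalty matters so much more for light particles, here is a rough textbook estimate (not from the article) of the energy radiated per turn on a circular orbit; it grows as the fourth power of the energy-to-mass ratio, and the beam energy and bending radius below are illustrative assumptions.

```python
# Energy radiated per turn by a relativistic particle on a circular orbit:
# U0 = e^2 * gamma^4 / (3 * eps0 * rho), with gamma = E / (m c^2) and beta ~ 1.
# Beam energy (50 GeV) and bending radius (3 km) are assumptions for illustration.
import math

e = 1.602176634e-19       # elementary charge, C
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m

def loss_per_turn_ev(E_gev, mass_gev, rho_m):
    gamma = E_gev / mass_gev
    U0_joules = e**2 * gamma**4 / (3 * eps0 * rho_m)
    return U0_joules / e     # convert joules to eV

print(f"electrons: {loss_per_turn_ev(50.0, 0.000511, 3000.0) / 1e9:.2f} GeV lost per turn")
print(f"protons:   {loss_per_turn_ev(50.0, 0.938272, 3000.0):.1e} eV lost per turn")
```

The same 50 GeV beam loses a fraction of a GeV per turn if it is made of electrons but only about a hundred-thousandth of an eV if it is made of protons, which is why high-energy electron machines tend to be linacs while proton machines can comfortably be rings.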

Like cyclotrons, linacs can be used to produce medical isotopes. They can also be used to create beams of radiation for cancer treatment. Electron linacs for cancer therapy are the most common type of particle accelerator.

by Signe Brewster at July 12, 2016 01:00 PM

July 11, 2016

CERN Bulletin

COLLIDE Pro Helvetia Award
The COLLIDE Pro Helvetia Award is run in partnership with Pro Helvetia, giving Swiss artists the opportunity to do research at CERN for three months. Fragment.In are the winning artists of COLLIDE Pro Helvetia: they came to CERN for two months in 2015, and will now complete their final month in the laboratory. Fragment.In is a Swiss-based interaction design studio; they create innovative projects, interactive installations, video and game design. The photo shows their VR project, +2199 (from left to right: Laura Perrenoud, Marc Dubois and Simon de Diesbach). Read more about COLLIDE here.

July 11, 2016 03:10 PM

CERN Bulletin

Pablo Rodríguez Pérez (1976 - 2016)

It is with great sadness that we announce the sudden loss of Pablo Rodríguez Pérez, a physicist on the LHCb Experiment, who died in Manchester on 1st July 2016.

 


Pablo Rodríguez Pérez.

Pablo obtained his Bachelor of Science in Physics at the Universidade de Santiago de Compostela (USC) in 2003, specialising in electronics before going on to work in industry.

He joined the LHCb experiment in 2007, undertaking his Master of Science at USC on the optimisation of readout electronics for the Inner Tracker. He then continued his studies with a PhD, becoming the principal author for the experiment control system of the Silicon Tracker. After the commissioning of the LHCb experiment he moved his attention to the LHCb upgrade, performing the first investigation of prototypes for the upgrade of the vertex locator (VELO).

Following his PhD from USC, Cum Laude, he joined the University of Manchester in 2013 to work further on the VELO Upgrade. Pablo took the lead role in the group’s FPGA firmware development and was key to the VELO Upgrade module construction.

Pablo is survived by his wife, Sonia, their three young children, his parents and his brothers Iván and Carlos. He is predeceased by his brother Victor.  His warmth, kindness, dedication and competence will be deeply missed by his many friends in the LHCb Collaboration.

LHCb Collaboration

July 11, 2016 10:07 AM

CERN Bulletin

July 10, 2016

Tommaso Dorigo - Scientificblogging

Poster Session At ICNFP 2016
I am spending a week in Kolimbari, a nice seaside place in western Crete. Here the fifth International Conference on New Frontiers in Physics is being held in the Orthodox Academy of Crete. The conference gathers together high-energy experimentalists and theorists, nuclear physicists, neutrino physicists, and also other specialists. 
As I am not talking this year (I am here because I am co-organizing a mini-workshop on Higgs physics), I thought it was a good idea to ask the organizers if they needed help, and I got the task of organizing the poster selection committee. 26 posters have been presented, and will be on display tomorrow evening. We will have to select the best ones, whose authors will win a prize.

read more

by Tommaso Dorigo at July 10, 2016 08:43 PM

Lubos Motl - string vacua and pheno

Reality vs Connes' fantasies about physics on non-commutative spaces
Florin Moldoveanu, an eclectic semi-anti-quantum zealot, has never been trained in particle physics and doesn't understand it, but he found it reasonable to write uncritically about Alain Connes' proposals to construct a correct theory of particle physics using the concepts of noncommutative geometry.

Now, Connes is a very interesting guy and a great, creative, and playful mathematician, and he surely belongs among the most successful abstract mathematicians who have worked hard to learn particle physics. Except that the product just isn't enough because the airplanes don't land. His and his collaborators' proposals are intriguing, but they just don't work, and what the "new framework" is supposed to be isn't really well-defined at all.

The status quo in particle physics is that quantum field theories – often interpreted as effective field theories (theories useful for the description of all phenomena at distance scales longer than a cutoff) – and string theory are the only known ways to produce realistic theories. Moreover, to a large extent, string theory in most explicit descriptions we know also adopts the general principles of quantum field theory "without reservations".

The world sheet description of perturbative string theory is a standard two-dimensional conformal (quantum) field theory, Matrix theory and AdS/CFT describe vacua of string/M-theory but they're also quantum field theories in some spaces (world volumes or AdS boundaries), and string theory vacua have their effective field theory descriptions exactly of the type that one expects in the formalism of effective field theories (even though string theory itself isn't "quite" a regular quantum field theory in the bulk).




When we discuss quantum field theories, we decide about the dimension, qualitative field content, and symmetries. Once we do so, we're obliged to consider all (anomaly-free, consistent, unitary) quantum field theories with these conditions and all values of the parameters. This also gives us an idea about which choices of the parameters are natural or unnatural.




Now, Connes and collaborators claim to have something clearly different from the usual rules of quantum field theory (or string theory). The discovery of a new framework that would be "on par" with quantum field theory or string theory would surely be a huge one, just like the discovery of additional dimensions of the spacetime of any kind. Except that we have never been shown what the Connes' framework actually is, how to decide whether a paper describing a model of this kind belongs to Connes' framework or not. And we haven't been given any genuine evidence that the additional dimensions of Connes' type exist.

So all this work of Connes' is some hocus pocus experimentation with mixtures of the mathematics of noncommutative spaces (which he understands very well) and particle physics (which he understands much less well), and in between mathematical analyses that are probably hugely careful and advanced, he often writes things that are recognized as just silly by almost every physics graduate student. And a very large fraction of his beliefs about how noncommutative geometry may work within physics just seems wrong.

How is it supposed to work?

In Kaluza-Klein theory (or string theory), there is some compactification manifold which I will call \(CY_6\) because the Calabi-Yau three-fold is the most frequently mentioned, and sort of canonical, example. Fields may be expanded to modes – a generalization of Fourier series – which are functions of the coordinates on \(CY_6\). And there is a countably infinite number of these modes. Only a small number of them are very light but if you allow arbitrary masses, you have a whole tower of increasingly heavy Kaluza-Klein modes.

Connes et al. want to believe that there are just finitely many fields in 3+1 dimensions, like in the Standard Model. How can we get a finite number of Kaluza-Klein modes? We get them if the space is noncommutative. The effect is similar as if the space were a finite number of points except that a noncommutative space isn't a finite set of points.

A noncommutative space isn't a set of points at all. For this reason, there are no "open sets" and "neighborhoods" and the normal notions of topology and space dimension, either. A noncommutative space is a generalization of the "phase space in quantum mechanics". The phase space has coordinates \(x,p\) but they don't commute with each other – it's why it's called a "noncommutative space". Instead, we have\[

xp-px=i\hbar.

\] Consequently, the uncertainty principle restricts how accurately \(x,p\) may be determined at the same moment. The phase space is effectively composed of cells of area \(2\pi\hbar\) (or its power, if we have many copies of the coordinates and momenta). And these cells behave much like "discrete points" when it comes to the counting of the degrees of freedom – except that they're not discretely separated at all. The boundaries between them are unavoidably fuzzier than even those in regular commutative manifolds. If you consider a compactified (periodic \(x,p\) in some sense) versions of the phase space (e.g. fuzzy sphere and fuzzy torus), you may literally get a finite number of cells and therefore a finite number of fields in 3+1 dimensions.

That's basically what Connes and pals do.
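To make the "finite number of cells" point concrete, here is a standard fuzzy-sphere toy construction (my own illustration of the general idea, not Connes' spectral model): the coordinates become \(N\times N\) matrices built from \(su(2)\) generators, the sphere relation \(X^2+Y^2+Z^2=R^2\) still holds exactly, but the algebra of "functions" on the space is only \(N^2\)-dimensional, so a field expanded on it has finitely many modes.

```python
# Fuzzy sphere toy model: replace the coordinates of a sphere by N x N matrices
# built from su(2) generators. The sphere relation survives exactly, but there
# are only N^2 independent "functions" on this space, i.e. finitely many modes.
import numpy as np

def su2_generators(N):
    """Spin-j generators with j = (N - 1)/2, as N x N matrices (hbar = 1)."""
    j = (N - 1) / 2
    m = np.arange(j, -j - 1, -1)                 # magnetic quantum numbers j, j-1, ..., -j
    jplus = np.zeros((N, N), dtype=complex)      # raising operator J+
    for k in range(N - 1):
        jplus[k, k + 1] = np.sqrt(j * (j + 1) - m[k + 1] * (m[k + 1] + 1))
    jx = (jplus + jplus.conj().T) / 2
    jy = (jplus - jplus.conj().T) / 2j
    jz = np.diag(m).astype(complex)
    return jx, jy, jz

N, R = 4, 1.0
Jx, Jy, Jz = su2_generators(N)
scale = 2 * R / np.sqrt(N**2 - 1)
X, Y, Z = scale * Jx, scale * Jy, scale * Jz     # noncommuting "coordinates"

# The sphere relation X^2 + Y^2 + Z^2 = R^2 holds exactly...
print(np.allclose(X @ X + Y @ Y + Z @ Z, R**2 * np.eye(N)))     # True
# ...but the coordinates no longer commute, and the "functions" on this space
# are just N x N matrices: an N^2-dimensional algebra instead of infinitely many modes.
print(np.linalg.norm(X @ Y - Y @ X) > 0)                        # True
```

On an ordinary sphere one would have an infinite tower of spherical harmonics; here the tower is truncated at angular momentum N-1, which is the sense in which a noncommutative compactification can produce a finite field content in 3+1 dimensions.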

Now, they have made some truly extraordinary claims that have excited me as well. I can't imagine how I could fail to be excited at least once; but I also can't imagine preserving my excitement once I see that there's no defensible added value in those ideas. In 2006, for example, Chamseddine, Connes, and Marcolli released their standard model with neutrino mixing that boldly predicted the mass of the Higgs boson as well. The prediction was \(170\GeV\), which is not right, as you know: the Higgs boson of mass \(125\GeV\) was officially discovered in July 2012.

But the fate of this prediction \(m_h=170\GeV\) was sort of funny. Two years later, in 2008, the Tevatron became able to say something about the Higgs mass for the first time. It ruled out the first narrow interval of Higgs masses. Amusingly enough, the first value of the Higgs mass that was killed was exactly Connes' \(170\GeV\). Oops. ;-)

There's a consensus in the literature of Connes' community that \(170\GeV\) is the prediction that the framework should give for the Higgs mass. But in August 2012, one month after the \(125\GeV\) Higgs boson was discovered, Chamseddine and Connes wrote a preprint about the resilience of their spectral standard model. A "faux pas" would probably be more accurate but "resilience" sounded better.

In that paper, they added some hocus pocus arguments claiming that because of some additional singlet scalar field \(\sigma\) that was previously neglected, the Higgs prediction is reduced from \(170\GeV\) to \(125\GeV\). Too bad they couldn't make this prediction before December 2011 when the value of \(125\GeV\) emerged as the almost surely correct one to the insiders among us.

I can't make sense of the technical details – and I am pretty sure that it's not just due to the lack of effort, listening, or intelligence. There are things that just don't make sense. Connes and his co-author claim that the new scalar field \(\sigma\) which they consider a part of their "standard model" is also responsible for the Majorana neutrino masses.

Now, this just sounds extremely implausible because the origin of the small neutrino masses is very likely to be in the phenomena that occur at some very high energy scale near the GUT scale – possibly grand unified physics itself. The seesaw mechanism produces good estimates for the neutrino masses\[

m_\nu \approx \frac{m_{h}^2}{m_{GUT}}.

\] So how could one count the scalar field responsible for these tiny masses to the "Standard Model" which is an effective theory for the energy scales close to the electroweak scale or the Higgs mass \(m_h\sim 125\GeV\)? If the Higgs mass and neutrino masses are calculable in Connes' theory, the theory wouldn't really be a standard model but a theory of everything and it should work near the GUT scale, too.
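Just to see how disparate these scales are, plug representative numbers into the estimate above (my own illustrative arithmetic, taking \(m_h\approx 125\GeV\) and a GUT-like scale of \(10^{16}\GeV\)):\[

m_\nu \approx \frac{(125\,\GeV)^2}{10^{16}\,\GeV} \approx 1.6\times 10^{-12}\,\GeV \approx 1.6\times 10^{-3}\,{\rm eV},

\] which is within an order of magnitude or two of the observed neutrino mass splittings, but only because a scale some fourteen orders of magnitude above the electroweak scale sits in the denominator.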

The claim that one may relate these parameters that seemingly boil down to very different physical phenomena – at very different energy scales – is an extraordinary statement that requires extraordinary evidence. If the statement were true or justifiable, it would be amazing by itself. But this is the problem with non-experts like Connes. He doesn't give any evidence because he doesn't even realize that his statement sounds extraordinary – it sounds (and probably is) incompatible with rather basic things that particle physicists know (or believe to know).

Connes' "fix" that reduced the prediction to \(125\GeV\) was largely ignored by the later pro-Connes literature that kept on insisting that \(170\GeV\) is indeed what the theory predicts.

So I don't believe one can ever get correct predictions out of a similar framework, except for cases of good luck. But my skepticism about the proposal is much stronger than that. I don't really believe that there exists any new "framework" at all.

What are Connes et al. actually doing when they are constructing new theories? They are rewriting some/all terms in a Lagrangian using some new algebraic symbols, like a "star-product" on a specific noncommutative geometry. But is it a legitimate way to classify quantum field theories? You know, a star-product is just a bookkeeping device. It's a method to write down classical theories of a particular type.

But the quantum theory at any nonzero couplings isn't really "fully given by the classical Lagrangian". It should have some independent definition. If you allow the quantum corrections, renormalization, subtleties with the renormalization schemes etc., I claim that you just can't say whether a particular theory is or is not a theory of the Connes' type. The statement "it is a theory of Connes' type" is only well-defined for classical field theories and probably not even for them.

A generic interacting fully quantum field theory just isn't equivalent to any star-product based classical Lagrangians!

There are many detailed questions that Connes can't quite answer that show that he doesn't really know what he's doing. One of these questions is really elementary: Is gravity supposed to be a part of his picture? Does his noncommutative compactification manifold explain the usual gravitational degrees of freedom, or just some polarizations of the graviton in the compact dimensions, or none? You can find contradictory answers to this question in the Connes' paper.

Let me say what the answer is to the question of whether gravity is a part of the consistent decoupled field theories on noncommutative spaces – i.e. those in string theory. The answer is simply No. String theory allows you to pick a \(B\)-field and decouple the low-energy open-string dynamics (which is a gauge theory). The gauge theory is decoupled even if the space coordinates are noncommutative.

But it's always just a gauge theory. There are never spin-two fields that would meaningfully enter the Lagrangian with the noncommutative star-product. Why? Because the noncommutativity comes from the \(B\)-field which may be set to zero by a gauge invariance for the \(B\)-field, \(\delta B_{(2)} = d \lambda_{(1)}\). So the value of this field is unphysical. This conclusion only changes inside a D-brane where \(B+F\) is the gauge-invariant combination. The noncommutativity-inducing \(B\)-field may really be interpreted as a magnetic \(F\) field inside the D-brane which is gauge-invariant. Its value matters. But in the decoupling limit, it only matters for the D-brane degrees of freedom because the D-brane world volume is where the magnetic field \(F\) is confined.

In other words, the star-product-based theory only decouples from the rest of string theory if the open-string scale is parametrically longer than the closed-string scale. And that's why the same star-product isn't relevant for the closed-string modes such as gravity. Or: if you tried to include some "gravitational terms with the star product", you would need to consider all objects with string-scale energies, and the infinite tower of massive string states would be a part of the picture, too.

Whether you learn these lessons from the string theory examples or you derive them purely from "noncommutative field theory consistency considerations", your conclusions will contradict Connes' assumptions. One simply cannot have gravity in these decoupled theories. If your description has gravity, it must have everything. At the end, you could relate this conclusion with the "weak gravity conjecture", too. Gravity is the weakest force so once your theory of elementary building blocks of Nature starts to be sensitive to it, you must already be sensitive to everything else. Alternatively, you may say that gravity admits black holes that evaporate and they may emit any particle as the Hawking radiation – any particle in any stage of a microscopic phenomenon that is allowed in Nature. So there's no way to decouple any subset of objects and phenomena.

When I read Connes' papers on these issues, he contradicts insights like that – which seem self-evident to me and probably to most real experts in this part of physics. You know, I would be extremely excited if a totally new way to construct theories or decouple subsets of the dynamics from string theory existed. Except that it doesn't seem to be the case.

In proper string/M-theory, when you actually consistently decouple some subset of the dynamics, it's always near some D-brane or singularity. The decoupling of the low-energy physics on D-branes (which may be a gauge theory on noncommutative spaces) was already mentioned. Cumrun Vafa's F-theory models of particle physics are another related example: one decouples the non-gravitational particle physics near the singularities in the F-theory manifold, basically near the "tips of some cones".

But Connes et al. basically want to have a non-singular compactification without branes and they still want to claim that they may decouple some ordinary standard-model-like physics from everything else – like the excited strings or (even if you decided that those don't exist) the black hole microstates which surely have to exist. But that's almost certainly not possible. I don't have a totally rock-solid proof but it seems to follow from what we know from many lines of research and it's a good enough reason to ignore Connes' research direction as a wrong one unless he finds something that is really nontrivial, which he hasn't done yet.

Again, I want to mention the gap between the "physical beef" and "artefacts of formalism". The physical beef includes things like the global symmetries of a physical theory. The artefacts of formalism include things like "whether some classical Lagrangian may be written using some particular star-product". Connes et al. just seem to be extremely focused on the latter, the details of the formalism. They just don't think as physicists.

You know, as we have learned especially in the recent 100 years, a physical theory may often be written in very many different ways that are ultimately equivalent. Quantum mechanics was first found as Heisenberg's "matrix mechanics" which turned into the Heisenberg picture and later as "wave mechanics" which became Schrödinger's picture. Dirac pointed out that a compromise, the interaction/Dirac picture, always exists. Feynman added his path integral approach later, it's really another picture. The equivalence of those pictures was proven soon.

For particular quantum field theories and vacua of string/M-theory, people found dualities, especially in recent 25 years: string-string duality, IIA/M, heterotic/M, S-dualities, T-dualities, U-dualities, mirror symmetry, AdS/CFT, ER=EPR, and others. The point is that physics that is ultimately the same to the observers who live in that universe may often be written in several or many seemingly very different ways. After all, even the gauge theories on noncommutative spaces are equivalent to gauge theories on commutative spaces – or noncommutative spaces in different dimensions, and so on.

The broader lesson is that the precise formalism you pick simply isn't fundamental. Connes' whole philosophy – and the philosophy of many people who focus on appearances and not the physical substance – is very different. At the end, I think that Connes would agree that he's just constructing something that may be rewritten as quantum field theories. If there's any added value, he just claims to have a gadget that produces the "right" structure of the relevant quantum field theories.

But even if he had some well-defined criterion that divides the "right" and "wrong" Lagrangians of this kind, and I think he simply doesn't have one because there can't be one, why would one really believe the Connes' subset? A theory could be special because it could be written in Connes' form but is that a real virtue or just an irrelevant curiosity? The theory is equally consistent and has equal symmetries etc. as many other theories that cannot be written in the Connes form.

So even if the theories of Connes' type were a well-defined subset of quantum field theories, I think that it would be irrational to dramatically focus on them. It would seem just a little bit more natural to focus on this subset than to focus on quantum field theories all of whose representations have odd dimensions and whose fine-structure constant (measured from low-energy electron-electron scattering) is written using purely odd digits in base 10. ;-) You may perhaps define this subset but why would you believe that belonging to this subset is a "virtue"?

I surely don't believe that "the ability to write something in Connes' form" is an equally motivated "virtue" as an "additional enhanced symmetry" of a theory.

This discussion is a somewhat more specific example of the thinking about the "ultimate principles of physics". In quantum field theory, we sort of know what the principles are. We know what theories we like or consider and why. The quantum field theory principles are constructive. The principles we know in string theory – mostly consistency conditions, unitarity, incorporation of massless spin-two particles (gravitons) – are more bootstrappy and less constructive. We would like to know more constructive principles of string theory that make it more immediately clear why there are 6 maximally decompactified supersymmetric vacua of string/M-theory, and things like that. That's what the constantly tantalizing question "what is string theory" means.

But whenever we describe some string theory vacua in a well-defined quantitative formalism, we basically return to the constructive principles of quantum field theory. Constrain the field/particle content and the symmetries. Some theories – mostly derivable from a Lagrangian and its quantization – obey the conditions. There are parameters you may derive. And some measure on these parameter spaces.

Connes basically wants to add principles such as "a theory may be written using a Lagrangian that may be written in a Connes form". I just don't believe that principles like that matter in Nature because they don't really constrain Nature Herself but only what Nature looks like in a formalism. I simply don't believe that a formalism may be this important in the laws of physics. Nature abhors bureaucracy. She doesn't really care about formalisms and what they look like to those who have to work with them. She doesn't really discriminate against one type of formalisms and She doesn't favor another kind. If She constrains some theories, She has good reasons for that. To focus on a subclass of quantum field theories because they are of the "Connes type" simply isn't a good reason. There isn't any rational justification that the Connesness is an advantage rather than a disadvantage etc.

Even though some of my objections are technical while others are "philosophically emotional" in some way, I am pretty sure that most of the people who have thought about the conceptual questions deeply and successfully basically agree with me. This is also reflected by the fact that Connes' followers are a restricted group and I think that none of them really belongs to the cream of the theoretical high-energy physics community. Because the broader interested public should have some fair idea about what the experts actually think, it seems counterproductive for non-experts like Moldoveanu to write about topics they're not really intellectually prepared for.

Moldoveanu's blog post is an example of a text that makes the readers believe that Connes has found a framework that is about as important, meaningful, and settled as the conventional rules of the model building in quantum field theory or string theory. Except that he hasn't and the opinion that he has is based on low standards and sloppiness. More generally, people are being constantly led to believe that "anything goes". But it's not true that anything goes. The amount of empirical data we have collected and the laws, principles, and patterns we have extracted from them is huge and the viable theories and frameworks are extremely constrained. Almost nothing works.

The principles producing theories that seem to work should be taken very seriously.

by Luboš Motl (noreply@blogger.com) at July 10, 2016 08:08 AM

July 08, 2016

Lubos Motl - string vacua and pheno

Pesticides needed against anti-physics pests
Their activity got too high in the summer

Some three decades ago, mosquitoes looked like a bigger problem in the summer. Their numbers must have dropped, or perhaps I just spend less time at places where they concentrate. The haters of physics have basically hijacked the mosquitoes' Lebensraum, it seems.

The scum stinging fundamental, theoretical, gravitational, and high-energy physics became so aggressive and repetitive that it's no longer possible to even list all the incidents. A week ago, notorious Californian anti-physics instructor Richard Muller – a conman who once pretended to be a climate skeptic although he has always been a fanatical alarmist, a guy who just can't possibly understand that the event horizon is just a coordinate singularity and who thinks it's a religion to demand a physical theory to be compatible with all observations (quantum and gravitational ones), not to mention dozens of other staggering idiocies he has written in recent years – wrote another rant saying that "string theory isn't even a theory".




There's absolutely nothing new about this particular rant – it's the 5000th repetition of the anti-string delusions repeated by dozens of other mental cripples and fraudsters in the recent decade. To make things "cooler", he says that many string theorists would agree with him and to make sure what they would agree with, he promotes both Šmoits' crackpot books at the end as the "recommended reading".

Oh, sure, string theorists would agree with these Šmoitian things. Time for your pills, Mr Muller.




This particular rant has been read by more than 45,000 readers. The number of people indoctrinated with this junk is so high that one should almost start to be afraid to call the string critics vermin on the street (my fear is not this far, however). I am sure that most of them have been gullible imbeciles since the rant was upvoted a whopping 477 times. Every Quora commenter who has had something to do with high brow physics disagrees with Muller but it's only Muller's rant that is visible. Quora labels this Muller as the "most viewed writer in physics". Quora is an anti-civilization force that deserves to be liquidated.

This week, Sabine Hossenfelder wrote a rant claiming that the LHC is a disappointment and naturalness is a delusion. Holy cow, another gem from this Marxist whore. The LHC is a wonderful machine that has already discovered enough to justify its existence and that works perfectly. Lots of the genuine particle physics enthusiasts are excited to follow both papers by the LHC collaboration and the LHC schedules. And while naturalness may look stretched and many people (not myself!) have surely been naive about the direct way how it can imply valid predictions, it is absolutely obvious that to some extent, it will always be viewed as an argument.

It's because theories of Nature simply have to be natural in one way or another. The point is that you can always construct uncountably many unnatural theories that agree with the data. You may always say that some highly fine-tuned God created all the species just like we observe them and all the patterns (and observations that something is much smaller than something else etc.) explained by the natural theories are just coincidences. You can take any valid theory and add 50 new random interactions with very small coupling constants or particles with high masses and claim that your theory is great.

But such unnatural theories are simply no good because it's unlikely for the parameters to have at least qualitatively the right values that are needed for the theories to avoid contradictions with the empirical evidence. At some level, the Bayesian inference that indicates that dimensionless parameters shouldn't be expected to be much smaller than one kicks in. It's the quantitative reason why it's often right to use Occam's razor in our analyses. You know, the future of naturalness is basically analogous to the future of Occam's razor, a related but less specific concept. Some very specific versions of it may be incorrect but the overall paradigm simply can't ever disappear from science.

The right laws of Nature that explain why the Higgs mass is much lighter than the Planck scale may be different than the existing "sketches" of the projects but at the end, these laws are natural. They are colloquially natural which, you might object, is a different word. But in any sufficiently well-defined framework, the colloquial naturalness may be turned into some kind of technical naturalness.
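
To put a number on the kind of tuning being discussed, here is the standard back-of-the-envelope estimate – a hedged illustration with textbook inputs, not an argument taken from Hossenfelder's post or any specific model: if the Higgs mass-squared receives corrections of order a heavy scale \(\Lambda^2\), the underlying parameters must cancel to a fractional accuracy of roughly \((m_h/\Lambda)^2\).

```python
# Required cancellation ~ (m_h / Lambda)^2 for a few illustrative cutoffs.
m_h = 125.0                            # observed Higgs mass in GeV
for Lambda in [1e3, 1e16, 1.2e19]:     # ~TeV, GUT scale, Planck scale (GeV)
    print(f"Lambda = {Lambda:.1e} GeV -> cancellation ~ {(m_h / Lambda)**2:.1e}")
# Lambda = 1.0e+03 GeV -> cancellation ~ 1.6e-02
# Lambda = 1.0e+16 GeV -> cancellation ~ 1.6e-28
# Lambda = 1.2e+19 GeV -> cancellation ~ 1.1e-34
```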

To demand that naturalness is abandoned or banned altogether means to demand that people no longer think rationally. Ms Hossenfelder is just absolutely missing the point of science. Her last paragraph says:
It’s somewhat of a mystery to me why naturalness has become so popular in theoretical high energy physics. I’m happy to see it go out of the window now. Keep your eyes open in the next couple of years and you’ll witness that turning point in the history of science when theoretical physicists stopped dictating nature what’s supposedly natural.
But naturalness isn't going "out of the window" (just look at recent papers with naturalness in the title) and physicists who complain that a theory is unnatural aren't dictating anything to Nature. Instead, they complain against the broader theory. A theory that is fine-tuned and doesn't have any explanation for the fine-tuning is either wrong or missing the explanation of an important thing – it fails to see even the sketch of it. You know, when a theory disagrees with the data "slightly" or in a "detail", something may be "slightly" wrong about the theory or a "detail" may be incorrect. But when a theory makes a parametrically wrong estimate for its parameters, something bigger must be wrong or missing about the theory. Naturalness doesn't say anything else than this trivial fact that simply can't be wrong because in its general form, it's the basis of all rational thinking. Theories in physics will always have to be natural with some interpretation of the probabilistic distributions for the parameters.

The anthropic principle is sometimes quoted as an "alternative to naturalness". Even if this principle could be considered a replacement of this kind of thinking at all, and I am confident that it's right to say that there's no version of it that could be claimed to achieve this goal at this moment, it would still imply some naturalness. The anthropic principle, if it became well-defined enough to be considered a part of physicists' thinking, would just give us different estimates for the parameters or probability distributions for them. But it would still produce some estimates or distributions and we would still distinguish natural and unnatural theories.

And it goes on and on and on. Yesterday, Ethan Siegel wrote a Forbes rant claiming that grand unification may be a dead end in physics. Siegel is OK when he writes texts about Earth's being round or similar things but hey, this guy simply must realize that he is absolutely out of his league when it comes to the cutting-edge fundamental physics. Everything he has written about it has always suffered from some absolutely lethal problems and this new rant is no exception.

He uses Garrett Lisi's childish picture to "visualize" the Georgi-Glashow grand unified \(SU(5)\) model (projection of some weights of the fermion reps on a 2D subspace of the Cartan subalgebra). I don't think that this unusual picture is useful for anything (perhaps to incorrectly claim that some \(\ZZ_6\) symmetry is what grand unification is all about) but yes, there are much more severe problems with Siegel's text.

When he starts to enumerate "problems" of grand unified theories, he turns into a full-fledged zombie crackpot:
But there are some big problems with these ideas, too. For one, the new particles that were predicted were of hopelessly high energies: around \(10^{15}\) to \(10^{16}\GeV\), or trillions of times the energies the LHC produces.
What? How can someone call it a "problem" in the sense of "bad news"? The scale at which the couplings unify is whatever it is. If it is \(10^{16}\GeV\), then it is a fact, not a "problem". A person who calls one number a "problem" unmasks that he is a prejudiced aßhole. He simply prefers one number over another without any evidence that would discriminate the possibilities – something that an honest scientist simply cannot ever do.
For another, almost all of the GUTs you can design lead to particles undergoing flavor-changing-neutral-currents, which are certain types of decays forbidden in the Standard Model and never observed in nature.
Great. But it's true for any generic enough theory of new physics, too. Clearly, Nature isn't generic in this sense. But there exist grand unified theories in which all these unwanted effects are suppressed in a technically natural way and that's everything that's needed to say that "everything is fine with the broader GUT paradigm at this moment". Similarly, the proton decay is acceptably slow in some classes of grand unified theories that are as fine as the Georgi-Glashow model.

But the most staggering technical stupidity on grand unified theories that Siegel wrote was one about the unification of the couplings:
The single “point” that the three forces almost meet at only looks like a point on a logarithmic scale, when you zoom out. But so do any three mutually non-parallel lines; you can try it for yourself by drawing three line segments, extending them in both directions until they all intersect and then zooming out.
What? Every three straight lines intersect in one point? Are you joking or are you high?



If you draw three generic straight lines A,B,C in a plane, the pairs intersect at points AB, BC, CA, but there is no intersection of all three lines ABC. Instead, there is a triangle inside. That's the left picture. On the other hand, for a special slope of the third line C – one real number has to be adjusted – the intersection BC may happen to coincide with the intersection AB, and when it's so, the intersection of CA coincides with this point, too: all three lines intersect at one point. That's the right picture. The triangle shrinks to zero area, to a point.

This outcome is in no way guaranteed. It's infinitely unlikely for three lines in a plane to intersect at one point. The small likelihood of this small miracle is roughly equal to the ratio of the actual precision (i.e. the longest side of the triangle) we get over the characteristic precision (the size of the triangle) we expected. The fact that the unification (intersection at a point) happens in a large subset of "morally simple enough" grand unified theories with a certain precision is a nontrivial successful test of these theories' viability. It doesn't prove that the 3 forces get unified (because the precision we can prove isn't "overwhelming") but it's not something that may be denied, either.
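
For readers who want to see the triangle-versus-point statement quantified, here is a minimal one-loop sketch. The inputs are rounded and the coefficients are the standard one-loop ones; serious analyses use two-loop running and threshold corrections, so treat the numbers as illustrative only.

```python
# One-loop running: 1/alpha_i(mu) = 1/alpha_i(M_Z) - b_i/(2*pi) * ln(mu/M_Z),
# with GUT-normalized U(1)_Y.  Rough input values at M_Z.
import math

MZ = 91.19                        # GeV
alpha_inv_MZ = [59.0, 29.6, 8.5]  # 1/alpha_1, 1/alpha_2, 1/alpha_3
b_SM   = [41/10, -19/6, -7]       # Standard Model one-loop coefficients
b_MSSM = [33/5, 1, -3]            # MSSM one-loop coefficients

def run(alpha_inv, b, mu):
    t = math.log(mu / MZ)
    return [a - bi / (2 * math.pi) * t for a, bi in zip(alpha_inv, b)]

for mu in [1e13, 1e14, 1e15, 1e16, 1e17]:
    print(f"SM, mu = {mu:.0e} GeV: " +
          ", ".join(f"{a:5.1f}" for a in run(alpha_inv_MZ, b_SM, mu)))
print("MSSM (used naively from M_Z), mu = 2e16 GeV:",
      [round(a, 1) for a in run(alpha_inv_MZ, b_MSSM, 2e16)])
# In the SM the three pairwise crossings are spread over roughly
# 10^13 to 10^17 GeV -- that's the "triangle" on a log scale.  With the
# MSSM coefficients the three inverse couplings all come out near 24.3
# at about 2*10^16 GeV, i.e. the triangle nearly collapses to a point.
```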

How can Ethan Siegel misunderstand the difference between 3 lines intersecting and non-intersecting? I think that every layman who has failed to understand this simple point after reading a popular book on particle physics has failed miserably. Siegel just doesn't make it even to an average reader of popular physics books.

And the incredible statements are added all the time:
The small-but-nonzero masses for neutrinos can be explained by any see-saw mechanism and/or by the MNS matrix; there’s nothing special about the one arising from GUTs.
One can get neutrino masses of a reasonable magnitude from any physics at the right scale but the scale has to be near the GUT scale. Funnily enough, it's the scale \(10^{15}\) to \(10^{16}\GeV\) that Siegel previously called a "problem". Except that this value disfavored by Siegel is favored by the neutrino masses. It's the scale where one expects the new physics responsible for the neutrino masses and nontrivially, it's approximately the same scale as the scale where the unification of the couplings demonstrably takes place (according to a calculation).
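
For the record, the estimate behind that statement fits on one line. Here is a hedged order-of-magnitude sketch (the Dirac mass is taken at the electroweak scale; Yukawa couplings and mixings are ignored):

```python
# Seesaw estimate: m_nu ~ v^2 / M for a heavy scale M near the GUT scale.
v = 174.0                         # electroweak vev in GeV
for M in [1e14, 1e15, 1e16]:      # heavy-neutrino scale in GeV
    m_nu_eV = v**2 / M * 1e9      # convert GeV to eV
    print(f"M = {M:.0e} GeV -> m_nu ~ {m_nu_eV:.3f} eV")
# M = 1e+14 GeV -> m_nu ~ 0.303 eV
# M = 1e+15 GeV -> m_nu ~ 0.030 eV
# M = 1e+16 GeV -> m_nu ~ 0.003 eV
# to be compared with sqrt(Delta m^2_atm) ~ 0.05 eV from oscillation data.
```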

So that's another piece of evidence for the picture – that something is taking place at the scale and the something is rather likely to include the unification of non-gravitational forces. Moreover, it's somewhat beneath the Planck scale where gravity is added to the complete unification and it's arguably a good thing: the non-gravitational forces shouldn't split "unnaturally too low" beneath the truly fundamental, Planck scale.

It's the conventional picture which is still arguably most convincing and likely: the true unification of 4 forces occurs close enough to the Planck scale as calculated by Planck and at energies lower by some 2-3 orders of magnitude, the GUT force splits to the electroweak and the strong one. The electroweak force splits to the electromagnetic and weak force at the LHC Higgs scale. This old picture isn't "a demonstrated scientific fact" but it sounds very convincing and as long as we live in a civilized society, you can't just "ban" it or try to harass the people who think that it's the most persuasive scenario – which includes most of the top particle physicists, I am pretty sure about it.

This aßhole just fails to understand all these basic things and he sells this embarrassing ignorance as if it were a virtue. At the very end, we read:
There’s no compelling reason to think grand unification is anything other than a theoretical curiosity and a physical dead-end.
A more accurate formulation is that Mr Siegel doesn't want to see any arguments in favor of grand unification because he is a dishonest and/or totally stupid prejudiced and demagogic crackpot. But I guess that Siegel's own formulation, while totally untrue, sounds fancier to his brainwashed readers.

The number of individuals just like him has grown astronomically, and they produce their lies on a daily basis while facing almost no genuine opposition.

by Luboš Motl (noreply@blogger.com) at July 08, 2016 03:03 PM

July 07, 2016

Matt Strassler - Of Particular Significance

Spinoffs from Fundamental Science

I find that some people just don’t believe scientists when we point out that fundamental research has spin-off benefits for modern society.  The assumption often seems to be that it’s just a bunch of egghead esoteric researchers trying to justify their existence.  It’s a real problem when those scoffing at our evidence are congresspeople of the United States and their staffers, or other members of governmental funding agencies around the world.

So I thought I’d point out an example, reported on Bloomberg News.  It’s a good illustration of how these things often work out, and it is very rare indeed that they are discussed in the press.

Gravitational waves are usually incredibly tiny effects [typically squeezing the radius of our planet by less than the width of an atomic nucleus] that can be made only with monster black holes and neutron stars.   There’s not much hope of using them in technology.  So what good could an experiment to discover them, such as LIGO, possibly be for the rest of the world?

Well, Shell Oil seems to have found some value in it.   It’s not in the gravitational waves themselves, of course; instead, it is in the technology that has to be developed to detect something so delicate.   http://www.bloomberg.com/news/articles/2016-07-07/shell-is-using-innoseis-s-sensors-to-detect-gravitational-waves

Score another one for investment in fundamental scientific research.

 


Filed under: Gravitational Waves, Science and Modern Society Tagged: LIGO, Spinoffs

by Matt Strassler at July 07, 2016 08:38 PM

Clifford V. Johnson - Asymptotia

Kill Your Darlings…

[Image: dialogues_process_share_7-7-16] (Apparently I spent a lot of time cross-hatching, back in 2010-2012? More on this below. Click for larger view.)

I've changed locations, have several physics research tasks to work on, and so my usual work flow is not going to be appropriate for the next couple of weeks, so I thought I'd work on a different aspect of the book project. I'm well into the "one full page per day for the rest of the year to stay on target" part of the calendar and there's good news and bad news. On the good news side, I've refined my workflow a lot, and devised new ways of achieving various technical tasks too numerous (and probably boring) to mention, and so I've actually got [...] Click to continue reading this post

The post Kill Your Darlings… appeared first on Asymptotia.

by Clifford at July 07, 2016 08:24 PM

John Baez - Azimuth

Large Countable Ordinals (Part 3)

Last time we saw why it’s devilishly hard to give names to large countable ordinals.

An obvious strategy is to make up a function f from ordinals to ordinals that grows really fast, so that f(x) is a lot bigger than the ordinal x indexing it. This is indeed a good idea. But something funny tends to happen! Eventually x catches up with f(x). In other words, you eventually hit a solution of

x = f(x)

This is called a fixed point of f. At this point, there’s no way to use f(x) as a name for x unless you already have a name for x. So, your scheme fizzles out!

For example, we started by looking at powers of \omega, the smallest infinite ordinal. But eventually we ran into ordinals x that obey

x = \omega^x

There’s an obvious work-around: we make up a new name for ordinals x that obey

x = \omega^x

We call them epsilon numbers. In our usual nerdy way we start counting at zero, so we call the smallest solution of this equation \epsilon_0, and the next one \epsilon_1, and so on.

But eventually we run into ordinals x that are fixed points of the function \epsilon_x, meaning that

x = \epsilon_x

There’s an obvious work-around: we make up a new name for ordinals x that obey

x = \epsilon_x

But by now you can guess that this problem will keep happening, so we’d better get systematic about making up new names! We should let

\phi_0(\alpha) = \omega^\alpha

and let \phi_{n+1}(\alpha) be the \alphath fixed point of \phi_n.

Oswald Veblen, a mathematician at Princeton, came up with this idea around 1908, based on some thoughts of G. H. Hardy:

• Oswald Veblen, Continuous increasing functions of finite and transfinite ordinals, Trans. Amer. Math. Soc. 9 (1908), 280–292.

He figured out how to define \phi_\gamma(\alpha) even when the index \gamma is infinite.

Last time we saw how to name a lot of countable ordinals using this idea: in fact, all ordinals less than the ‘Feferman–Schütte ordinal’. This time I want go further, still using Veblen’s work.

First, however, I feel an urge to explain things a bit more precisely.

Veblen’s fixed point theorem

There are three kinds of ordinals. The first is a successor ordinal, which is one more than some other ordinal. So, we say \alpha is a successor ordinal if

\alpha = \beta + 1

for some \beta. The second is 0, which is not a successor ordinal. And the third is a limit ordinal, which is neither 0 nor a successor ordinal. The smallest example is

\omega = \{0, 1, 2, 3, \dots \}

Every limit ordinal is the ‘limit’ of ordinals less than it. What does that mean, exactly? Remember, each ordinal is a set: the set of all smaller ordinals. We can define the limit of a set of ordinals to be the union of that set. Alternatively, it’s the smallest ordinal that’s greater than or equal to every ordinal in that set.

Now for Veblen’s key idea:

Veblen’s Fixed Point Theorem. Suppose a function f from ordinals to ordinals is:

strictly increasing: if x < y then f(x) < f(y)

and

continuous: if x is a limit ordinal, f(x) is the limit of the ordinals f(\alpha) where \alpha < x.

Then f must have a fixed point.

Why? For starters, we always have this fact:

x \le f(x)

After all, if this weren’t true, there’d be a smallest x with the property that f(x) < x, since every nonempty set of ordinals has a smallest element. But since f is strictly increasing,

f(f(x)) < f(x)

so f(x) would be an even smaller ordinal with this property. Contradiction!

Using this fact repeatedly, we get

0 \le f(0) \le f(f(0)) \le \cdots

Let \alpha be the limit of the ordinals

0, f(0), f(f(0)), \dots

Then by continuity, f(\alpha) is the limit of the sequence

f(0), f(f(0)), f(f(f(0))),\dots

So f(\alpha) equals \alpha. Voilà! A fixed point!

This construction gives the smallest fixed point of f. There are infinitely many more, since we can start not with 0 but with \alpha+1 and repeat the same argument, etc. Indeed if we try to list these fixed points, we find there is one for each ordinal.
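
If you like to see this sort of construction in code, here is a toy sketch of my own (not part of Veblen's paper) of the iteration 0, f(0), f(f(0)), \dots for the concrete normal function f(x) = \omega^x, with ordinals below \epsilon_0 written in Cantor normal form:

```python
# Ordinals below epsilon_0 in Cantor normal form: a tuple of
# (exponent, coefficient) pairs with strictly decreasing exponents,
# where each exponent is again such a tuple and 0 is the empty tuple.
ZERO = ()

def omega_power(a):
    """The ordinal omega**a, as a single CNF term with coefficient 1."""
    return ((a, 1),)

def less(a, b):
    """Is the CNF ordinal a strictly smaller than b?"""
    for (e1, c1), (e2, c2) in zip(a, b):
        if e1 != e2:
            return less(e1, e2)
        if c1 != c2:
            return c1 < c2
    return len(a) < len(b)

def show(a):
    if a == ZERO:
        return "0"
    pieces = []
    for e, c in a:
        if e == ZERO:
            pieces.append(str(c))
        else:
            base = "ω" if e == omega_power(ZERO) else f"ω^({show(e)})"
            pieces.append(base if c == 1 else f"{base}·{c}")
    return " + ".join(pieces)

# Iterate 0, f(0), f(f(0)), ... for f(x) = omega**x:
x = ZERO
for step in range(5):
    assert less(x, omega_power(x))   # x < f(x): no fixed point reached yet
    print(step, show(x))
    x = omega_power(x)
# prints 0, 1, ω, ω^(ω), ω^(ω^(ω)); the limit of this tower is the first
# fixed point of f, namely epsilon_0, which the notation cannot reach in
# finitely many steps -- exactly the situation described above.
```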

So, we can make up a new function that lists these fixed points. Just to be cute, people call this the derivative of f, so that f'(\alpha) is the \alphath fixed point of f. Beware: while the derivative of a polynomial grows more slowly than the original polynomial, the derivative of a continuous increasing function f from ordinals to ordinals generally grows more quickly than f. It doesn’t really act like a derivative; people just call it that.

Veblen proved another nice theorem:

Theorem. If f is a continuous and strictly increasing function from ordinals to ordinals, so is f'.

So, we can take the derivative repeatedly! This is the key to the Veblen hierarchy.

If you want to read more about this, it helps to know that a function from ordinals to ordinals that’s continuous and strictly increasing is called normal. ‘Normal’ is an adjective that mathematicians use when they haven’t had enough coffee in the morning and aren’t feeling creative—it means a thousand different things. In this case, a better term would be ‘differentiable’.

Armed with that buzzword, you can try this:

• Wikipedia, Fixed-point lemma for normal functions.

Okay, enough theory. On to larger ordinals!

The Feferman–Schütte barrier

First let’s summarize how far we got last time, and why we got stuck. We inductively defined the \alphath ordinal of the \gammath kind by:

\phi_0(\alpha) = \omega^\alpha

and

\phi_{\gamma+1}(\alpha) = \phi'_\gamma(\alpha)

meaning that \phi_{\gamma+1}(\alpha) is the \alphath fixed point of \phi_\gamma.

This handles the cases where \gamma is zero or a successor ordinal. When \gamma is a limit ordinal we let \phi_{\gamma}(\alpha) be the \alphath ordinal that’s a fixed point of all the functions \phi_\beta for \beta < \gamma.

Last time I explained how these functions \phi_\gamma give a nice notation for ordinals less than the Feferman–Schütte ordinal, which is also called \Gamma_0. This ordinal is the smallest solution of

x = \phi_x(0)

So it’s a fixed point, but of a new kind, because now the x appears as a subscript of the \phi function.

We can get our hands on the Feferman–Schütte ordinal by taking the limit of the ordinals

\phi_0(0), \; \phi_{\phi_0(0)}(0) , \; \phi_{\phi_{\phi_0(0)}(0)}(0), \dots

(If you’re wondering why we use the number 0 here, instead of some other ordinal, I believe the answer is: it doesn’t really matter, we would get the same result if we used any ordinal less than the Feferman–Schütte ordinal.)

The ‘Feferman–Schütte barrier’ is the combination of these two facts:

• On the one hand, every ordinal \beta less than \Gamma_0 can be written as a finite sum of guys \phi_\gamma(\alpha) where \alpha and \gamma are even smaller than \beta. Using this fact repeatedly, we can get a finite expression for any ordinal less than the Feferman–Schütte ordinal in terms of the \phi function, addition, and the ordinal 0.

• On the other hand, if \alpha and \gamma are less than \Gamma_0 then \phi_\gamma(\alpha) is less than \Gamma_0. So we can’t use the \phi function to name the Feferman–Schütte ordinal in terms of smaller ordinals.

But now let’s break the Feferman–Schütte barrier and reach some bigger countable ordinas!

The Γ function

The function \phi_x(0) is strictly increasing and continuous as a function of x. So, using Veblen’s theorems, we can define \Gamma_\alpha to be the \alphath solution of

x = \phi_x(0)

We can then define a bunch of enormous countable ordinals:

\Gamma_0, \Gamma_1, \Gamma_2, \dots

and still bigger ones:

\Gamma_\omega, \; \Gamma_{\omega^2}, \; \Gamma_{\omega^3} , \dots

and even bigger ones:

\Gamma_{\omega^\omega}, \; \Gamma_{\omega^{\omega^\omega}}, \; \Gamma_{\omega^{\omega^{\omega^\omega}}}, \dots

and even bigger ones:

\Gamma_{\epsilon_0}, \Gamma_{\epsilon_1}, \Gamma_{\epsilon_2}, \dots

But since \epsilon_\alpha is just \phi_1(\alpha), we can reach much bigger countable ordinals with the help of the \phi function:

\Gamma_{\phi_2(0)}, \; \Gamma_{\phi_3(0)}, \; \Gamma_{\phi_4(0)}, \dots

and we can do vastly better using the \Gamma function itself:

\Gamma_{\Gamma_0}, \Gamma_{\Gamma_{\Gamma_0}}, \Gamma_{\Gamma_{\Gamma_{\Gamma_0}}} , \dots

The limit of all these is the smallest solution of

x = \Gamma_x

As usual, this ordinal is still countable, but there’s no way to express it in terms of the \Gamma function and smaller ordinals. So we are stuck again.

In short: we got past the Feferman–Schütte barrier by introducing a name for the \alphath solution of x = \phi_x(0). We called it \Gamma_\alpha. This made us happy for about two minutes…

…. but then we ran into another barrier of the same kind.

So what we really need is a more general notation: one that gets us over not just this particular bump in the road, but all bumps of this kind! We don’t want to keep randomly choosing goofy new letters like \Gamma. We need something systematic.

The multi-variable Veblen hierarchy

We were actually doing pretty well with the \phi function. It was nice and systematic. It just wasn’t powerful enough. But if you’re trying to keep track of how far you’re driving on a really long trip, you want an odometer with more digits. So, let’s try that.

In other words, let’s generalize the \phi function to allow more subscripts. Let’s rename \Gamma_\alpha and call it \phi_{1,0}(\alpha). The fact that we’re using two subscripts says that we’re going beyond the old \phi functions with just one subscript. The subscripts 1 and 0 should remind you of what happens when you drive more than 9 miles: if your odometer has two digits, it’ll say you’re on mile 10.

Now we proceed as before: we make up new functions, each of which enumerates the fixed points of the previous one:

\phi_{1,1} = \phi'_{1,0}
\phi_{1,2} = \phi'_{1,1}
\phi_{1,3} = \phi'_{1,2}

and so on. In general, we let

\phi_{1,\gamma+1} = \phi'_{1,\gamma}

and when \gamma is a limit ordinal, we let

\displaystyle{ \phi_{1,\gamma}(\alpha) = \lim_{\beta \to \gamma} \phi_{1,\beta}(\alpha) }

Are you confused?

How could you possibly be confused???

Okay, maybe an example will help. In the last section, our notation fizzled out when we took the limit of these ordinals:

\Gamma_{\Gamma_0}, \Gamma_{\Gamma_{\Gamma_0}}, \Gamma_{\Gamma_{\Gamma_{\Gamma_0}}} , \dots

The limit of these is the smallest solution of x = \Gamma_x. But now we’re writing \Gamma_x = \phi_{1,0}(x), so this limit is the smallest fixed point of \phi_{1,0}. So, it’s \phi_{1,1}(0).

We can now ride happily into the sunset, defining \phi_{1,\gamma}(\alpha) for all ordinals \alpha, \gamma. Of course, this will never give us a notation for ordinals with

x = \phi_{1,x}(0)

But we don’t let that stop us! This is where the new extra subscript really comes in handy. We now define \phi_{2,0}(\alpha) to be the \alphath solution of

x = \phi_{1,x}(0)

Then we drive on as before. We let

\phi_{2,\gamma+1} = \phi'_{2,\gamma}

and when \gamma is a limit ordinal, we say

\displaystyle{ \phi_{2,\gamma}(\alpha) = \lim_{\beta \to \gamma} \phi_{2,\beta}(\alpha) }

I hope you get the idea. Keep doing this!

We can inductively define \phi_{\beta,\gamma}(\alpha) for all \alpha, \beta and \gamma. Of course, these functions will never give a notation for solutions of

x = \phi_{x,0}(0)

To describe these, we need a function with one more subscript! So let \phi_{1,0,0}(\alpha) be the \alphath solution of

x = \phi_{x,0}(0)

We can then proceed on and on and on, adding extra subscripts as needed.

This is called the multi-variable Veblen hierarchy.

Examples

To help you understand the multi-variable Veblen hierarchy, I’ll use it to describe lots of ordinals. Some are old friends. Starting with finite ones, we have:

\phi_0(0) = 1

\phi_0(0) + \phi_0(0) = 2

and so on, so we don’t need separate names for natural numbers… but I’ll use them just to save space.

\phi_0(1) = \omega

\phi_0(2) = \omega^2

and so on, so we don’t need separate names for \omega and its powers, but I’ll use them just to save space.

\phi_0(\omega) = \omega^\omega

\phi_0(\omega^\omega) = \omega^{\omega^\omega}

\phi_1(0) = \epsilon_0

\phi_1(1) = \epsilon_1

\displaystyle{ \phi_1(\phi_1(0)) = \epsilon_{\epsilon_0} }

\phi_2(0) = \zeta_0

\phi_2(1) = \zeta_1

where I should remind you that \zeta_\alpha is a name for the \alphath solution of x = \epsilon_x.

\phi_{1,0}(0) = \Gamma_0

\phi_{1,0}(1) = \Gamma_1

\displaystyle{ \phi_{1,0}(\phi_{1,0}(0)) = \Gamma_{\Gamma_0} }

\phi_{1,1}(0) is the limit of \Gamma_{\Gamma_0}, \Gamma_{\Gamma_{\Gamma_0}}, \Gamma_{\Gamma_{\Gamma_{\Gamma_0}}} , \dots

\phi_{1,0,0}(0) is called the Ackermann ordinal.

Apparently Wilhelm Ackermann, the logician who invented a very fast-growing function called Ackermann’s function, had a system for naming ordinals that fizzled out at this ordinal.

The small Veblen ordinal

There are obviously lots more ordinals that can be described using the multi-variable Veblen hierarchy, but I don’t have anything interesting to say about them. And you’re probably more interested in this question: what’s next?

The limit of these ordinals

\phi_1(0), \; \phi_{1,0}(0), \; \phi_{1,0,0}(0), \dots

is called the small Veblen ordinal. Yet again, it’s a countable ordinal. It’s the smallest ordinal that cannot be named in terms of smaller ordinals using the multi-variable Veblen hierarchy…. at least, not the version I described. And here’s a nice fact:

Theorem. Every ordinal \beta less than the small Veblen ordinal can be written as a finite expression in terms of the multi-variable \phi function, addition, and 0.

For example,

\Gamma_0 + \epsilon_{\epsilon_0} + \omega^\omega + 2

is equal to

\displaystyle{  \phi_{\phi_0(0),0}(0) + \phi_{\phi_0(0)}(\phi_{\phi_0(0)}(0)) +  \phi_0(\phi_0(\phi_0(0))) + \phi_0(0) + \phi_0(0)  }

On the one hand, this notation is quite tiresome to read. On the other hand, it’s amazing that it gets us so far!

Furthermore, if you stare at expressions like the above one for a while, and think about them abstractly, they should start looking like trees. So you should find it easy to believe that ordinals less than the small Veblen ordinal correspond to trees, perhaps labelled in some way.

Indeed, this paper describes a correspondence of this sort:

• Herman Ruge Jervell, Finite trees as ordinals, in New Computational Paradigms, Lecture Notes in Computer Science 3526, Springer, Berlin, 2005, pp. 211–220.

However, I don’t think his idea is quite same as what you’d come up with by staring at expressions like

\displaystyle{  \phi_{\phi_0(0),0}(0) + \phi_{\phi_0(0)}(\phi_{\phi_0(0)}(0)) +  \phi_0(\phi_0(\phi_0(0))) + \phi_0(0) + \phi_0(0)  }
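
If you want to manipulate such expressions concretely, here is a naive toy sketch (my own illustration, and not the same as Jervell's encoding) that represents them as finite trees built from 0, addition and the multi-variable \phi function, and prints the example above:

```python
# Finite-tree terms for the multi-variable Veblen notation: a term is the
# integer 0, a sum ("+", t1, ..., tn), or a Phi node carrying a tuple of
# subscript terms and an argument term.
from dataclasses import dataclass

@dataclass(frozen=True)
class Phi:
    subscripts: tuple   # the subscripts gamma_1, ..., gamma_n, each a term
    argument: object    # the argument alpha, a term

def Sum(*terms):
    return ("+",) + terms

def show(t):
    if t == 0:
        return "0"
    if isinstance(t, tuple) and t[0] == "+":
        return " + ".join(show(s) for s in t[1:])
    subs = ",".join(show(s) for s in t.subscripts)
    return f"phi_{{{subs}}}({show(t.argument)})"

one      = Phi((0,), 0)          # phi_0(0) = 1
omega    = Phi((0,), one)        # phi_0(1) = omega
omega_om = Phi((0,), omega)      # phi_0(omega) = omega^omega
eps0     = Phi((one,), 0)        # phi_1(0) = epsilon_0
eps_eps0 = Phi((one,), eps0)     # phi_1(epsilon_0) = epsilon_{epsilon_0}
gamma0   = Phi((one, 0), 0)      # phi_{1,0}(0) = Gamma_0

# Gamma_0 + epsilon_{epsilon_0} + omega^omega + 2:
print(show(Sum(gamma0, eps_eps0, omega_om, one, one)))
# phi_{phi_{0}(0),0}(0) + phi_{phi_{0}(0)}(phi_{phi_{0}(0)}(0))
#   + phi_{0}(phi_{0}(phi_{0}(0))) + phi_{0}(0) + phi_{0}(0)
```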

Beyond the small Veblen ordinal

We’re not quite done yet. The modifier ‘small’ in the term ‘small Veblen ordinal’ should make you suspect that there’s more in Veblen’s paper. And indeed there is!

Veblen actually extended his multi-variable function \phi_{\gamma_1, \dots, \gamma_n}(\alpha) to the case where there are infinitely many variables. He requires that all but finitely many of these variables equal zero, to keep things under control. Using this, one can set up a notation for even bigger countable ordinals! This notation works for all ordinals less than the large Veblen ordinal.

We don’t need to stop here. The large Veblen ordinal is just the first of a new series of even larger countable ordinals!

These can again be defined as fixed points. Yes: it's déjà vu all over again. But around here, people usually switch to a new method for naming these fixed points, called 'ordinal collapsing functions'. One interesting thing about this notation is that it makes use of an uncountable ordinal. The first uncountable ordinal is called \Omega, and it dwarfs all those we've seen here.

We can use the ordinal collapsing function \psi to name many of our favorite countable ordinals, and more:

\psi(\Omega) is \zeta_0, the smallest solution of x = \epsilon_x.

\psi(\Omega^\Omega) is \Gamma_0, the Feferman–Schütte ordinal.

\psi(\Omega^{\Omega^2}) is the Ackermann ordinal.

\psi(\Omega^{\Omega^\omega}) is the small Veblen ordinal.

\psi(\Omega^{\Omega^\Omega}) is the large Veblen ordinal.

\psi(\epsilon_{\Omega+1}) is called the Bachmann–Howard ordinal. This is the limit of the ordinals

\psi(\Omega), \psi(\Omega^\Omega), \psi(\Omega^{\Omega^\Omega}), \dots

I won’t explain this now. Maybe later! But not tonight. As Bilbo Baggins said:

The Road goes ever on and on
Out from the door where it began.
Now far ahead the Road has gone,
Let others follow it who can!
Let them a journey new begin,
But I at last with weary feet
Will turn towards the lighted inn,
My evening-rest and sleep to meet.

For more

But perhaps you’re impatient and want to begin a new journey now!

The people who study notations for very large countable ordinals tend to work on proof theory, because these ordinals have nice applications to that branch of logic. For example, Peano arithmetic is powerful enough to work with ordinals up to but not including \epsilon_0, so we call \epsilon_0 the proof-theoretic ordinal of Peano arithmetic. Stronger axiom systems have bigger proof-theoretic ordinals.

Unfortunately this makes it a bit hard to learn about large countable ordinals without learning, or at least bumping into, a lot of proof theory. And this subject, while interesting in principle, is quite tough. So it’s hard to find a readable introduction to large countable ordinals.

The bibliography of the Wikipedia article on large countable ordinals gives this half-hearted recommendation:

Wolfram Pohlers, Proof theory, Springer 1989 ISBN 0-387-51842-8 (for Veblen hierarchy and some impredicative ordinals). This is probably the most readable book on large countable ordinals (which is not saying much).

Unfortunately, Pohlers does not seem to give a detailed account of ordinal collapsing functions. If you want to read something fun that goes further than my posts so far, try this:

• Hilbert Levitz, Transfinite ordinals and their notations: for the uninitiated.

(Anyone whose first name is Hilbert must be born to do logic!)

This is both systematic and clear:

• Wikipedia, Ordinal collapsing functions.

And if you want to explore countable ordinals using a computer program, try this:

• Paul Budnik, Ordinal calculator and research tool.

Among other things, this calculator can add, multiply and exponentiate ordinals described using the multi-variable Veblen hierarchy—even the version with infinitely many variables!


by John Baez at July 07, 2016 01:00 AM

July 06, 2016

Symmetrybreaking - Fermilab/SLAC

Scientists salvage insights from lost satellite

Before Hitomi died, it sent X-ray data that could explain why galaxy clusters form far fewer stars than expected.

Working with information sent from the Japanese Hitomi satellite, an international team of researchers has obtained the first views of a supermassive black hole stirring hot gas at the heart of a galaxy cluster. These motions could explain why galaxy clusters form far fewer stars than expected.

The data, published today in Nature, were recorded with the X-ray satellite during its first month in space earlier this year, just before it spun out of control and disintegrated due to a chain of technical malfunctions.

“Being able to measure gas motions is a major advance in understanding the dynamic behavior of galaxy clusters and its ties to cosmic evolution,” said study co-author Irina Zhuravleva, a postdoctoral researcher at the Kavli Institute for Particle Astrophysics and Cosmology. “Although the Hitomi mission ended tragically after a very short period of time, it’s fair to say that it has opened a new chapter in X-ray astronomy.” KIPAC is a joint institute of Stanford University and the Department of Energy’s SLAC National Accelerator Laboratory.

Galaxy clusters, which consist of hundreds to thousands of individual galaxies held together by gravity, also contain large amounts of gas. Over time, the gas should cool down and clump together to form stars. Yet there is very little star formation in galaxy clusters, and until now scientists were not sure why.

“We already knew that supermassive black holes, which are found at the center of all galaxy clusters and are tens of billions of times more massive than the sun, could play a major role in keeping the gas from cooling by somehow injecting energy into it,” said Norbert Werner, a research associate at KIPAC involved in the data analysis. “Now we understand this mechanism better and see that there is just the right amount of stirring motion to produce enough heat.”

Plasma bubbles stir and heat intergalactic gas

About 15 percent of the mass of galaxy clusters is gas that is so hot – tens of millions of degrees Fahrenheit – that it shines in bright X-rays. In their study, the Hitomi researchers looked at the Perseus cluster, one of the most massive astronomical objects and the brightest in the X-ray sky.

Other space missions before Hitomi, including NASA’s Chandra X-ray Observatory, had taken precise X-ray images of the Perseus cluster. These snapshots revealed how giant bubbles of ultrahot, ionized gas, or plasma, rise from the central supermassive black hole as it catapults streams of particles tens of thousands of light-years into space. At the same time, streaks of cold gas appear to be pulled away from the center of the galaxy cluster, according to additional images of visible light. Until now, it has been unclear whether these two actions were connected.

To find out, the researchers pointed one of Hitomi’s instruments – the soft X-ray spectrometer (SXS) – at the center of the Perseus cluster and analyzed its X-ray emissions.

“Since the SXS had 30 times better energy resolution than the instruments of previous missions, we were able to resolve details of the X-ray signals that weren’t accessible before,” said co-principal investigator Steve Allen, a professor of physics at Stanford and of particle physics and astrophysics at SLAC. “These new details resulted in the very first velocity map of the cluster center, showing the speed and turbulence of the hot gas.”

By superimposing this map onto the other images, the researchers were able to link the observed motions of the cold gas to the hot plasma bubbles.

According to the data, the rising plasma bubbles drag cold gas away from the cluster center. Researchers see this in the form of stretched filaments in the optical images. The bubbles also transfer energy to the gas, which causes turbulence, Zhuravleva said.

“In a way, the bubbles are like spoons that stir milk into a cup of coffee and cause eddies,” she said. “The turbulence heats the gas, and it appears that this is enough to work against star formation in the cluster.”     

Hitomi’s legacy

Astrophysicists can use the new information to fine-tune models that describe how galaxy clusters change over time.

One important factor in these models is the mass of galaxy clusters, which researchers typically calculate from the gas pressure in the cluster. However, motions cause additional pressure, and before this study it was unclear if the calculations need to be corrected for turbulent gas.

“Although the motions heat the gas at the center of the Perseus cluster, their speed is only about 100 miles per second, which is surprisingly slow considering how disturbed the region looks in X-ray images,” said co-principal investigator Roger Blandford, the Luke Blossom Professor of Physics at Stanford and a professor for particle physics and astrophysics at SLAC. “One consequence is that corrections for these motions are only very small and don’t affect our mass calculations much.”

Although the loss of Hitomi cut most of the planned science program short – it was supposed to run for at least three years – the researchers hope their results will convince the international community to plan another X-ray space mission.

“The data Hitomi sent back to Earth are just beautiful,” Werner said. “They demonstrate what’s possible in the field and give us a taste of all the great science that should have come out of the mission over the years.”

Hitomi is a joint project, with the Japan Aerospace Exploration Agency (JAXA) and NASA as the principal partners. Led by Japan, it is a large-scale international collaboration, boasting the participation of eight countries, including the United States, the Netherlands and Canada, with additional partnership by the European Space Agency (ESA). Other KIPAC researchers involved in the project are Tuneyoshi Kamae, Ashley King, Hirokazu Odaka and co-principal investigator Grzegorz Madejski.

A version of this article originally appeared as a Stanford University press release.

by Manuel Gnida at July 06, 2016 05:18 PM

Lubos Motl - string vacua and pheno

Interview with Dr Spaghetti on the relocation of the diphoton excess
It's a pleasure to have you here, the CERN director Ms Raviola Spaghetti. You were heard saying that the diphoton excess at 750 GeV hasn't disappeared. It has just moved elsewhere.

RS: I urge everyone to avoid rumors. You will hear the actual diphoton results at ICHEP-2016 in Chicago in early August.

OK, but didn't you say that the diphoton has just jumped to another place?

RS: I forgot what I said about the excess.

Your CMS colleague Pasta-Pizza Tortelloni-Tortellini said that anyway, it's not disappeared. It has just relocated.

RS: If you read the following tweet carefully, you will see that Pasta was talking about his neighbor Gnocchi-Macaroni Lasagne, not the diphoton excess.




Oh, that's interesting. So why did Pasta use the word "it"? Shouldn't the neighbor be referred to as "he" or "she"?

Pasta's neighbor Gnocchi-Macaroni Lasagne who has relocated was a bambino who may be referred to as "it". So dolce allegrissimo, vivace, forte piano!




Fine, it makes sense. Pasta's neighbor Lasagne was a kid.

Right.



But what about the slide of Eilam Gross at the SUSY-16 conference?

What about it?

I mean, why does the graph contain several copies of a kangaroo?

Because the conference is taking place in Melbourne, Australia, and the smallest continent where everyone is walking upside down is full of kangaroos. In 2011, 34 million kangaroos lived in commercial harvest areas there which was a staggering 40% increase from 2010.

Thanks for the lesson on the animals. But even in Australia, is it normal to have kangaroos on the ATLAS probability charts?

It's apparently more normal than you seem to think.

What does the word "elsewhere" on the slide mean?

It means "into a different place".

Thank you very much. I wanted to know what is the role played by the word on that ATLAS member's slide?

You have to ask Eilam Gross.

Fine, I will. But can you at least tell us where the kangaroo is jumping?

Kangaroos like to jump everywhere. They are also keen on carrying their babies in their breast pockets.

And why does Eilam Gross talk about wild roses in the context of the diphoton probability chart?

It's a reference to a song by Nick Cave and Kylie Minogue who are Australian citizens, much like the kangaroos. Or like these Melbourne musicians who sing the well-known Czech song "I'm a musician and I'm coming from Australia". But it's pointless to tell you too much: If I show you the roses will you follow?

Can't we get a clearer answer, like an answer we would get from a true man?

You were just clearly discriminating against us, the women. If a male director, e.g. an old white Aryan one, makes 750 achievements, everyone is impressed and people write 400 papers even though the 750 achievements were fake. When a signorita director is going to make 1600 real achievements, no one gives a damn.



The Tricci-a-povyeri (Tricks and Superstitions) band: The Italian man doesn't know the miracle, and that's why his body is decaying. He doesn't know the miracle, dumplings-pork-cabbage and beer. But following his own request, he was hired as the director of the Fiat Corporation. Parmesan is spread over bread.

That's very interesting. So have you made those 1600 achievements?

I haven't claimed anything of the sort. It was just an example.

Thank you very much for the interview and your somewhat clear answers. The situation is more transparent than it was before but we will probably have to wait for ICHEP-2016 in the early August, anyway.

by Luboš Motl (noreply@blogger.com) at July 06, 2016 03:31 PM

ZapperZ - Physics and Physicists

Photoemission Spectroscopy - Fundamental Aspects
I don't know if this is a chapter out of a book, or if this is lecture material, or what, but it has rather comprehensive coverage of photoionization, Auger processes, and photoemission in solids. I also don't know how long the document will be available (web links come and go, it seems). So if this is something you're interested in, it might be something you want to download.

At the very least, it has an extensive collection of references, ranging from Hertz's discovery of the photoelectric effect, to Einstein's photoelectric effect paper of 1905, all the way to Spicer's 3-step model and recent progress in ARPES.

Zz.

by ZapperZ (noreply@blogger.com) at July 06, 2016 02:30 PM

Jester - Resonaances

CMS: Higgs to mu tau is going away
One interesting anomaly in the LHC run-1 was a hint of Higgs boson decays to a muon and a tau lepton. Such a process is forbidden in the Standard Model by the conservation of muon and tau lepton numbers. Neutrino masses violate individual lepton numbers, but their effect is far too small to affect the Higgs decays in practice. On the other hand, new particles do not have to respect global symmetries of the Standard Model, and they could induce lepton flavor violating Higgs decays at an observable level. Surprisingly, CMS found a small excess in the Higgs to tau mu search in their 8 TeV data, with the measured branching fraction Br(h→τμ)=(0.84±0.37)%. The analogous measurement in ATLAS is 1 sigma above the background-only hypothesis, Br(h→τμ)=(0.53±0.51)%. Together this merely corresponds to a 2.5 sigma excess, so it's not too exciting in itself. However, taken together with the B-meson anomalies in LHCb, it has raised hopes for lepton flavor violating new physics just around the corner. For this reason, the CMS excess inspired a few dozen theory papers, with Z' bosons, leptoquarks, and additional Higgs doublets pointed out as possible culprits.

Alas, the wind is changing. CMS made a search for h→τμ in their small stash of 13 TeV data collected in 2015. This time they were hit by a negative background fluctuation, and they found Br(h→τμ)=(-0.76±0.81)%. The accuracy of the new measurement is worse than that in run-1, but nevertheless it lowers the combined significance of the excess below 2 sigma. Statistically speaking, the situation hasn't changed much,  but psychologically this is very discouraging. A true signal is expected to grow when more data is added, and when it's the other way around it's usually a sign that we are dealing with a statistical fluctuation...
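
For orientation, a naive inverse-variance average of the three quoted numbers (a back-of-the-envelope Gaussian combination, not the experiments' official likelihood treatment) indeed puts the combined significance just below 2 sigma:

```python
# Naive Gaussian combination of the quoted Br(h -> tau mu) values, in %.
measurements = [(0.84, 0.37),   # CMS, 8 TeV
                (0.53, 0.51),   # ATLAS, 8 TeV
                (-0.76, 0.81)]  # CMS, 13 TeV (2015)

def combine(ms):
    weights = [1 / s**2 for _, s in ms]
    mean = sum(x * w for (x, _), w in zip(ms, weights)) / sum(weights)
    sigma = sum(weights) ** -0.5
    return mean, sigma

for label, ms in [("run-1 only", measurements[:2]), ("all three", measurements)]:
    mean, sigma = combine(ms)
    print(f"{label}: Br = ({mean:.2f} +- {sigma:.2f})%, {mean/sigma:.2f} sigma")
# run-1 only: Br = (0.73 +- 0.30)%, 2.45 sigma
# all three: Br = (0.55 +- 0.28)%, 1.97 sigma
```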

So, if you have a cool model explaining the h→τμ  excess be sure to post it on arXiv before more run-2 data is analyzed ;)

by Jester (noreply@blogger.com) at July 06, 2016 09:00 AM

July 05, 2016

Symmetrybreaking - Fermilab/SLAC

Incredible hulking facts about gamma rays

From lightning to the death of electrons, the highest-energy form of light is everywhere.

Gamma rays are the most energetic type of light, packing a punch strong enough to pierce through metal or concrete barriers. More energetic than X-rays, they are born in the chaos of exploding stars, the annihilation of electrons and the decay of radioactive atoms. And today, medical scientists have a fine enough control of them to use them for surgery. Here are seven amazing facts about these powerful photons.

Illustration by Sandbox Studio, Chicago with Lexi Fodor

Doctors conduct brain surgery using “gamma ray knives.”

Gamma rays can be helpful as well as harmful (and are very unlikely to turn you into the Hulk). To destroy brain cancers and other problems, medical scientists sometimes use a "gamma ray knife." This consists of many beams of gamma rays focused on the cells that need to be destroyed. Because each beam is relatively small, it does little damage to healthy brain tissue. But where they are focused, the amount of radiation is intense enough to kill the cancer cells. Since brains are delicate, the gamma ray knife is a relatively safe way to do certain kinds of surgery that would be a challenge with ordinary scalpels.

The name “gamma rays” came from Ernest Rutherford.

French chemist Paul Villard first identified gamma rays in 1900 from the element radium, which had been isolated by Marie and Pierre Curie just two years before. When scientists first studied how atomic nuclei changed form, they identified three types of radiation based on how far they penetrated into a barrier made of lead. Ernest Rutherford named the radiation for the first three letters of the Greek alphabet. Alpha rays bounce right off, beta rays went a little farther, and gamma rays went the farthest. Today we know alpha rays are the same thing as helium nuclei (two protons and two neutrons), beta rays are either electrons or positrons (their antimatter versions), and gamma rays are a kind of light.

Nuclear reactions are a major source of gamma rays.

When an unstable uranium nucleus splits in the process of nuclear fission, it releases a lot of gamma rays in the process. Fission is used in both nuclear reactors and nuclear warheads. To monitor nuclear tests in the 1960s, the United States launched gamma radiation detectors on satellites. They found far more explosions than they expected to see. Astronomers eventually realized these explosions were coming from deep space—not the Soviet Union—and named them gamma-ray bursts, or GRBs. Today we know GRBs come in two types: the explosions of extremely massive stars, which pump out gamma rays as they die, and collisions between highly dense relics of stars called neutron stars and something else, probably another neutron star or a black hole.

Gamma rays played a key role in the discovery of the Higgs boson.

Most of the particles in the Standard Model of particle physics are unstable; they decay into other particles almost as soon as they come into existence. The Higgs boson, for example, can decay into many different types of particles, including gamma rays. Even though theory predicts that a Higgs boson will decay into gamma rays just 0.2 percent of the time, this type of decay is relatively easy to identify and it was one of the types that scientists observed when they first discovered the Higgs boson.

To study gamma rays, astronomers build telescopes in space.

Gamma rays heading toward the Earth from space interact with enough atoms in the atmosphere that almost none of them reach the surface of the planet. That's good for our health, but not so great for those who want to study GRBs and other sources of gamma rays. To see gamma rays before they reach the atmosphere, astronomers have to build telescopes in space. This is challenging for a number of reasons. For example, you can't use a normal lens or mirror to focus gamma rays, because the rays punch right through them. Instead an observatory like the Fermi Gamma-ray Space Telescope detects the signal from gamma rays when they hit a detector and convert into pairs of electrons and positrons.

Some gamma rays come from thunderstorms.

In the 1990s, observatories in space detected bursts of gamma rays coming from Earth that eventually were traced to thunderclouds. When static electricity builds up inside clouds, the immediate result is lightning. That static electricity also acts like a giant particle accelerator, creating pairs of electrons and positrons, which then annihilate into gamma rays. These bursts happen high enough in the air that only airplanes are exposed—and they’re one reason for flights to steer well away from storms.

Gamma rays (indirectly) give life to Earth.

Hydrogen nuclei are always fusing together in the core of the sun. When this happens, one byproduct is gamma rays. The energy of the gamma rays keeps the sun’s core hot. Some of those gamma rays also escape into the sun's outer layers, where they collide with electrons and protons and lose energy. As they lose energy, they change into ultraviolet, infrared and visible light. The infrared light keeps Earth warm, and the visible light sustains Earth’s plants.

by Matthew R. Francis at July 05, 2016 04:13 PM

Lubos Motl - string vacua and pheno

Gerardus 't Hooft and string theory
Gerardus 't Hooft is celebrating his 70th birthday today. Congratulations!



(C) LM

He got his 1999 Nobel prize for the proof of the renormalizability of the electroweak theory, along with Martinus Veltman, his adviser, whose main contribution was to assign his powerful student a good and ambitious homework exercise. Because of his magical technical skills, 't Hooft used to be nicknamed The Ayatollah – just because most of his colleagues didn't realize that The Shah was actually much better than the Persian mullah-in-chief.

Even though 't Hooft has taught a string theory course at his university, I think people would agree that he's not a string theorist. However, a modern look at his contributions is a good example of the high degree of organization and clarification that string theory has introduced to theoretical physics.




First, the proof of the renormalizability of gauge theories doesn't have an immediate relationship with string theory because gauge theories "are not" string theory. Except that in some superselection sectors, they are. The AdS/CFT correspondence implies that string theory on anti de Sitter-based backgrounds may be reduced to the low-energy limit of the D-brane dynamics which is simply a gauge theory. Because string theory is nice and finite, the relevant limit should have the same property. And it has.




't Hooft was also the first man, ahead of Leonardus Susskind, to reveal the heuristic arguments in favor of the holographic principle, a vaguer but more general idea than the one demonstrated in the AdS/CFT correspondence.

The 't Hooft interaction coupling many fermions whose zero modes appear as solutions on top of an instanton has some interpretations or generalizations in terms of D-instantons and other objects in string theory.

He is also rumored to be the first guy who (in 1971) calculated the beta-function of QCD, including the \(-11/3\) factor, before Gross, Wilczek, and Politzer did. But he didn't publish it. The factor of \(11=22/2\) may be interpreted as the argument of \(SO(22)\), the isometry of the sphere in \(AdS^5 \times S^{21}\), a background for the \(D=26\) string theory that is closer to a solution than it would be in other dimensions. The appearance of the number \(11\) and the \(D=26\) critical dimension of bosonic string theory aren't independent facts.

However, 't Hooft's most cited paper – also and perhaps especially by string theorists – is A planar diagram theory for strong interactions, with over 4,000 citations. The 't Hooft limit is mentioned in thousands of string theory papers, of course.

If you consider Feynman diagrams for gauge theories with adjoint fields, they look like this:



The propagators are lines that are conveniently refined as "double lines" because these fields and particles carry two fundamental \(SU(N)\) indices, for example \(j,k\). With a big density of lines, the Feynman diagrams may approximate surfaces. Every time you cut an area by a non-twisted propagator (a double line) – imagine e.g. that you cut the brown area in the middle by an extra horizontal line – you add two cubic vertices i.e. a factor of \(g^2\) but you also increase the number of faces by one. So the parametric dependence gains an extra \(\lambda=g^2 N\), the 't Hooft coupling, but no additional factor of \(1/N\). If you added "handles" to the area, the terms would be suppressed by an additional factor of \(1/N\), much like if you twisted the horizontal line we just mentioned so that the two boundaries of the two successor states would be joined into one.

So in the large \(N\) limit with a large but fixed \(\lambda = g^2 N\), the gauge theory has to look like a world sheet of string theory. And the AdS/CFT correspondence in the "gauge-gravity duality" version makes this equivalence rigorous.
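
To make the counting explicit, here is the standard double-line power counting, with the conventional normalization in which the single-trace action carries an overall factor of \(1/g^2\) (a generic textbook statement, not anything taken from 't Hooft's paper beyond that). A connected vacuum diagram with \(V\) vertices, \(E\) propagators and \(F\) faces scales as

\[ g^{2(E-V)}\, N^F = \lambda^{E-V}\, N^{V-E+F} = \lambda^{E-V}\, N^{2-2h}, \]

where \(h\) is the genus of the closed orientable surface on which the double-line diagram can be drawn without crossings. Planar diagrams (\(h=0\)) dominate at large \(N\), and the \(1/N\) corrections are organized exactly like a string genus expansion. The bookkeeping matches the cutting argument above: adding a non-twisted propagator across a face gives \(\Delta V=2\), \(\Delta E=3\), \(\Delta F=1\), i.e. an extra factor of \(\lambda\) and no extra power of \(1/N\).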

I want to mention one point. You could object that the "exact theory" for a finite \(g,N\) has to have "either" the continuous world sheets as imagined by string theory; "or" the triangulated world sheets as obtained from the gauge theory Feynman diagrams. Which of them is right? Surely just one may be right, you might say.

The funny thing is that there is no contradiction because the stringy world sheet is itself described by a theory of quantum gravity (in 2 spacetime, i.e. world sheet, dimensions). So the "bulk" doesn't really exist. I believe that there's a way to rigorously prove – and I have partly written down the proof – that the perturbative expansion of the \(AdS_5\times S^5\) type IIB string theory coincides with the perturbative expansion of the \(\mathcal{N}=4\) Yang-Mills theory.



Off-topic, cernette: ATLAS' Eilam Gross' slide at SUSY 2016. The kangaroo may be a hint about what happened with the cernette. ;-) Song.

The gauge theory propagators are matched to the points of the stringy world sheet where it touches the boundary of a hyperbolic plane (Poincaré disk), a Euclideanized \(AdS_2\). So all the faces of the Feynman diagram drawn on the string world sheet are actually examples of quantum gravity in \(EAdS_2\). By holography, the dynamics of this gravitational theory may be reduced to the conformal theory in a dimension smaller by one, i.e. \(ECFT_1\), and that's the dynamics producing the propagators (and hopefully also the Feynman vertices where 3+ faces meet).

This removal of one world sheet dimension through the "holography on the world sheet" may be viewed as a general heuristic explanation why some sectors of string theory are equivalent to a quantum field theory.

by Luboš Motl (noreply@blogger.com) at July 05, 2016 12:11 PM

July 04, 2016

John Baez - Azimuth

Large Countable Ordinals (Part 2)

Last time I took you on a road trip to infinity. We zipped past a bunch of countable ordinals

\omega , \; \omega^\omega,\; \omega^{\omega^\omega}, \;\omega^{\omega^{\omega^\omega}}, \dots

and stopped for gas at the first one after all these. It’s called \epsilon_0. Heuristically, you can imagine it like this:

\epsilon_0 = \omega^{\omega^{\omega^{\omega^{\cdot^{\cdot^{\cdot}}}}}}

More rigorously, it’s the smallest ordinal x obeying the equation

x = \omega^x

Beyond ε₀

But I’m sure you have a question. What comes after \epsilon_0?

Well, duh! It’s

\epsilon_0 + 1

Then comes

\epsilon_0 + 2

and then eventually we get to

\epsilon_0 + \omega

and then

\epsilon_0 + \omega^2 ,\dots, \epsilon_0 + \omega^3,\dots \epsilon_0 + \omega^4,\dots

and after a long time

\epsilon_0 + \epsilon_0 = \epsilon_0 2

and then eventually

\epsilon_0^2

and then eventually….

Oh, I see! You wanted to know the first really interesting ordinal after \epsilon_0.

Well, this is a matter of taste, but you might be interested in \epsilon_1. This is the first ordinal after \epsilon_0 that satisfies this equation:

x = \omega^x

How do we actually reach this ordinal? Well, just as \epsilon_0 was the limit of this sequence:

\omega , \; \omega^\omega,\; \omega^{\omega^\omega}, \;\omega^{\omega^{\omega^\omega}}, \dots

\epsilon_1 is the limit of this:

\epsilon_0 + 1, \; \omega^{\epsilon_0 + 1}, \;  \omega^{\omega^{\epsilon_0 + 1}}, \; \omega^{\omega^{\omega^{\epsilon_0 + 1}}},\dots

You may wonder what I mean by the ‘limit’ of an increasing sequence of ordinals. I just mean the smallest ordinal greater than or equal to every ordinal in that sequence. Such a thing is guaranteed to exist, since if we treat ordinals as well-ordered sets, we can just take the union of all the sets in that sequence.
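
Just to fix the idea, here are two warm-up examples (nothing deep). The limit of

1, \; 2, \; 3, \dots

is \omega, and the limit of

\omega, \; \omega 2, \; \omega 3, \dots

is \omega^2. And of course the limit of the sequence we started with, \omega, \; \omega^\omega, \; \omega^{\omega^\omega}, \dots, is \epsilon_0 itself.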

Here’s a picture of \epsilon_1, taken from David Madore’s interactive webpage:

In what sense is \epsilon_1 the first "really interesting" ordinal after \epsilon_0?

For one thing, it’s the first that can’t be built out of 1, \omega and \epsilon_0 using finitely many additions, multiplications and exponentiations. In other words, if we use Cantor normal form to describe ordinals (as explained last time), and allow expressions involving \epsilon_0 as well as 1 and \omega, we get a notation for all ordinals up to \epsilon_1.

What’s the next really interesting ordinal after \epsilon_1? As you might expect, it’s called \epsilon_2. This is the next solution of

x = \omega^{x}

and it’s defined to be the limit of this sequence:

\epsilon_1 + 1, \; \omega^{\epsilon_1 + 1}, \;\omega^{\omega^{\epsilon_1 + 1}}, \; \omega^{\omega^{\omega^{\epsilon_1 + 1}}},\dots

Maybe now you get the pattern. In general, \epsilon_{\alpha} is the \alphath solution of x = \omega^{x}. We can define this, if we’re smart, for any ordinal \alpha.

So, we can keep driving on through fields of ever larger ordinals:

\epsilon_2,\dots, \epsilon_{3},\dots, \epsilon_{4}, \dots

and eventually

\epsilon_{\omega}

which is the first ordinal bigger than \epsilon_0, \epsilon_1, \epsilon_2, \dots

Let’s stop and take a look!

Nice! Okay, back in the car…

\epsilon_{\omega+1},\dots, \epsilon_{\omega+2},\dots, \epsilon_{\omega+3},\dots

and then

\epsilon_{\omega^2},\dots , \epsilon_{\omega^3},\dots, \epsilon_{\omega^4},\dots

and then

\epsilon_{\omega^{\omega}},\dots, \epsilon_{\omega^{\omega^{\omega}}},\dots

As you can see, this gets boring after a while: it’s suspiciously similar to the beginning of our trip through the ordinals. The same ordinals are now showing up as subscripts in this epsilon notation. But we’re moving much faster now, since I’m skipping over much bigger gaps, not bothering to mention all sorts of ordinals like

\epsilon_{\omega^{\omega}} + \epsilon_{\omega 248} + \omega^{\omega^{\omega + 17}} + 1

Anyway… while we’re zipping along, I might as well finish telling you the story I started last time. My friend David Sternlieb and I were driving across South Dakota on Route 80. We kept seeing signs for the South Dakota Tractor Museum. When we finally got there, we were driving pretty darn fast, out of boredom—about 85 miles an hour. And guess what happened then!

Oh — wait a minute—this one is sort of interesting:

\displaystyle{ \epsilon_{\epsilon_0} }

Then come some more like that:

\epsilon_{\epsilon_1},\dots, \epsilon_{\epsilon_2},\dots \epsilon_{\epsilon_3},\dots

until we reach this:

\epsilon_{\epsilon_{\omega}}

and then

\epsilon_{\epsilon_{\omega^{\omega}}},\dots, \epsilon_{\epsilon_{\omega^{\omega^{\omega}}}},\dots

As we keep speeding up, we see:

\epsilon_{\epsilon_{\epsilon_0}},\dots \epsilon_{\epsilon_{\epsilon_{\epsilon_0}}},\dots \epsilon_{\epsilon_{\epsilon_{\epsilon_{\epsilon_0}}}},\dots

So, anyway: by the time we got to that tractor museum, we were driving really fast. And, all we saw as we whizzed by was a bunch of rusty tractors out in a field! It was over in a split second! It was a real anticlimax — just like this anecdote, in fact.

But that’s just the way it is when you’re driving through these ordinals! Every ordinal, no matter how large, looks pretty pathetic and small compared to the ones ahead — so you keep speeding up, looking for something ‘really new and different’. But when you find one, it turns out to be part of a larger pattern, and soon that gets boring too.

For example, when we reach the limit of this sequence:

\epsilon_0, \epsilon_{\epsilon_0}, \epsilon_{\epsilon_{\epsilon_0}}, \epsilon_{\epsilon_{\epsilon_{\epsilon_0}}}, \epsilon_{\epsilon_{\epsilon_{\epsilon_{\epsilon_0}}}},\dots

our notation fizzles out again, since this is the first solution of

x = \epsilon_{x}

We could make up a new name for this ordinal, like \zeta_0. I don’t think this name is very common, though I’ve seen it. We could call it the Tractor Museum of Countable Ordinals.

Now we can play the whole game again, defining the zeta number \zeta_{\alpha} to be the \alphath solution of

x = \epsilon_x

sort of like how we defined the epsilons. This kind of equation, where something equals some function of itself, is called a fixed point equation.

But since we’ll have to play this game infinitely often, we might as well be more systematic about it!

The Veblen hierarchy

As you can see, we keep running into new, qualitatively different types of ordinals. First we ran into the powers of omega. Then we ran into the epsilons, and then the zetas. It’s gonna keep happening! For each type of ordinal, our notation fizzles out when we reach the first ‘fixed point’— when the xth ordinal of this type is actually equal to x.

So, instead of making up infinitely many Greek letters for different types of ordinals let’s index them… by ordinals! For each ordinal \gamma we’ll have a type of ordinal. We’ll let \phi_\gamma(\alpha) be the \alphath ordinal of type \gamma.

We can use the fixed point equation to define \phi_{\gamma+1} in terms of \phi_{\gamma}. In other words, we start off by defining

\phi_0(\alpha) = \omega^{\alpha}

and then define

\phi_{\gamma+1}(\alpha)

to be the \alphath solution of

x = \phi_{\gamma}(x)

where we start counting at \alpha = 0, so the first solution is called the ‘zeroth’.

We can even make sense of \phi_\gamma(\alpha) when \gamma itself is infinite! Suppose \gamma is a limit of smaller ordinals. Then we define \phi_\gamma(x) to be the limit of \phi_\beta(x) as \beta approaches \gamma. I’ll make this more precise next time.
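
Put compactly, the whole recursive definition fits on one line (this is just a restatement of the three cases above, with the limit case still to be made precise):

\phi_0(\alpha) = \omega^{\alpha}, \qquad \phi_{\gamma+1}(\alpha) = \text{the } \alphath \text{ solution of } x = \phi_{\gamma}(x), \qquad \phi_{\gamma}(\alpha) = \lim_{\beta \to \gamma} \phi_{\beta}(\alpha) \text{ for } \gamma \text{ a limit}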

We get infinitely many different types of ordinals, called the Veblen hierarchy. So, concretely, the Veblen hierarchy starts with the powers of \omega:

\phi_0(\alpha) = \omega^\alpha

and then it goes on to the ‘epsilons’:

\phi_1(\alpha) = \epsilon_\alpha

and then it goes on to what I called the ‘zetas’:

\phi_2(\alpha) = \zeta_\alpha

But that’s just the start!

The Feferman–Schütte ordinal

Boosting the subscript \gamma in \phi_\gamma(\alpha) increases the result much more than boosting \alpha, so let’s focus on that and just let \alpha = 0. The Veblen hierarchy contains ordinals like this:

\phi_{\omega}(0), \; \phi_{\omega+1}(0), \; \phi_{\omega+2}(0), \dots

and then ordinals like this:

\phi_{\omega^2}(0), \; \phi_{\omega^3}(0), \; \phi_{\omega^4}(0), \dots

and then ordinals like this:

\phi_{\omega^\omega}(0), \; \phi_{\omega^{\omega^\omega}}(0), \; \phi_{\omega^{\omega^{\omega^{\omega}}}}(0), \dots

and then this:

\phi_{\epsilon_0}(0), \phi_{\epsilon_{\epsilon_0}}(0), \phi_{\epsilon_{\epsilon_{\epsilon_0}}}(0),  \dots

where of course I’m skipping huge infinite stretches of ‘boring’ ones. But note that

\phi_{\omega}(0) = \phi_{\phi_0(1)}(0)

and

\phi_{\epsilon_0}(0) = \phi_{\phi_1(0)}(0)

and

\phi_{\zeta_0}(0) = \phi_{\phi_2(0)}(0)

In short, we can plug the phi function into itself—and we get the biggest effect if we plug it into the subscript!

So, if we’re in a rush to reach some really big countable ordinals, we can try these:

\phi_0(0), \; \phi_{\phi_0(0)}(0) , \; \phi_{\phi_{\phi_0(0)}(0)}(0), \dots

But the limit of these is an ordinal x that has

x = \phi_x(0)

This is called the Feferman–Schütte ordinal and denoted \Gamma_0.

In fact, the Feferman–Schütte ordinal is the smallest solution of

x = \phi_x(0)

Since this equation is self-referential, we can’t describe the Feferman–Schütte ordinal using the Veblen hierarchy—at least, not without using the Feferman–Schütte ordinal!

Indeed, some mathematicians have made a big deal about this ordinal, claiming it’s

the smallest ordinal that cannot be described without self-reference.

This takes some explaining, and it’s somewhat controversial. After all, there’s a sense in which every fixed point equation is self-referential. But there’s a certain precise sense in which the Feferman–Schütte ordinal is different from previous ones.

Anyway, you have to admit that this is a very cute description of the Feferman–Schütte ordinal: “the smallest ordinal that cannot be described without self-reference.” Does it use self-reference? It had better—otherwise we have a contradiction!

It’s a little scary, like this picture:

More importantly for us, the Veblen hierarchy fizzles out when we hit the Feferman–Schütte ordinal. Let me say what I mean by that.

Veblen normal form

The Veblen hierarchy gives a notation for ordinals called the Veblen normal form. You can think of this as a high-powered version of Cantor normal form, which we discussed last time.

Veblen normal form relies on this result:

Theorem. Any ordinal \beta can be written uniquely as

\beta = \phi_{\gamma_1}(\alpha_1) + \dots + \phi_{\gamma_{k}}(\alpha_k)

where k is a natural number, each term is less than or equal to the previous one, and \alpha_i < \phi_{\gamma_i}(\alpha_i) for all i.

Note that we can also use this theorem to write out the ordinals \alpha_i and \gamma_i, and so on, recursively. So, it gives us a notation for ordinals.

However, this notation is only useful when all the ordinals \alpha_i, \gamma_i are less than the ordinal \beta that we’re trying to describe. Otherwise we would need to already have a notation for \beta just to write down its Veblen normal form!

So, the power of this notation eventually fizzles out. And the place where it does is the Feferman–Schütte ordinal. Every ordinal less than this can be expressed in terms of 0, addition, and the function \phi, using just finitely many symbols!
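
Here is a small worked example using the names introduced above. The ordinal \zeta_0 + \epsilon_0 2 + \omega^{\omega} has Veblen normal form

\phi_2(0) + \phi_1(0) + \phi_1(0) + \phi_0(\phi_0(1))

since \omega^{\omega} = \phi_0(\omega) and \omega = \phi_0(1): the terms are non-increasing, and each \alpha_i is less than \phi_{\gamma_i}(\alpha_i), as the theorem requires. If you insist on using only 0, addition and \phi, you can go further and write 1 as \phi_0(0) and 2 as \phi_0(0) + \phi_0(0).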

The moral

As I hope you see, the power of the human mind to see a pattern and formalize it gives the quest for large countable ordinals a strange quality. As soon as we see a systematic way to generate a sequence of larger and larger ordinals, we know this sequence has a limit that’s larger than all of those! And this opens the door to even larger ones….

So, this whole journey feels a bit like trying to outrace our car’s own shadow as we drive away from the sunset: the faster we drive, the faster it shoots ahead of us. We’ll never win.

On the other hand, we’ll only lose if we get tired.

So it’s interesting to hear what happens next. We don’t have to give up. The usual symbol for the Feferman–Schütte ordinal should be a clue. It’s called \Gamma_0. And that’s because it’s just the start of a new series of even bigger countable ordinals!

I’m dying to tell you about those. But this is enough for today.


by John Baez at July 04, 2016 01:00 AM

June 29, 2016

Clifford V. Johnson - Asymptotia

Gauge Theories are Cool

That is all.

screen_shot_progress_gauge

('fraid you'll have to wait for the finished book to learn why those shapes are relevant to the title...)

-cvj Click to continue reading this post

The post Gauge Theories are Cool appeared first on Asymptotia.

by Clifford at June 29, 2016 10:51 PM

Symmetrybreaking - Fermilab/SLAC

LHCb discovers family of tetraquarks

Researchers found four new particles made of the same four building blocks.

It’s quadruplets! Syracuse University researchers on the LHCb experiment confirmed the existence of a new four-quark particle and serendipitously discovered three of its siblings.

Quarks are the solid scaffolding inside composite particles like protons and neutrons. Normally quarks come in groups of two or three, but in 2014 LHCb researchers confirmed the existence of four-quark particles and, one year later, five-quark particles.

The particles in this new family were named based on their respective masses, denoted in mega-electronvolts: X(4140), X(4274), X(4500) and X(4700). Each particle contains two charm quarks and two strange quarks arranged in a unique way, making them the first four-quark particles composed entirely of heavy quarks. Researchers also measured each particle’s quantum numbers, which describe their subatomic properties. Theorists will use these new measurements to enhance their understanding of the formation of particles and the fundamental structures of matter.

“What we have discovered is a unique system,” says Tomasz Skwarnicki, a physics professor at Syracuse University. “We have four exotic particles of the same type; it’s the first time we have seen this and this discovery is already helping us distinguish between the theoretical models.”

Evidence of the lightest particle in this family of four and a hint of another were first seen by the CDF experiment at the US Department of Energy’s Fermi National Accelerator Lab in 2009. However, other experiments were unable to confirm this observation until 2012, when the CMS experiment at CERN reported seeing the same particle-like bumps with a much greater statistical certainty. Later, the D0 collaboration at Fermilab also reported another observation of this particle.

“It was a long road to get here,” says University of Iowa physicist Kai Yi, who works on both the CDF and CMS experiments. “This has been a collective effort by many complementary experiments. I’m very happy that LHCb has now reconfirmed this particle’s existence and measured its quantum numbers.”

The US contribution to the LHCb experiment is funded by the National Science Foundation.

LHCb researcher Thomas Britton performed this analysis as his PhD thesis at Syracuse University.

“When I first saw the structures jumping out of the data, little did I know this analysis would be such an aporetic saga,” Britton says. “We looked at every known particle and process to make sure these four structures couldn’t be explained by any pre-existing physics. It was like baking a six-dimensional cake with 98 ingredients and no recipe—just a picture of a cake.”

Even though the four new particles all contain the same quark composition, they each have a unique internal structure, mass and their own sets of quantum numbers. These characteristics are determined by the internal spatial configurations of the quarks.

“The quarks inside these particles behave like electrons inside atoms,” Skwarnicki says. “They can be ‘excited’ and jump into higher energy orbitals. The energy configuration of the quarks gives each particle its unique mass and identity.”

According to theoretical predictions, the quarks inside could be tightly bound (like three quarks packed inside a single proton) or loosely bound (like two atoms forming a molecule.) By closely examining each particle’s quantum numbers, scientists were able to narrow down the possible structures.

“The molecular explanation does not fit with the data,” Skwarnicki says. “But I personally would not conclude that these are definitely tightly bound states of four quarks. It could be possible that these are not even particles. The result could show the complex interplays of known particle pairs flippantly changing their identities.”

Theorists are currently working on models to explain these new results—be it a family of four new particles or bizarre ripple effects from known particles. Either way, this study will help shape our understanding of the subatomic universe.

“The huge amount of data generated by the LHC is enabling a resurgence in searches for exotic particles and rare physical phenomena,” Britton says. “There’s so many possible things for us to find and I’m happy to be a part of it.”

by Sarah Charley at June 29, 2016 06:05 PM

June 28, 2016

Symmetrybreaking - Fermilab/SLAC

Preparing for their magnetic moment

Scientists are using a plastic robot and hair-thin pieces of metal to ready a magnet that will hunt for new physics.

Three summers ago, a team of scientists and engineers on the Muon g-2 experiment moved a 52-foot-wide circular magnet 3200 miles over land and sea. It traveled in one piece without twisting more than a couple of millimeters, lest the fragile coils inside irreparably break. It was an astonishing feat that took years to plan and immense skill to execute.

As it turns out, that was the easy part.

The hard part—creating a magnetic field so precise that even subatomic particles see it as perfectly smooth—has been under way for seven months. It’s a labor-intensive process that has inspired scientists to create clever, often low-tech solutions to unique problems, working from a road map written 30 years ago as they drive forward into the unknown.

The goal of Muon g-2 is to follow up on a similar experiment conducted at the US Department of Energy’s Brookhaven National Laboratory in New York in the 1990s. Scientists there built an extraordinary machine that generated a near-perfect magnetic field into which they fired a beam of particles called muons. The magnetic ring serves as a racetrack for the muons, and they zoom around it for as long as they exist—usually about 64 millionths of a second.

That’s a blink of an eye, but it’s enough time to measure a particular property: the precession frequency of the muons as they hustle around the magnetic field. And when Brookhaven scientists took those measurements, they found something different than the Standard Model, our picture of the universe, predicted they would. They didn’t quite capture enough data to claim a definitive discovery, but the hints were tantalizing.

Now, 20 years later, some of those same scientists—and dozens of others, from 34 institutions around the world—are conducting a similar experiment with the same magnet, but fed by a more powerful beam of muons at the US Department of Energy’s Fermi National Accelerator Laboratory in Illinois. Moving that magnet from New York caused quite a stir among the science-interested public, but that’s nothing compared with what a discovery from the Muon g-2 experiment would cause.

“We’re trying to determine if the muon really is behaving differently than expected,” says Dave Hertzog of the University of Washington, one of the spokespeople of the Muon g-2 experiment. “And, if so, that would suggest either new particles popping in and out of the vacuum, or new subatomic forces at work.  More likely, it might just be something no one has thought of yet.  In any case, it’s all  very exciting.”

Shimming to reduce shimmy

To start making these measurements, the magnetic field needs to be the same all the way around the ring so that, wherever the muons are in the circle, they will see the same pathway. That’s where Brendan Kiburg of Fermilab and a group of a dozen scientists, post-docs and students come in. For the past six months, they have been “shimming” the magnetic ring, shaping it to an almost inconceivably exact level.

“The primary goal of shimming is to make the magnetic field as uniform as possible,” Kiburg says. “The muons act like spinning tops, precessing at a rate proportional to the magnetic field. If a section of the field is a little higher or a little lower, the muon sees that, and will go faster or slower.”

Since the idea is to measure the precession rate to an extremely precise degree, the team needs to shape the magnetic field to a similar degree of uniformity. They want it to vary by no more than ten parts in a billion per centimeter. To put that in perspective, that’s like wanting a variation of no more than one second in nearly 32 years, or one sheet in a roll of toilet paper stretching from New York to London.

How do they do this? First, they need to measure the field they have. With a powerful electromagnet that will affect any metal object inside it, that’s pretty tricky. The solution is a marriage of high-tech and low-tech: a cart made of sturdy plastic and quartz, moved by a pulley attached to a motor and continuously tracked by a laser. On this cart are probes filled with petroleum jelly, with sensors measuring the rate at which the jelly’s protons spin in the magnetic field.

The laser can record the position of the cart to 25 microns, half the width of a human hair. Other sensors measure how far apart the top and bottom of the cart are to the magnet, to the micron.

“The cart moves through the field as it is pulled around the ring,” Kiburg says. “It takes between two and two-and-a-half hours to go around the ring. There are more than 1500 locations around the path, and it stops every three centimeters for a short moment while the field is precisely measured in each location. We then stitch those measurements into a full map of the magnetic field.”
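
Both of the numbers quoted above pass a quick back-of-envelope check. The sketch below is only a rough sanity check in Python; in particular, the 52-foot figure from the start of the article is the magnet's overall width, so treating it as the trolley's path is only an approximation.

import math

# 1) "One second in nearly 32 years" as a stand-in for parts-per-billion uniformity.
seconds_in_32_years = 32 * 365.25 * 24 * 3600
print(f"1 second in 32 years = {1 / seconds_in_32_years:.1e}")   # ~1e-9, about one part per billion

# 2) Stops every 3 cm around a ring roughly 52 feet across (assuming the trolley
#    path has roughly the magnet's quoted overall width).
circumference_m = math.pi * 52 * 0.3048    # 52 feet converted to metres
print(f"circumference ~ {circumference_m:.0f} m, ~{circumference_m / 0.03:.0f} stops every 3 cm")
# About 50 m and roughly 1660 stops, consistent with "more than 1500 locations".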

Erik Swanson of the University of Washington is the run coordinator for this effort, meaning he directs the team as they measure the field and perform the manually intensive shimming. He also designed the new magnetic resonance probes that measure the field, upgrading them from the technology used at Brookhaven.

“They’re functionally the same,” he says of the probes, “but the Brookhaven experiment started in the 1990s, and the old probes were designed before that. Any electronics that old, there’s the potential that they will stop working.”

Swanson says that the accuracy to which the team has had to position the magnet’s iron pieces to achieve the desired magnetic field surprised even him. When scientists first turned the magnet on in October, the field, measured at different places around the ring, varied by as much as 1400 parts per million. That may seem smooth, but to a tiny muon it looks like a mountain range of obstacles. In order to even it out, the Muon g-2 team makes hundreds of minuscule adjustments by hand.

Physical physics

Stationed around the ring are about 1000 knobs that control the ways the field could become non-uniform. But when that isn’t enough, the field can be shaped by taking parts of the magnet apart and inserting extremely small pieces of steel called shims, changing the field by thousandths of an inch.

There are 12 sections of the magnet, and it takes an entire day to adjust just one of those sections.

This process relies on simulations, calibrations and iterations, and with each cycle the team inches forward toward their goal, guided by mathematical predictions. Once they’re done with the process of carefully inserting these shims, some as thin as 12.5 microns, they reassemble the magnet and measure the field again, starting the process over, refining and learning as they go.

“It’s fascinating to me how hard such a simple-seeming problem can be,” says Matthias Smith of the University of Washington, one of the students who helped design the plastic measuring robot. “We’re making very minute adjustments because this is a puzzle that can go together in multiple ways. It’s very complex.”

His colleague Rachel Osofsky, also of the University of Washington, agrees. Osofsky has helped put in more than 800 shims around the magnet, and says she enjoys the hands-on and collaborative nature of the work.

“When I first came aboard, I knew I’d be spending time working on the magnet, but I didn’t know what that meant,” she says. “You get your hands dirty, really dirty, and then measure the field to see what you did. Students later will read the reports we’re writing now and refer to them. It’s exciting.”

Similarly, the Muon g-2 team is constantly consulting the work of their predecessors who conducted the Brookhaven experiment, making improvements where they can. (One upgrade that may not be obvious is the very building that the experiment is housed in, which keeps the temperature steadier than the one used at Brookhaven and reduces shape changes in the magnet itself.)

Kiburg says the Muon g-2 team should be comfortable with the shape of the magnetic field sometime this summer. With the experiment’s beamline under construction and the detectors to be installed, the collaboration should be ready to start measuring particles by next summer. Swanson says that while the effort has been intense, it has also been inspiring.

“It’s a big challenge to figure out how to do all this right,” he says. “But if you know scientists, when a challenge seems almost impossible, that’s the one we all go for.”

by Andre Salles at June 28, 2016 03:24 PM

June 27, 2016

ZapperZ - Physics and Physicists

Landau's Nobel Prize in Physics
This is a fascinating article. It describes, using the Nobel prize archives, the process that led to Lev Landau's nomination and winning the Nobel Prize in physics. But more than that, it also describes the behind-the-scenes nominating process for the Nobel Prize.

I'm not sure if the process has changed significantly since then, but I would imagine that many of the mechanisms leading up to a nomination and the winning of the prize are similar.

Zz.

by ZapperZ (noreply@blogger.com) at June 27, 2016 05:39 PM

June 24, 2016

Andrew Jaffe - Leaves on the Line

The Sick Rose

Songs of innocence and of experience page 39 The Sick Rose Fitzwilliam copy

O Rose thou art sick.
The invisible worm,
That flies in the night
In the howling storm:

Has found out thy bed
Of crimson joy:
And his dark secret love
Does thy life destroy.

—William Blake, Songs of Experience

by Andrew at June 24, 2016 10:42 AM

Clifford V. Johnson - Asymptotia

Historic Hysteria

So, *that* happened... (Click for larger view.)

referendum_result

-cvj Click to continue reading this post

The post Historic Hysteria appeared first on Asymptotia.

by Clifford at June 24, 2016 05:35 AM

Clifford V. Johnson - Asymptotia

Concern…

Anyone else finding this terrifying? A snapshot (click for larger view) from the Guardian's live results tracker* as of 19:45 PST - see here.

referendum_so_far_1
-cvj

*BTW, I've been using their trackers a lot during the presidential primaries, they're very good. Click to continue reading this post

The post Concern… appeared first on Asymptotia.

by Clifford at June 24, 2016 02:54 AM

June 23, 2016

Symmetrybreaking - Fermilab/SLAC

The Higgs-shaped elephant in the room

Higgs bosons should mass-produce bottom quarks. So why is it so hard to see it happening?

Higgs bosons are born in a blob of pure concentrated energy and live only one-septillionth of a second before decaying into a cascade of other particles. In 2012, these subatomic offspring were the key to the discovery of the Higgs boson.

So-called daughter particles stick around long enough to show up in the CMS and ATLAS detectors at the Large Hadron Collider. Scientists can follow their tracks and trace the family trees back to the Higgs boson they came from.

But the particles that led to the Higgs discovery were actually some of the boson’s less common progeny. After recording several million collisions, scientists identified a handful of Z bosons and photons with a Higgs-like origin. The Standard Model of particle physics predicts that Higgs bosons produce those particles 2.5 and 0.2 percent of the time. Physicists later identified Higgs bosons decaying into W bosons, which happens about 21 percent of the time.

According to the Standard Model, the most common decay of the Higgs boson should be a transformation into a pair of bottom quarks. This should happen about 60 percent of the time.
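
To see just how lopsided these branching fractions are, here is a toy tally using only the rounded percentages quoted in this article (illustrative numbers, not official values):

# Decays per one million Higgs bosons, using the rounded branching fractions
# quoted in this article (illustrative only).
branching_fractions = {
    "b bbar":       0.60,    # the still-unobserved channel discussed here
    "W W":          0.21,
    "Z Z":          0.025,
    "gamma gamma":  0.002,
}
n_higgs = 1_000_000
for channel, br in branching_fractions.items():
    print(f"h -> {channel:12s}: ~{int(br * n_higgs):>8,} per million Higgs bosons")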

The strange thing is, scientists have yet to discover it happening (though they have seen evidence).

According to Harvard researcher John Huth, a member of the ATLAS experiment, seeing the Higgs turning into bottom quarks is priority No. 1 for Higgs boson research.

“It would behoove us to find the Higgs decaying to bottom quarks because this is the largest interaction,” Huth says, “and it darn well better be there.”

If the Higgs to bottom quarks decay were not there, scientists would be left completely dumbfounded.

“I would be shocked if this particle does not couple to bottom quarks,” says Jim Olsen, a Princeton researcher and Physics Coordinator for the CMS experiment. “The absence of this decay would have a very large and direct impact on the relative decay rates of the Higgs boson to all of the other known particles, and the recent ATLAS and CMS combined measurements are in excellent agreement with expectations.”

To be fair, the decay of a Higgs to two bottom quarks is difficult to spot.

When a dying Higgs boson produces twin Z or W bosons, they each decay into a pair of muons or electrons. These particles leave crystal clear signals in the detectors, making it easy for scientists to spot them and track their lineage. And because photons are essentially immortal beams of light, scientists can immediately spot them and record their trajectory and energy with electromagnetic detectors.

But when a Higgs births a pair of bottom quarks, they impulsively marry other quarks, generating huge unstable families which burgeon, break and reform. This chaotic cascade leaves a messy ancestry.

Scientists are developing special tools to disentangle the Higgs from this multi-generational subatomic soap opera. Unfortunately, there are no cheek swabs or Maury Povich to announce, “Higgs, you are the father!” Instead, scientists are working on algorithms that look for patterns in the energy these jets of particles deposit in the detectors.

“The decay of Higgs bosons to bottom quarks should have different kinematics from the more common processes and leave unique signatures in our detector,” Huth says. “But we need to deeply understand all the variables involved if we want to squeeze the small number of Higgs events from everything else.”

Physicist Usha Mallik and her ATLAS team of researchers at the University of Iowa have been mapping the complex bottom quark genealogies since shortly after the Higgs discovery in 2012.

“Bottom quarks produce jets of particles with all kinds and colors and flavors,” Mallik says. “There are fat jets, narrow jets, distinct jets and overlapping jets. Just to find the original bottom quarks, we need to look at all of the jet’s characteristics. This is a complex problem with a lot of people working on it.”

This year the LHC will produce five times more data than it did last year and will generate Higgs bosons 25 percent faster. Scientists expect that by August they will be able to identify this prominent decay of the Higgs and find out what it can tell them about the properties of this unique particle.

by Sarah Charley at June 23, 2016 01:00 PM

Clifford V. Johnson - Asymptotia

QED and so forth…

(Spoiler!! :) )

Talking about gauge invariance took a couple more pages than I planned...

screen_shot_progress_QED

-cvj Click to continue reading this post

The post QED and so forth… appeared first on Asymptotia.

by Clifford at June 23, 2016 12:22 AM

June 22, 2016

Jester - Resonaances

Game of Thrones: 750 GeV edition
The 750 GeV diphoton resonance has made a big impact on theoretical particle physics. The number of papers on the topic is already legendary, and they keep coming at the rate of order 10 per week. Given that the Backović model is falsified, there's no longer a theoretical upper limit.  Does this mean we are not dealing with the classical ambulance chasing scenario? The answer may be known in the next days.

So who's winning this race? What kind of question is that, you may shout, of course it's Strumia! And you would be wrong, independently of the metric. The contest is much fiercer than one might expect: it takes 8 papers on the topic to win, and 7 papers to even get on the podium. Among the 3 authors with 7 papers the final classification is decided by trial by combat (strike that: by the citation count). The result is (drums):

Citations, well... The social dynamics of our community encourage referencing all previous work on the topic, rather than just the relevant papers, which in this particular case triggered a period of inflation. One day soon citation numbers will mean as much as authorship in experimental particle physics. But for now the size of the h-factor is still an important measure of virility for theorists. If the citation count rather than the number of papers is the main criterion, the iron throne is taken by a Targaryen contender (trumpets):

This explains why the resonance is usually denoted by the letter S.

Congratulations to all the winners. For all the rest, I wish you more luck and persistence in the next edition, provided it takes place.

My scripts are not perfect (in previous versions I missed crucial contenders, as pointed out in the comments), so let me know in case I missed your papers or miscalculated citations. 

by Jester (noreply@blogger.com) at June 22, 2016 05:00 PM

Jester - Resonaances

Off we go
The LHC is back in action since last weekend, again colliding protons with 13 TeV energy. The weasels' conspiracy was foiled, and the perpetrators were exemplarily electrocuted. PhD students have been deployed around the LHC perimeter to counter any further sabotage attempts (stoats are known to have been in league with weasels in the past). The period that begins now may prove to be the most exciting time for particle physics in this century.  Or the most disappointing.

The beam intensity is still a factor of 10 below the nominal one, so the harvest of last weekend is a meager 40 inverse picobarns. But the number of proton bunches in the beam is quickly increasing, and once it reaches O(2000), the data will stream at a rate of a femtobarn per week or more. For the near future, the plan is to have a few inverse femtobarns on tape by mid-July, which would roughly double the current 13 TeV dataset. The first analyses of this chunk of data should be presented around the time of the ICHEP conference in early August. At that point we will know whether the 750 GeV particle is real. Celebrations will begin if the significance of the diphoton peak increases after adding the new data, even if the statistics is not enough to officially announce a discovery. In the best of all worlds, we may also get a hint of a matching 750 GeV peak in another decay channel (ZZ, Z-photon, dilepton, t-tbar,...) which would help focus our model building. On the other hand, if the significance of the diphoton peak drops in August, there will be a massive hangover...
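
The "femtobarn per week" figure is easy to reproduce on the back of an envelope. In the sketch below, the 1e34 cm^-2 s^-1 peak luminosity and the 30% duty factor are my own round-number assumptions, not figures from the post:

# Back-of-envelope integrated luminosity per week near nominal LHC conditions.
# Assumed inputs (round numbers, not from the post): peak luminosity of
# 1e34 cm^-2 s^-1 once the machine is filled with O(2000) bunches, and stable
# beams for about 30% of calendar time.
peak_luminosity = 1e34                      # cm^-2 s^-1
duty_factor = 0.3
seconds_per_week = 7 * 24 * 3600

integrated = peak_luminosity * duty_factor * seconds_per_week    # in cm^-2
INVERSE_FB = 1e39                           # 1 fb^-1 corresponds to 1e39 cm^-2
print(f"~{integrated / INVERSE_FB:.1f} fb^-1 per week")          # roughly 1.8 fb^-1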

By the end of October, when the 2016 proton collisions are scheduled to end, the LHC hopes to collect some 20 inverse femtobarns of data. This should already give us a rough feeling of new physics within the reach of the LHC. If a hint of another resonance is seen at that point, one will surely be able to confirm or refute it with the data collected in the following years.  If nothing is seen... then you should start telling yourself that condensed matter physics is also sort of fundamental,  or that systematic uncertainties in astrophysics are not so bad after all...  In any scenario, by December, when first analyses of the full  2016 dataset will be released,  we will know infinitely more than we do today.

So fasten your seat belts and get ready for a (hopefully) bumpy ride. Serious rumors should start showing up on blogs and twitter starting from July.

by Jester (noreply@blogger.com) at June 22, 2016 03:27 PM

Jester - Resonaances

Weekend Plot: The king is dead (long live the king)
The new diphoton king has been discussed at length in the blogosphere, but the late diboson king also deserves a word or two. Recall that last summer ATLAS announced a 3 sigma excess in the dijet invariant mass distribution where each jet resembles a fast moving W or Z boson decaying to a pair of quarks. This excess can be interpreted as a 2 TeV resonance decaying to a pair of W or Z bosons. For example, it could be a heavy cousin of the W boson, W' in short, decaying to a W and a Z boson. Merely a month ago this paper argued that the excess remains statistically significant after combining several different CMS and ATLAS diboson resonance run-1 analyses in hadronic and leptonic channels of W and Z decay. However, the hammer came down seconds before the diphoton excess was announced: diboson resonance searches based on the LHC 13 TeV collision data do not show anything interesting around 2 TeV. This is a serious problem for any new physics interpretation of the excess since, for this mass scale, the statistical power of the run-2 and run-1 data is comparable. The tension is summarized in this plot:
The green bars show the 1 and 2 sigma best fit cross section to the diboson excess. The one on the left takes into account only the hadronic channel in ATLAS, where the excess is most significant; the one on the right is based on the combined run-1 data. The red lines are the limits from run-2 searches in ATLAS and CMS, scaled to 8 TeV cross sections assuming W' is produced in quark-antiquark collisions. Clearly, the best fit region for the 8 TeV data is excluded by the new 13 TeV data. I display results for the W' hypothesis; however, conclusions are similar (or more pessimistic) for other hypotheses leading to WW and/or ZZ final states. All in all, the ATLAS diboson excess is not formally buried yet, but at this point any reversal of fortune would be a miracle.

by Jester (noreply@blogger.com) at June 22, 2016 03:27 PM

Jon Butterworth - Life and Physics

Don’t let’s quit

This doesn’t belong on the Guardian Science pages, because even though universities and science will suffer if Britain leaves the EU, that’s not my main reason for voting ‘remain’. But lots of friends have been writing or talking about their choice, and the difficulties of making it, and I feel the need to write my own reasons down even if everyone is saturated by now. It’s nearly over, after all.

Even though the EU is obviously imperfect, a pragmatic compromise, I will vote to stay in with hope and enthusiasm. In fact, I’ll do so partly because it’s an imperfect, pragmatic compromise.

I realise there are a number of possible reasons for voting to leave the EU, some better than others, but please don’t.

Democracy

Maybe you’re bothered because EU democracy isn’t perfect. Also we can get outvoted on some things (these are two different points. Being outvoted sometimes is actually democratic. Some limitations on EU democracy are there to stop countries being outvoted by other countries too often). But it sort of works and it can be improved, especially if we took EU elections more seriously after all this. And we’re still ‘sovereign’, simply because we can vote to leave if we get outvoted on something important enough.

Misplaced nostalgia and worse

Maybe you don’t like foreigners, or you want to ‘Take Britain back’  (presumably to some fantasy dreamworld circa 1958). Unlucky; the world has moved on and will continue to do so whatever the result this week. I don’t have a lot of sympathy, frankly, and I don’t think this applies to (m)any of my ‘leave’ friends.

Lies

Maybe you believed the lies about the £350m we don’t send, which wouldn’t save the NHS anyway even if we did, or the idea that new countries are lining up to join and we couldn’t stop them if we wanted. If so please look at e.g. https://fullfact.org/europe/ for help. Some people I love and respect have believed some of these lies, and that has made me cross. These aren’t matters of opinion, and the fact that the ‘leave’ campaign repeats them over and over shows both their contempt for the intelligence of voters and the weakness of their case. If you still want to leave, knowing the facts, then fair enough. But don’t do it on a lie.

We need change

Maybe you have a strong desire for change, because bits of British life are rubbish and unfair. In this case, the chances are your desire for change is directed at entirely the wrong target. The EU is not that powerful in terms of its direct effects on everyday life. The main thing it does is provide a mechanism for resolving common issues between EU member states. It is  a vast improvement on the violent means used in previous centuries. It spreads rights and standards to the citizens and industries of members states, making trade and travel safer and easier. And it amplifies our collective voice in global politics.

People who blame the EU for the injustices of British life are being made fools of by unscrupulous politicians, media moguls and others who have for years been content to see the destruction of British industry, the undermining of workers’ rights, the underfunding of the NHS and education, relentless attacks on national institutions such as the BBC, neglect of whole regions of the country and more.

These are the people now telling us to cut off our nose to spite our face, and they are exploiting the discontent they have fostered to persuade us this would be a good idea, by blaming the EU for choices made by UK governments.

They are quite happy for industry to move to lower-wage economies in the developing world when it suits them, but they don't want us agreeing on common standards, protections and practices with our EU partners. They don't like nation states clubbing together, because that can make trouble for multinationals, and (in principle at least) threatens their ability to cash in on exploitative labour practices and tax havens. They would much rather play nation off against nation.

If…

If we vote to leave, the next few years will be dominated by attempts to negotiate something from the wreckage, against the background of a shrinking economy and a dysfunctional political class.  This will do nothing to fix inequality and the social problems we face (and I find it utterly implausible that people like Bojo, IDS or Farage would even want that). Those issues will be neglected or worse. Possibly this distraction, which is already present, is one reason some in the Conservative Party have involved us all in their internal power struggles.

If we vote remain, I hope the desire for change is preserved beyond Thursday, and is focussed not on irresponsible ‘blame the foreigner’ games, but on real politics, of hope and pragmatism, where it can make a positive difference.

I know there’s no physics here. This is the ‘life’ bit, and apart from the facts, it’s just my opinion. Before writing it I said this on twitter:

and it's probably still true that it's better than the above. Certainly it's shorter. But I had to try my own words.

I’m not going to enable comments here since they can be added on twitter and facebook if you feel the urge, and I can’t keep up with too many threads.


Filed under: Politics

by Jon Butterworth at June 22, 2016 06:19 AM

June 21, 2016

Symmetrybreaking - Fermilab/SLAC

All four one and one for all

A theory of everything would unite the four forces of nature, but is such a thing possible?

Over the centuries, physicists have made giant strides in understanding and predicting the physical world by connecting phenomena that look very different on the surface. 

One of the great success stories in physics is the unification of electricity and magnetism into the electromagnetic force in the 19th century. Experiments showed that electrical currents could deflect magnetic compass needles and that moving magnets could produce currents.

Then physicists linked another force, the weak force, with that electromagnetic force, forming a theory of electroweak interactions. Some physicists think the logical next step is merging all four fundamental forces—gravity, electromagnetism, the weak force and the strong force—into a single mathematical framework: a theory of everything.

Those four fundamental forces of nature are radically different in strength and behavior. And while reality has cooperated with the human habit of finding patterns so far, creating a theory of everything is perhaps the most difficult endeavor in physics.

“On some level we don't necessarily have to expect that [a theory of everything] exists,” says Cynthia Keeler, a string theorist at the Niels Bohr Institute in Denmark. “I have a little optimism about it because historically, we have been able to make various unifications. None of those had to be true.”

Despite the difficulty, the potential rewards of unification are great enough to keep physicists searching. Along the way, they’ve discovered new things they wouldn’t have learned had it not been for the quest to find a theory of everything.

Illustration by Sandbox Studio, Chicago with Corinne Mucha

United we hope to stand

No one has yet crafted a complete theory of everything.

It’s hard to unify all of the forces when you can’t even get all of them to work at the same scale. Gravity in particular tends to be a tricky force, and no one has come up with a way of describing the force at the smallest (quantum) level.

Physicists such as Albert Einstein thought seriously about whether gravity could be unified with the electromagnetic force. After all, general relativity had shown that electric and magnetic fields produce gravity and that gravity should also make electromagnetic waves, or light. But combining gravity and electromagnetism, a mission called unified field theory, turned out to be far more complicated than making the electromagnetic theory work. This was partly because there was (and is) no good theory of quantum gravity, but also because physicists needed to incorporate the strong and weak forces.

A different idea, quantum field theory, combines Einstein’s special theory of relativity with quantum mechanics to explain the behavior of particles, but it fails horribly for gravity. That’s largely because anything with energy (or mass, thanks to relativity) creates a gravitational attraction—including gravity itself. To oversimplify somewhat, the gravitational interaction between two particles has a certain amount of energy, which produces an additional gravitational interaction with its own energy, and so on, spiraling to higher energies with each extra piece.

“One of the first things you learn about quantum gravity is that quantum field theory probably isn’t the answer,” says Robert McNees, a physicist at Loyola University Chicago. “Quantum gravity is hard because we have to come up with something new.”

Illustration by Sandbox Studio, Chicago with Corinne Mucha

An evolution of theories

The best-known candidate for a theory of everything is string theory, in which the fundamental objects are not particles but strings that stretch out in one dimension.  

Strings were proposed in the 1970s to try to explain the strong force. This first string theory proved to be unnecessary, but physicists realized it could be joined to another theory called Kaluza-Klein theory as a possible explanation of quantum gravity.

String theory expresses quantum gravity in two dimensions rather than the four, bypassing all the problems of the quantum field theory approach but introducing other complications, namely an extra six dimensions that must be curled up on a scale too small to detect.

Unfortunately, string theory has yet to reproduce the well-tested predictions of the Standard Model.

Another well-known idea is the sci-fi-sounding “loop quantum gravity,” in which space-time on the smallest scales is made of tiny loops in a flexible mesh that produces gravity as we know it.

The idea that space-time is made up of smaller objects, just as matter is made of particles, is not unique to loop quantum gravity. There are many others with equally Jabberwockian names: twistors, causal set theory, quantum graphity and so on. Granular space-time might even explain why our universe has four dimensions rather than some other number.

Loop quantum gravity’s trouble is that it can’t replicate gravity at large scales, such as the size of the solar system, as described by general relativity.

None of these theories has yet succeeded in producing a theory of everything, in part because it's so hard to test them.

“Quantum gravity is expected to kick in only at energies higher than anything that we can currently produce in a lab,” says Lisa Glaser, who works on causal set quantum gravity at the University of Nottingham. “The hope in many theories is now to predict cumulative effects,” such as unexpected black hole behavior during collisions like the ones detected recently by LIGO.

Today, many of the theories first proposed as theories of everything have moved beyond unifying the forces. For example, much of the current research in string theory is potentially important for understanding the hot soup of particles known as the quark-gluon plasma, along with the complex behavior of electrons in very cold materials like superconductors—something seemingly as far removed from quantum gravity as could be. 

“On a day-to-day basis, I may not be doing a calculation that has anything directly to do with string theory,” Keeler says. “But it’s all about these ideas that came from string theory.”

Finding a theory of everything is unlikely to change the way most of us go about our business, even if our business is science. That’s the normal way of things: Chemists and electricians don't need to use quantum electrodynamics, even though that theory underlies their work. But finding such a theory could change the way we think of the universe on a fundamental level.

Even a successful theory of everything is unlikely to be a final theory. If we’ve learned anything from 150 years of unification, it’s that each step toward bringing theories together uncovers something new to learn.

by Matthew R. Francis at June 21, 2016 01:00 PM

Axel Maas - Looking Inside the Standard Model

How to search for dark, unknown things: A bachelor thesis
Today, I would like to write about a recently finished bachelor thesis on the topic of dark matter and the Higgs. Though I will also present the results, the main aim of this entry is to describe an example of such a bachelor thesis in my group. I will try to follow up with more such entries in the future, to give those interested in working in particle physics an idea of what one can do already at a very early stage in one's studies.

The framework of the thesis is the idea that dark matter could interact with the Higgs particle. This is a serious possibility, as both objects are somehow related to mass. There is also not yet any substantial reason why this should not be the case. The only unfortunate problem is: how strong is this effect? Can we measure it, e.g. in the experiments at CERN?

In a master thesis, we are looking into the dynamical features of this idea. This is ongoing, and something I will certainly write about later. Knowing the dynamics, however, is only the first step towards connecting the theory to experiment. To do so, we need the basic properties of the theory. This input will then be put through a simulation of what happens in the experiment. Only this result is really interesting for experimental physicists. They then study how the various imperfections of the experiments change it, and from that they can conclude whether they will be able to detect something. Or not.

In the thesis, we did not yet have the results from the master student's work, so we parametrized the possible outcomes. This mainly meant taking the mass of the dark matter particle and the strength of its interaction with the Higgs as free parameters to play with. This gave us what we call an effective theory. Such a theory does not describe every detail, but it is sufficiently close to study a particular aspect of a theory. In this case, how dark matter should interact with the Higgs at the CERN experiments.

With this effective theory, it was then possible to use simulations of what happens in the experiment. Since dark matter cannot, as the name says, be directly seen, we needed some kind of marker to tell that it has been there. For that purpose we chose the so-called associated production mode.

We knew that the dark matter would escape the experiment undetected. In jargon, this is called missing energy, since we miss the energy of the dark matter particles when we account for everything we see. Since we knew what went in, and know that what goes in must come out, anything not accounted for must have been carried away by something we could not directly see. To make sure that this came from an interaction with the Higgs, we needed a tracer that a Higgs had been involved. The simplest solution was to require that there is still a Higgs in the end. There are also deeper reasons which require that the dark matter in this theory should not only arrive together with a Higgs particle, but should also originate from a Higgs particle before the dark matter particles are emitted. The simplest way to check for this is to require, for technical reasons, that besides the Higgs there is also a so-called Z-boson in the end. Thus, we had what we call a signature: look for a Higgs, a Z-boson, and missing energy.

There is, however, one unfortunate thing in known particle physics which makes this more complicated: neutrinos. These particles are also essentially undetectable for an experiment at the LHC. Thus, when produced, they will also escape undetected as missing energy. Since we detect neither dark matter nor neutrinos, we cannot decide what actually escaped. Unfortunately, the tagging with the Higgs and the Z does not help, as neutrinos can also be produced together with them. This is what we call a background to our signal. Thus, it was necessary to account for this background.

Fortunately, there are experiments which can, with a lot of patience, detect neutrinos. They are very different from the ones we have at the LHC, but they gave us a lot of information on neutrinos. Hence, we knew how often neutrinos would be produced in the experiment. So we would only need to subtract this known background from what the simulation gives. Whatever is left would then be the signal of dark matter. If the remainder is large enough, we would be able to see the dark matter in the experiment. Of course, there are many subtleties involved in this process, which I will skip.
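To give a feel for what "large enough" means, here is a toy sketch (my own illustration with made-up numbers, not the actual simulation of the thesis): for a simple counting experiment, a rough measure of visibility is the expected signal divided by the square root of the expected background.

    import math

    def significance(signal, background):
        """Naive counting-experiment significance S/sqrt(B), ignoring systematic uncertainties."""
        return signal / math.sqrt(background)

    # Hypothetical event counts in the Higgs + Z + missing-energy channel
    background = 400.0                      # assumed expected neutrino background
    for signal in (10.0, 40.0, 100.0):      # assumed dark-matter signal yields
        print(f"S = {signal:5.0f}, B = {background:.0f} -> {significance(signal, background):.1f} sigma")

A signal remaining several sigma above the subtracted neutrino background is roughly what one would call "seeing" something.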

So the student simulated both cases and determined the signal strength. From that she could deduce that the signal grows quickly with the strength of the interaction. She also found that the signal becomes stronger if the dark matter particles become lighter. That is because there is only a finite amount of energy available to produce them, and the more energy is left over to set the dark matter particles in motion, the easier it gets to produce them, an effect known in physics as phase space. In addition, she found that if the dark matter particles have half the mass of the Higgs, their production also becomes very efficient. The reason is a resonance: just as two sounds amplify each other when they have the same frequency, such amplifications can happen in particle physics.
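The generic shape behind such a resonant enhancement is a Breit-Wigner peak. The following toy snippet (my own illustration, not the thesis's calculation; the Higgs mass of 125 GeV and a width of roughly 4 MeV are assumed for definiteness) shows how strongly an s-channel propagator is enhanced when the invariant mass of the produced pair approaches the mass of the intermediate particle:

    def breit_wigner(q, m, width):
        """Relative enhancement of an s-channel propagator at pair invariant mass q."""
        return 1.0 / ((q**2 - m**2)**2 + (m * width)**2)

    m_h, gamma_h = 125.0, 0.004             # assumed Higgs mass and width, in GeV
    reference = breit_wigner(80.0, m_h, gamma_h)
    for q in (80.0, 110.0, 124.0, 125.0):   # pair invariant mass in GeV
        print(q, breit_wigner(q, m_h, gamma_h) / reference)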

The final outcome of the bachelor thesis thus tells us, for given values of the two parameters of the effective theory, how strong our signal would be. Once we know these values from the microscopic theory in the master project, we will know whether we have a chance to see these particles in this type of experiment.

by Axel Maas (noreply@blogger.com) at June 21, 2016 07:35 AM

June 20, 2016

Sean Carroll - Preposterous Universe

Father of the Big Bang

Georges Lemaître died fifty years ago today, on 20 June 1966. If anyone deserves the title “Father of the Big Bang,” it would be him. Both because he investigated and popularized the Big Bang model, and because he was an actual Father, in the sense of being a Roman Catholic priest. (Which presumably excludes him from being an actual small-f father, but okay.)

John Farrell, author of a biography of Lemaître, has put together a nice video commemoration: “The Greatest Scientist You’ve Never Heard Of.” I of course have heard of him, but I agree that Lemaître isn’t as famous as he deserves.

The Greatest Scientist You've Never Heard Of from Farrellmedia on Vimeo.

by Sean Carroll at June 20, 2016 09:10 PM

Robert Helling - atdotde

Restoring deleted /etc from TimeMachine
Yesterday, I managed to empty the /etc directory on my macbook (don't ask how I did it. I was working on subsurface and had written a perl script to move system files around that had to be run with sudo. And I was still debugging...).

Anyway, once I realized what the problem was I did some googling but did not find the answer. So here, as a service to fellow humans googling for help, is how to fix this.

The problem is that in /etc all kinds of system configuration files are stored, and without it the system no longer knows how to do a lot of things. For example, it contains /etc/passwd, which contains a list of all users, their home directories and similar things. Or /etc/shadow, which contains (hashed) passwords, or, and this was most relevant in my case, /etc/sudoers, which contains a list of users who are allowed to run commands with sudo, i.e. execute commands with administrator privileges (in the GUI this shows up as a modal dialog asking you to type in your password to proceed).

In my case, all was gone. But, luckily enough, I had a time machine backup. So I could go 30 minutes back in time and restore the directory contents.

The problem was that after restoring it, it ended up as a symlink to /private/etc and user helling wasn't allowed to access its contents. And I could not sudo my way in, since the system could not determine that I am allowed to sudo, because it could not read /etc/sudoers.

I tried a couple of things including a reboot (as a last resort I figured I could always boot in target disk mode and somehow fix the directory) but it remained in /private/etc and I could not access it.

Finally I found the solution (so here it is): I could look at the folder in Finder (it had a red no-entry sign on it, meaning that I could not open it). But I could right-click and select Information, and there I could open the lock by typing in my password (no idea why that worked) and give myself read (and for that matter write) permissions, and then everything was fine again.
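For reference, what that Finder trick amounts to at the filesystem level is roughly the following (a sketch, not the exact steps I took; it assumes you can run it with root privileges, e.g. from a Recovery Mode terminal, and that, as on a stock macOS install, /private/etc should be owned by root:wheel with mode 0755):

    import os, stat

    p = "/private/etc"
    st = os.stat(p)
    # inspect the current permissions and ownership
    print(oct(stat.S_IMODE(st.st_mode)), st.st_uid, st.st_gid)

    os.chmod(p, 0o755)   # restore rwxr-xr-x so ordinary users can read and traverse /etc again
    os.chown(p, 0, 0)    # owner root, group wheel (gid 0 on macOS); this call requires root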

by Robert Helling (noreply@blogger.com) at June 20, 2016 08:12 PM

June 18, 2016

Jester - Resonaances

Black hole dark matter
The idea that dark matter is made of primordial black holes is very old but has always been in the backwater of particle physics. The WIMP or asymmetric dark matter paradigms are preferred for several reasons such as calculability, observational opportunities, and a more direct connection to cherished theories beyond the Standard Model. But in the recent months there has been more interest, triggered in part by the LIGO observations of black hole binary mergers. In the first observed event, the mass of each of the black holes was estimated at around 30 solar masses. While such a system may well be of boring astrophysical origin, it is somewhat unexpected because typical black holes we come across in everyday life are either a bit smaller (around one solar mass) or much larger (supermassive black hole in the galactic center). On the other hand, if the dark matter halo were made of black holes, scattering processes would sometimes create short-lived binary systems. Assuming a significant fraction of dark matter in the universe is made of primordial black holes, this paper estimated that the rate of merger processes is in the right ballpark to explain the LIGO events.

Primordial black holes can form from large density fluctuations in the early universe. On the largest observable scales the universe is incredibly homogeneous, as witnessed by the uniform temperature of the Cosmic Microwave Background over the entire sky. However, on smaller scales the primordial inhomogeneities could be much larger without contradicting observations. From the fundamental point of view, large density fluctuations may be generated by several distinct mechanisms, for example during the final stages of inflation in the waterfall phase of the hybrid inflation scenario. While it is rather generic that this or a similar process may seed black hole formation in the radiation-dominated era, severe fine-tuning is required to produce the right amount of black holes and to ensure that the resulting universe resembles the one we know.

All in all, it's fair to say that the scenario where all or a significant fraction of dark matter is made of primordial black holes is not completely absurd. Moreover, one typically expects the masses to span a fairly narrow range. Could it be that the LIGO events are the first indirect detection of dark matter made of O(10)-solar-mass black holes? One problem with this scenario is that it is excluded, as can be seen in the plot. Black holes sloshing through the early dense universe accrete the surrounding matter and produce X-rays which could ionize atoms and disrupt the Cosmic Microwave Background. In the 10-100 solar mass range relevant for LIGO this effect currently gives the strongest constraint on primordial black holes: according to this paper they are allowed to constitute not more than 0.01% of the total dark matter abundance. In astrophysics, however, not only signals but also constraints should be taken with a grain of salt. In this particular case, the word in town is that the derivation contains a numerical error and that the corrected limit is 2 orders of magnitude less severe than what's shown in the plot. Moreover, this limit strongly depends on the model of accretion, and more favorable assumptions may buy another order of magnitude or two. All in all, the possibility of dark matter made of primordial black holes in the 10-100 solar mass range should not be completely discarded yet. Another possibility is that black holes make up only a small fraction of dark matter, but the merger rate is faster, closer to the estimate of this paper.

Assuming this is the true scenario, how will we know? Direct detection of black holes is discouraged, while the usual cosmic ray signals are absent. Instead, in most of the mass range, the best probes of primordial black holes are various lensing observations. For LIGO black holes, progress may be made via observations of fast radio bursts. These are strong radio signals of (probably) extragalactic origin and millisecond duration. The radio signal passing near a O(10)-solar-mass black hole could be strongly lensed, leading to repeated signals detected on Earth with an observable time delay. In the near future we should observe hundreds of such repeated bursts, or obtain new strong constraints on primordial black holes in the interesting mass ballpark. Gravitational wave astronomy may offer another way.  When more statistics is accumulated, we will be able to say something about the spatial distributions of the merger events. Primordial black holes should be distributed like dark matter halos, whereas astrophysical black holes should be correlated with luminous galaxies. Also, the typical eccentricity of the astrophysical black hole binaries should be different.  With some luck, the primordial black hole dark matter scenario may be vindicated or robustly excluded  in the near future.
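To see why millisecond-duration radio bursts are the right probe, note that the characteristic lensing time delay of a point-mass lens is set by the light-crossing time of its Schwarzschild radius, of order 4GM/c³ up to a geometric factor of order one. A rough sketch of the numbers (my own back-of-envelope, not taken from the papers cited above):

    G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30   # SI units

    for m in (10, 30, 100):                      # black hole mass in solar masses
        dt = 4 * G * m * M_sun / c**3            # characteristic lensing time delay, seconds
        print(f"{m:4d} M_sun -> ~{dt*1e3:.2f} ms")

Delays of a fraction of a millisecond to a few milliseconds are comparable to the burst duration itself, which is what makes repeated, time-shifted copies of a burst a realistic signature.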

See also these slides for more details. 

by Jester (noreply@blogger.com) at June 18, 2016 10:06 AM

June 17, 2016

Jacques Distler - Musings

Coriolis

I really like the science fiction TV series The Expanse. In addition to a good plot and a convincing vision of human society two centuries hence, it depicts, as Phil Plait observes, a lot of good science in a matter-of-fact, almost off-hand fashion. But one scene (really, just a few dialogue-free seconds in a longer scene) has been bothering me. In it, Miller, the hard-boiled detective living on Ceres, pours himself a drink. And we see — as the whiskey slowly pours from the bottle into the glass — that the artificial gravity at the lower levels (where the poor people live) is significantly weaker than near the surface (where the rich live) and that there’s a significant Coriolis effect. Unfortunately, the effect depicted is 3 orders-of-magnitude too big.

Pouring a drink on Ceres. Significant Coriolis deflection is apparent.

To explain, six million residents inhabit the interior of the asteroid, which has been spun up to provide an artificial gravity. Ceres has a radius $R_C = 4.73\times 10^5$ m and a surface gravity $g_C = 0.27\,\text{m}/\text{s}^2$. The rotational period is supposed to be 40 minutes ($\omega \sim 2.6\times 10^{-3}\,/\text{s}$). Near the surface, this yields $\omega^2 R_C(1-\epsilon^2) \equiv \omega^2 R_C - g_C \sim 0.3$ g. On the innermost level, $R=\tfrac{1}{3} R_C$, and the effective artificial gravity is only 0.1 g.

Ceres Station, dug into the interior of the asteroid.

So how big is the Coriolis effect in this scenario?

The equations¹ to be solved are

$$
\begin{split}
\frac{d^2 x}{d t^2} &= \omega^2(1-\epsilon^2)\, x - 2 \omega \frac{d y}{d t}\\
\frac{d^2 y}{d t^2} &= \omega^2(1-\epsilon^2)\, (y-R) + 2 \omega \frac{d x}{d t}
\end{split}
\tag{1}
$$

with initial conditions $x(0)=\dot{x}(0)=y(0)=\dot{y}(0)=0$. The exact solution is elementary, but for $\omega t\ll 1$, i.e. for times much shorter than the rotational period, we can approximate

$$
\begin{split}
x(t) &= \frac{1}{3} (1-\epsilon^2)\, R\, (\omega t)^3 + O\bigl((\omega t)^5\bigr),\\
y(t) &= - \tfrac{1}{2} (1-\epsilon^2)\, R\, (\omega t)^2 + O\bigl((\omega t)^4\bigr)
\end{split}
\tag{2}
$$

From (2), if the whiskey falls a distance $h\ll R$, it undergoes a lateral displacement

$$
\Delta x = \tfrac{2}{3}\, h\, \left(\frac{2h}{(1-\epsilon^2)R}\right)^{1/2}
\tag{3}
$$

For $h=16$ cm and $R=\tfrac{1}{3}R_C$, this is $\frac{\Delta x}{h} = 10^{-3}$, which is 3 orders of magnitude smaller than depicted in the screenshot above².
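For concreteness, a quick numerical check of (3) with the numbers quoted above (my own sketch):

    import math

    R_C   = 4.73e5                     # Ceres radius, m
    g_C   = 0.27                       # Ceres surface gravity, m/s^2
    omega = 2 * math.pi / (40 * 60)    # 40-minute spin period -> ~2.6e-3 /s

    eps2 = g_C / (omega**2 * R_C)      # epsilon^2, defined so that omega^2 R_C (1 - eps^2) = omega^2 R_C - g_C
    R    = R_C / 3                     # innermost level
    h    = 0.16                        # height of the pour, m

    print(omega**2 * R_C * (1 - eps2) / 9.81)                 # ~0.3, the effective surface gravity in g
    dx = (2/3) * h * math.sqrt(2 * h / ((1 - eps2) * R))      # eq. (3)
    print(dx / h, dx * 1000)                                  # ~1e-3, i.e. a deflection of only ~0.16 mm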

So, while I love the idea of the Coriolis effect appearing — however tangentially — in a TV drama, this really wasn’t the place for it.


¹ Here, I’m approximating Ceres to be a sphere of uniform density. That’s not really correct, but since the contribution of Ceres’ intrinsic gravity to (3) is only a 5% effect, the corrections from non-uniform density are negligible.

² We could complain about other things: like that the slope should be monotonic (very much unlike what’s depicted). But that seems a minor quibble, compared to the effect being a thousand times too large.

by distler (distler@golem.ph.utexas.edu) at June 17, 2016 11:10 PM

Quantum Diaries

Enough data to explore the unknown

The Large Hadron Collider (LHC) at CERN has already delivered more high energy data than it had in 2015. To put this in numbers, the LHC has produced 4.8 fb-1, compared to 4.2 fb-1 last year, where fb-1 represents one inverse femtobarn, the unit used to evaluate the data sample size. This was achieved in just one and a half months, compared to five months of operation last year.
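As a quick sanity check of those numbers (my own arithmetic, not from the article), the average delivery rate so far this year is roughly four times last year's, and a naive linear extrapolation through November gives a total of around 21 fb-1:

    lumi_2016, months_2016 = 4.8, 1.5       # fb^-1 delivered in about 1.5 months of 2016
    lumi_2015, months_2015 = 4.2, 5.0       # fb^-1 delivered in about 5 months of 2015

    rate_2016 = lumi_2016 / months_2016     # ~3.2 fb^-1 per month
    rate_2015 = lumi_2015 / months_2015     # ~0.8 fb^-1 per month
    print(rate_2016 / rate_2015)            # roughly a factor of 4

    months_left = 5                         # mid-June to November, very roughly
    print(lumi_2016 + rate_2016 * months_left)   # ~21 fb^-1 by the end of the run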

With this data at hand, and the projected 20-30 fb-1 until November, both the ATLAS and CMS experiments can now explore new territories and, among other things, cross-check on the intriguing events they reported having found at the end of 2015. If this particular effect is confirmed, it would reveal the presence of a new particle with a mass of 750 GeV, six times the mass of the Higgs boson. Unfortunately, there was not enough data in 2015 to get a clear answer. The LHC had a slow restart last year following two years of major improvements to raise its energy reach. But if the current performance continues, the discovery potential will increase tremendously. All this to say that everyone is keeping their fingers crossed.

If any new particle were found, it would open the doors to bright new horizons in particle physics. Unlike the discovery of the Higgs boson in 2012, if the LHC experiments discover an anomaly or a new particle, it would bring a new understanding of the basic constituents of matter and how they interact. The Higgs boson was the last missing piece of the current theoretical model, called the Standard Model. This model can no longer accommodate new particles. However, it has been known for decades that this model is flawed, but so far, theorists have been unable to predict which theory should replace it and experimentalists have failed to find the slightest concrete sign of a broader theory. We need new experimental evidence to move forward.

Although the new data is already being reconstructed and calibrated, it will remain “blinded” until a few days prior to August 3, the opening date of the International Conference on High Energy Physics. This means that until then, the region where this new particle could be remains masked to prevent biasing the data reconstruction process. The same selection criteria that were used for last year’s data will then be applied to the new data. If a similar excess is still observed at 750 GeV in the 2016 data, there will be no doubt about the presence of a new particle.

Even if this particular excess turns out to be just a statistical fluctuation, the bane of physicists’ existence, there will still be enough data to explore a wealth of possibilities. Meanwhile, you can follow the LHC activities live or watch CMS and ATLAS data samples grow. I will not be available to report on the news from the conference in August due to hiking duties, but if anything new is announced, even I expect to hear its echo reverberating in the Alps.

Pauline Gagnon

To find out more about particle physics, check out my book « Who Cares about Particle Physics: making sense of the Higgs boson, the Large Hadron Collider and CERN », which can already be ordered from Oxford University Press. In bookstores after 21 July. Easy to read: I understood everything!


The total amount of data delivered in 2016 at an energy of 13 TeV to the experiments by the LHC (blue graph) and recorded by CMS (yellow graph) as of 17 June. One fb-1 of data is equivalent to 1000 pb-1.

by Pauline Gagnon at June 17, 2016 01:13 PM
