Particle Physics Planet


September 27, 2016

Peter Coles - In the Dark

The Possible Plumes of Europa

I was too busy yesterday to write a post about the latest hot news from the NASA Hubble Space Telescope, so here’s a quick catch-up.

It seems that Europa, the smallest of the four Galilean moons of Jupiter, may from time to time be releasing “plumes” of water vapour. It has long been speculated that there might be large quantities of liquid water under Europa’s extremely smooth icy crust. Here’s a picture of the possible plumes (at the bottom left of the image), onto which a high-resolution picture of the surface of Europa has been superimposed.

[Image: composite showing the possible plumes of Europa]

Picture Credits: NASA/ESA/W. Sparks (STScI)/USGS Astrogeology Science Center

There’s also a short video explaining the possible discovery here.

It’s not obvious at first sight that features like the one shown above are caused by water erupting through Europa’s surface. On the face of it they could, for example, be caused by the impact of a smaller body. However, long-term observations of this phenomenon suggest out-gassing is much more likely. The Hubble Space Telescope’s Imaging Spectrograph was used to study what are essentially aurorae powered by Jupiter’s strong magnetic field, in which the presence of excited states of hydrogen and oxygen provides evidence for the disintegration of water molecules through interaction with electrons in this highly energetic environment. The images were taken when Europa was in front of Jupiter, so the plumes were seen in silhouette.

There is also evidence that the appearance of these plumes is periodic, and that they are more likely to occur when Europa is further from Jupiter than when it is closer. A plausible theory is that water is released from cracks in Europa’s surface which open and close owing to a combination of tidal gravitational and magnetic effects.

I wouldn’t say this was definite proof of the water interpretation. These observations push the capability of the Hubble Space Telescope to the limit because the features are so faint. For reference, here’s what the raw image looks like (left) and what it looks like with enhanced contrast (right):

[Image: raw (left) and contrast-enhanced (right) versions of the Hubble image]

 

Verification of these results through independent means is clearly an important priority, though likely to prove challenging. The plume interpretation is possible, but whether it is yet probable I couldn’t say!

 

 


by telescoper at September 27, 2016 11:15 AM

astrobites - astro-ph reader's digest

Astronomical celebrity, or just another pretender? The curious case of CR7

Title: No evidence for Population III stars or a Direct Collapse Black Hole in the z = 6.6 Lyman-α emitter ‘CR7’

Authors: R. A. A. Bowler, R. J. McLure, J. S. Dunlop, D. J. McLeod, E. R. Stanway , J. J. Eldridge, M. J. Jarvis

First Author’s Institution: University of Oxford, UK


Last year Astrobites covered the discovery of CR7, a luminous galaxy in the early universe (a redshift of 6.6, approximately 700 million years after the big bang; equivalent to when the Universe was 4 years old if we scale its age to an average human lifespan of 70 years). It’s made up of three spatial components (see Figure 2), one of which appeared to be a peculiarity: its light contained very few emission lines, which suggested that it contained very few metals (astronomy parlance for elements heavier than helium). It’s this lack of metals that got astronomers puzzled, and granted theorists’ imaginations a licence to run wild.

CR7

Figure 1. CR7: Population III stars, direct collapse black hole, footballing legend, or just another hotshot galaxy? Background image credit: ESO/M. Kornmesser

Metals are formed in the cores of stars. When a massive star dies it goes supernova, flinging its metals out into nearby gas during the explosion. This gas is then said to be ‘enriched’ with metals. Subsequent generations of stars forming from this enriched gas will contain more metals than the previous generation, visible through emission lines in the light from the stars. The very first generation of stars (known as population III stars) would necessarily be almost completely metal free. Gas from the early universe that hasn’t been host to any stars would also contain very few metals.

So when CR7 didn’t appear to contain many metals, the original paper authors speculated that it could be a collection of population III stars. This would represent the very first discovery of such stars in the universe – pretty big news! Other authors speculated that CR7 could be something altogether different, a direct collapse black hole.

‘Normal’ black holes are formed when massive stars die, and collapse in on themselves due to the immense gravity caused by their mass. A direct collapse black hole is a different beast, forming from a giant, primordial gas cloud that, under the right conditions, collapses down into a single, supermassive star. This supermassive star would be unlike any star in the universe today, and would quickly collapse directly into a black hole. The immense black hole that is formed would then start to grow by gathering nearby gas, and this accretion would emit light.

One of the conditions for such a collapse to occur is low metallicity – unlike gas clouds with lots of metals, metal-free clouds tend not to break up into smaller clouds that lead to multiple lower-mass stars, but instead collapse globally as a whole. This low metallicity would show up in the spectrum of light we observe from the accreting black hole (for more information on direct collapse black holes check out some of these previous astrobites). Direct collapse black holes have been theorised but never seen, so this would again make CR7 a world first. But is CR7 really as groundbreaking as it first seems?

CR7 in WFC3

Figure 2. Hubble Space Telescope Wide Field Camera 3 image of CR7 in two ultraviolet / optical bands. The three components are labelled; object A is responsible for the peculiar spectrum of light observed previously.

The authors of today’s paper carried out new observations of CR7 with both ground and space based telescopes. In particular, they find strong evidence for an optical emission line, namely doubly ionised oxygen (you can see this in Figure 3 as the very high green data point above the green line for object A near the grey [OIII] line). Such a line suggests that whatever is powering the light from CR7 contains metals, which could sound the death knell for both the Population III and Direct Collapse Black Hole explanations.

They also find that the emission line from singly ionised helium is weaker than seen previously. Such a line is typically only produced in the presence of very energetic photons, which population III stars are theorised to produce, so the weaker detection provides additional evidence against the population III claim.

So if it’s not Population III stars or a direct collapse black hole, what is powering CR7? The authors suggest a couple of more ‘standard’ explanations. The ionised helium line could be explained by a more traditional accreting black hole at the center of the galaxy, something almost all galaxies appear to have. Alternatively, it could be a low-metallicity galaxy undergoing an intense period of star formation. New models of stellar light that include binary stars, or Wolf-Rayet stars, could also help explain the spectrum.

So has CR7 lost its claim to fame? The latest evidence suggests so, but the most damning evidence will come soon with the launch of the James Webb Space Telescope, which will be able to probe the (rest-frame) optical region of the spectrum at much higher resolution. Until then, the true identity of CR7 will remain just out of reach.

CR7 photometry

Figure 3: The new photometry measurements for the three components of CR7. Ground-based and space-based observations are shown as diamonds and circles, respectively. Each line shows the best fit to a Stellar Population Synthesis model, which models the light from a collection of stars based on group properties such as their ages and metallicities.

by Christopher Lovell at September 27, 2016 07:30 AM

astrobites - astro-ph reader's digest

Gravity-Darkened Seasons on Planets

Title: Gravity-Darkened Seasons: Insolation Around Rapid Rotators
Authors: John P. Ahlers
First Author’s Institution: Physics Department, University of Idaho

On Earth, our seasons come about due to the Earth’s tilted rotational axis relative to its orbital plane (and not due to changes in distance from the Sun, as is commonly believed!) Essentially, the tilt means that each hemisphere receives a varying amount of radiation from the Sun over the course of the year. But what would happen if the Sun were to radiate at different temperatures across its surface?

It’s hard to imagine such a scenario, but a phenomenon known as gravity darkening causes rapidly spinning stars to have non-uniform surface temperatures due to their non-spherical shape. As a star spins, its equator bulges outwards as a result of centrifugal forces (specifically, into an oblate spheroid). Since a star is made of gas, this has interesting implications for its temperature. If its equator is bulging outwards, the gas at the equator experiences a lower surface gravity (being slightly further away from the star’s center), and hence has a lower density and temperature. The equator of a spinning star is thus considered to be “gravitationally darkened”. The gas at the star’s poles, on the other hand, has a slightly higher density and temperature (“gravitational brightening”) since it is closer to the center of the star relative to the gas at the equatorial bulge. Thus, there is a temperature gradient between the poles and equator of a rapidly rotating star.
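
To get a rough feel for the size of the effect, here is a minimal Python sketch (my own illustration, not from the paper) that applies the classical von Zeipel relation, in which the effective temperature scales as the effective gravity to the 1/4 power, to an oblate star. The stellar mass, radii and rotation period below are arbitrary, made-up values.

```python
import numpy as np

# Von Zeipel gravity darkening sketch: T_eff ~ g_eff^(1/4).
# All stellar parameters are illustrative assumptions, not from the paper.
G = 6.674e-11            # gravitational constant [m^3 kg^-1 s^-2]
M = 2.0 * 1.989e30       # stellar mass [kg] (assumed ~2 solar masses)
R_pole = 1.8 * 6.957e8   # polar radius [m] (assumed)
R_eq = 2.2 * 6.957e8     # equatorial radius [m] (assumed oblateness)
Omega = 2 * np.pi / (12 * 3600.0)  # spin rate for an assumed 12-hour period

# Effective gravity: Newtonian gravity, minus the centrifugal term at the equator
g_pole = G * M / R_pole**2
g_eq = G * M / R_eq**2 - Omega**2 * R_eq

# The pole-to-equator temperature ratio follows from the gravity ratio
T_ratio = (g_pole / g_eq)**0.25
print(f"g_pole/g_eq = {g_pole / g_eq:.2f},  T_pole/T_eq = {T_ratio:.2f}")
```

With these made-up numbers the pole comes out roughly 20% hotter than the equator, which is the kind of contrast that drives the effects discussed below.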

While this is an interesting phenomenon in itself, the author of today’s paper introduces a new twist: what if there’s a planet orbiting such a star, and what implications does this gravity darkening have for the planet’s seasonal temperature variations? Compared to Earth, exoplanets potentially have more complex factors governing their surface temperature variations. For example, if a planet’s orbit is inclined relative to the star’s equator (see Figure 1), it can preferentially receive radiation from different parts of its star during the course of its orbit.


Fig 1: All the parameters describing a planet’s orbit. In this paper, the author mainly focuses on the inclination i, which is the angle of a planet’s orbital plane relative to the star’s equator. (Image courtesy of Wikipedia)

The author claims that this effect can cause a planet’s surface temperature to vary by as much as 15% (Figure 2). This essentially doubles the number of seasonal temperature variations a planet can experience over the course of an orbit. However, the author does not attempt to model the complex heat transfer that occurs on the planet’s surface due to the atmosphere and winds.


Fig. 2: The left plot shows the flux a planet receives if it’s orbiting around a typical non-rotating star, while the right plot shows the effects of a rotating, gravitationally darkened star. The different coloured curves indicate the tilt of a planet’s orbit (with 0 degrees corresponding to a planet strictly orbiting a star’s equator, while 90 corresponds to an orbit around a star’s poles).

Not only that, but there is also some variation in the type of radiation that a planet receives during the course of its orbit. Since the poles of a rotating star are at a higher temperature, they radiate relatively more UV radiation than the equatorial regions. The author claims that a planet on a highly inclined orbit will alternate between receiving radiation preferentially from the star’s poles and from its equator, causing the amount of UV radiation it receives to vary by as much as 80%. High levels of UV radiation can cause a planet’s atmosphere to evaporate, as well as drive other complex photochemical reactions (such as those responsible for the hazy atmosphere on Saturn’s moon Titan).
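
As a rough illustration of why the UV contrast between pole and equator is so much stronger than the contrast in total flux, here is a small sketch (again my own, not from the paper) comparing blackbody emission from an assumed hot pole and cooler equator over an assumed 100-300 nm band; the two temperatures are made-up round numbers.

```python
import numpy as np
from scipy.integrate import quad

# Compare UV output of an assumed hot pole vs cooler equator by integrating
# the Planck function over an assumed 100-300 nm band. Illustrative only.
h, c, k = 6.626e-34, 2.998e8, 1.381e-23

def planck(lam, T):
    """Blackbody spectral radiance B_lambda(T)."""
    return 2 * h * c**2 / lam**5 / (np.exp(h * c / (lam * k * T)) - 1.0)

def uv_band(T, lam_min=100e-9, lam_max=300e-9):
    """Radiance integrated over the assumed UV band."""
    return quad(planck, lam_min, lam_max, args=(T,))[0]

T_pole, T_eq = 9000.0, 7500.0   # assumed pole / equator temperatures [K]
print(f"UV flux ratio (pole/equator):    {uv_band(T_pole) / uv_band(T_eq):.1f}")
print(f"Total flux ratio (pole/equator): {(T_pole / T_eq)**4:.1f}")
```

Because the UV band sits on the steep Wien tail of both blackbodies, the pole-to-equator UV ratio comes out several times larger than the bolometric ratio, which is why the orbital inclination matters so much for the UV a planet receives.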

As we discover new exoplanets over the course of the coming years, we will likely find examples of planets potentially experiencing these gravitationally darkened seasons. This will have interesting implications on how we view the habitability of these other worlds.

by Anson Lam at September 27, 2016 07:03 AM

September 26, 2016

Christian P. Robert - xi'an's og

importance sampling by kernel smoothing

As noted in an earlier post, Bernard Delyon and François Portier have recently published a paper in Bernoulli about improving the speed of convergence of an importance sampling estimator of

∫ φ(x) dx

when replacing the true importance distribution ƒ with a leave-one-out (!) kernel estimate in the importance sampling estimator… They also consider a debiased version that converges even faster at the rate

n h_n^{d/2}

where n is the sample size, h the bandwidth and d the dimension. There is however a caveat, namely a collection of restrictive assumptions on the components of this new estimator:

  1. the integrand φ has compact support, is bounded, and satisfies some Hölder-type regularity condition;
  2. the importance distribution ƒ is upper and lower bounded, and its r-th order derivatives are upper bounded;
  3. the kernel K is of order r, has exponential tails, and is symmetric;
  4. the leave-one-out correction for bias has a cost O(n²), compared with the O(n) cost of the regular Monte-Carlo estimator;
  5. the bandwidth h in the kernel estimator has a rate in n linked with the dimension d and the regularity indices of ƒ and φ

and this bandwidth needs to be evaluated as well. In the paper the authors rely on a control variate for which the integral is known, but which “looks like φ”, a strong requirement in appearance only since this new function is the convolution of φ with a kernel estimate of ƒ whose expectation is the original importance estimate of the integral. This sounds convoluted but this is a generic control variate nonetheless! But this is also a costly step. Because of the kernel estimation aspect, the method deteriorates with the dimension of the variate x. However, since φ(x) is a real number, I wonder if running the non-parametric density estimate directly on the sample of φ(x)’s would lead to an improved estimator…
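
For concreteness, here is a toy sketch of the basic mechanism (my own illustration, not the authors’ implementation): estimate an integral by importance sampling, but replace the known sampling density in the weights with its leave-one-out Gaussian kernel estimate. The integrand, sampling density, sample size and bandwidth below are all arbitrary choices.

```python
import numpy as np
from scipy.stats import norm

# Toy illustration: estimate the integral of phi (a normal pdf, so the true
# value is 1) with samples from f = N(0,1), replacing f in the importance
# weights by a leave-one-out Gaussian kernel density estimate.
rng = np.random.default_rng(0)

def phi(x):
    return norm.pdf(x, loc=0.5, scale=0.2)   # integrand, integral = 1

n, h = 2000, 0.2                  # sample size and hand-picked bandwidth
x = rng.normal(size=n)            # draws from the sampling density f

# Leave-one-out kernel density estimate of f at each sample point
K = norm.pdf((x[:, None] - x[None, :]) / h)  # Gaussian kernel matrix
np.fill_diagonal(K, 0.0)                     # drop the j = i term
f_loo = K.sum(axis=1) / ((n - 1) * h)

est_kde = np.mean(phi(x) / f_loo)          # kernel-smoothed IS estimate
est_cls = np.mean(phi(x) / norm.pdf(x))    # classical importance sampling
print(f"LOO-KDE IS: {est_kde:.4f}   classical IS: {est_cls:.4f}   truth: 1.0")
```

This ignores the debiasing and control-variate refinements of the paper, but it shows the basic move of swapping the exact density for its leave-one-out kernel estimate in the weights.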


Filed under: Books, Statistics Tagged: Bernoulli, importance sampling, leave-one-out calibration, non-parametric kernel estimation, unbiased estimation, variance correction

by xi'an at September 26, 2016 10:16 PM

Clifford V. Johnson - Asymptotia

Where I’d Rather Be…?

Right now, I'd much rather be on the sofa reading a novel (or whatever it is she's reading)....instead of drawing all those floorboards near her. (Going to add "rooms with lots of floorboards" to [...] Click to continue reading this post

The post Where I’d Rather Be…? appeared first on Asymptotia.

by Clifford at September 26, 2016 08:51 PM

Emily Lakdawalla - The Planetary Society Blog

New Findings are Conclusive: Europa is crying out for exploration
New scientific findings add to the evidence that Europa is spouting its liquid ocean into space. NASA has a mission to Europa in the works, but it wouldn't launch for at least a decade. Congress can make it happen faster, but it all depends on whether they can pass a budget this year.

September 26, 2016 06:36 PM

astrobites - astro-ph reader's digest

Write for Astrobites in Spanish

We are looking for enthusiastic students to join the “Astrobites en Español” team.

Requirements: Preferably master’s or PhD students in physics or astronomy, fluent in Spanish and English. We ask you to submit:

  • One “astrobito” with original content in Spanish (for example, something like this). You should choose a paper that appeared on astro-ph in the last three months and summarise it at an appropriate level for undergraduate students. We ask that it not be in your specific area of expertise, and we allow a maximum of 1000 words.
  • A brief (200 word maximum) note, also in Spanish, where you explain your motivation to write for Astrobitos.

Commitment: We will ask you to write a post about once per month, and to edit with a similar frequency. You would also have the opportunity to represent Astrobitos at conferences. Our authors dedicate a couple of hours a month to developing material for Astrobitos.

(There is no monetary compensation for writing for Astrobitos. Our work is ad honorem.)

If you are interested, please send the material to write4astrobitos@astrobites.org with the subject “Material para Astrobitos”. The deadline is November 1st, 2016. Thanks!

by Astrobites at September 26, 2016 03:37 PM

The n-Category Cafe

Euclidean, Hyperbolic and Elliptic Geometry

There are two famous kinds of non-Euclidean geometry: hyperbolic geometry and elliptic geometry (which almost deserves to be called ‘spherical’ geometry, but not quite because we identify antipodal points on the sphere).

In fact, these two kinds of geometry, together with Euclidean geometry, fit into a unified framework with a parameter $s \in \mathbb{R}$ that tells you the curvature of space:

  • when $s \gt 0$ you’re doing elliptic geometry

  • when $s = 0$ you’re doing Euclidean geometry

  • when $s \lt 0$ you’re doing hyperbolic geometry.

This is all well-known, but I’m trying to explain it in a course I’m teaching, and there’s something that’s bugging me.

It concerns the precise way in which elliptic and hyperbolic geometry reduce to Euclidean geometry as $s \to 0$. I know this is a problem of deformation theory involving a group contraction, indeed I know all sorts of fancy junk, but my problem is fairly basic and this junk isn’t helping.

Here’s the nice part:

Give $\mathbb{R}^3$ a bilinear form that depends on the parameter $s \in \mathbb{R}$:

$$ v \cdot_s w = v_1 w_1 + v_2 w_2 + s v_3 w_3 $$

Let $SO_s(3)$ be the group of linear transformations of $\mathbb{R}^3$ having determinant 1 that preserve $\cdot_s$. Then:

  • when $s \gt 0$, $SO_s(3)$ is isomorphic to the symmetry group of elliptic geometry,

  • when $s = 0$, $SO_s(3)$ is isomorphic to the symmetry group of Euclidean geometry,

  • when $s \lt 0$, $SO_s(3)$ is isomorphic to the symmetry group of hyperbolic geometry.

This is sort of obvious except for $s = 0$. The cool part is that it’s still true in the case $s = 0$! The linear transformations having determinant 1 that preserve the bilinear form

$$ v \cdot_0 w = v_1 w_1 + v_2 w_2 $$

look like this:

$$ \left( \begin{array}{ccc} \cos \theta & -\sin \theta & 0 \\ \sin \theta & \cos \theta & 0 \\ a & b & 1 \end{array} \right) $$

And these form a group isomorphic to the Euclidean group — the group of transformations of the plane generated by rotations and translations!
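
If you want a quick numerical sanity check (my own, not part of the post), here are a few lines verifying that a matrix of this form has determinant 1 and preserves the degenerate form $v \cdot_0 w$, whose Gram matrix is $\mathrm{diag}(1,1,0)$:

```python
import numpy as np

# Check that the displayed matrix preserves v ._0 w = v1*w1 + v2*w2,
# i.e. M^T Q M = Q with Q = diag(1, 1, 0), and that det M = 1.
theta, a, b = 0.7, 1.3, -2.1          # arbitrary test values
M = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [a,              b,             1.0]])

Q = np.diag([1.0, 1.0, 0.0])          # Gram matrix of the s = 0 form
print(np.allclose(M.T @ Q @ M, Q))        # True: M preserves ._0
print(np.isclose(np.linalg.det(M), 1.0))  # True: det M = 1
```

The upper-left 2×2 block is a rotation and the bottom row $(a, b, 1)$ carries the translation data, which is how the rotations-plus-translations structure of the Euclidean group shows up.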

So far, everything sounds pleasantly systematic. But then things get a bit quirky:

  • Elliptic case. When $s \gt 0$, the space $X = \{v \cdot_s v = 1\}$ is an ellipsoid. The 1d linear subspaces of $\mathbb{R}^3$ having nonempty intersection with $X$ are the points of elliptic geometry. The 2d linear subspaces of $\mathbb{R}^3$ having nonempty intersection with $X$ are the lines. The group $SO_s(3)$ acts on the space of points and the space of lines, preserving the obvious incidence relation.

Why not just use $X$ as our space of points? This would give a sphere, and we could use great circles as our lines—but then distinct lines would always intersect in two points, and two points would not determine a unique line. So we want to identify antipodal points on the sphere, and one way is to do what I’ve done.

  • Hyperbolic case. When $s \lt 0$, the space $X = \{v \cdot_s v = -1\}$ is a hyperboloid with two sheets. The 1d linear subspaces of $\mathbb{R}^3$ having nonempty intersection with $X$ are the points of hyperbolic geometry. The 2d linear subspaces of $\mathbb{R}^3$ having nonempty intersection with $X$ are the lines. The group $SO_s(3)$ acts on the space of points and the space of lines, preserving the obvious incidence relation.

This time $X$ is a hyperboloid with two sheets, but my procedure identifies antipodal points, leaving us with a single sheet. That’s nice.

But the obnoxious thing is that in the hyperbolic case I took $X$ to be the set of points with $v \cdot_s v = -1$, instead of $v \cdot_s v = 1$. If I hadn’t switched the sign like that, $X$ would be the hyperboloid with one sheet. Maybe there’s a version of hyperbolic geometry based on the one-sheeted hyperboloid (with antipodal points identified), but nobody seems to talk about it! Have you heard about it? If not, why not?

Next:

  • Euclidean case. When $s = 0$, the space $X = \{v \cdot_s v = 1\}$ is a cylinder. The 1d linear subspaces of $\mathbb{R}^3$ having nonempty intersection with $X$ are the lines of Euclidean geometry. The 2d linear subspaces of $\mathbb{R}^3$ having nonempty intersection with $X$ are the points. The group $SO_s(3)$ acts on the space of points and the space of lines, preserving their incidence relation.

Yes, any point $(a,b,c)$ on the cylinder

$$ X_0 = \{(a,b,c) : \; a^2 + b^2 = 1 \} $$

determines a line in the Euclidean plane, namely the line

$$ a x + b y + c = 0 $$

and antipodal points on the cylinder determine the same line. I’ll let you figure out the rest, or tell you if you’re curious.

The problem with the Euclidean case is that points and lines are getting switched! Points correspond to certain 2d subspaces of $\mathbb{R}^3$, and lines to certain 1d subspaces.

You may just tell me that I got the analogy backwards. Indeed, in elliptic geometry every point has a line orthogonal to it, and vice versa. So we can switch what counts as points and what counts as lines in that case, without causing trouble. Unfortunately, it seems that for hyperbolic geometry this is not true.

There’s got to be some way to smooth things down and make them nice. I could explain my favorite option, and why it doesn’t quite work, but I shouldn’t pollute your brain with my failed ideas. At least not until you try the exact same ideas.

I’m sure someone has figured this out already, somewhere.

by john (baez@math.ucr.edu) at September 26, 2016 03:23 PM

ZapperZ - Physics and Physicists

10 Years Of Not Even Wrong
Physics World has a provocative article and podcast to commemorate the 10-year anniversary of Peter Woit's devastating criticism of String Theory in his book "Not Even Wrong".

Not Even Wrong coincided with the publication of another book – The Trouble with Physics – that had a similar theme and tone, penned by Woit’s friend and renowned physicist Lee Smolin. Together, the two books put the theory and its practitioners under a critical spotlight and took string theory’s supposed inadequacies to task. The books sparked a sensation both in the string-theory community and in the wider media, which until then had heard only glowing reports of the theory’s successes. 

Interestingly enough, the few students that I've encountered who told me that they want to go into String Theory have never heard of or were not aware of Woit's book. I can understand NOT WANTING to read it, but to not even be aware of it and what it is about sounds rather .... naive. This is a prominent physicist who produced a series of undeniable criticisms of a particular field of study that you want to go into. Not only should you be aware of it, but you need to read it and figure it out.

It is still a great book to read even if it is 10 years old now.

Zz.

by ZapperZ (noreply@blogger.com) at September 26, 2016 02:05 PM

Christian P. Robert - xi'an's og

maximum of a Dirichlet vector

An intriguing question on Stack Exchange this weekend, about the distribution of max{p¹,p²,…}, the maximum component of a Dirichlet vector Dir(a¹,a²,…) with arbitrary hyper-parameters. Writing the density of this random variable is feasible, using its connection with a Gamma vector, but I could not find a closed-form expression. If there is such an expression, it may follow from the many properties of the Dirichlet distribution and I’d be interested in learning about it. (Very nice stamp, by the way! I wonder if the original formula was made with LaTeX…)
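
In the absence of a closed form, the Gamma representation at least makes simulation trivial. Here is a quick Monte Carlo sketch (my own, with arbitrary hyper-parameters) of the distribution of the maximum component:

```python
import numpy as np

# A Dirichlet vector is a normalised vector of independent Gamma variates,
# so the maximum component is easy to simulate even without a closed form.
rng = np.random.default_rng(42)
alpha = np.array([0.7, 1.2, 2.5, 4.0])   # arbitrary hyper-parameters

def max_dirichlet(alpha, n=100_000):
    g = rng.gamma(shape=alpha, size=(n, len(alpha)))   # independent Gammas
    p = g / g.sum(axis=1, keepdims=True)               # Dirichlet vector
    return p.max(axis=1)                               # maximum component

m = max_dirichlet(alpha)
print(f"E[max] ~ {m.mean():.3f},  P(max > 0.5) ~ {(m > 0.5).mean():.3f}")
```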


Filed under: Books, Statistics Tagged: cross validated, Dirichlet distribution, LaTeX, marginalisation, order statistics, Peter Dirichlet, Stack Exchange, stamp

by xi'an at September 26, 2016 12:18 PM

Peter Coles - In the Dark

My First Contribution to the Scientific Literature.

I suddenly realized yesterday that I had forgotten to mark the important anniversary of an event that had immense impact on the field of cosmology. On 15th September 1986, just over thirty years ago, my first ever scientific paper was released into the public domain.

Here is the front page:

[Image: the front page of the paper]

This was before the days of arXiv so there isn’t a copy on the preprint server, but you can access the whole article here on NASA/ADS.

I know it’s a shitty little paper, but you have to start somewhere! I’m particularly sad that, looking back, it reads as if I meant to be very critical of the Kaiser (1984) paper that inspired it. I still think that was a brilliant paper because it was based on a very original idea that proved to be enormously influential. The only point I was really making was that a full calculation of the size of the effect Nick Kaiser had correctly identified was actually quite hard, and his simple approximation was of limited quantitative usefulness. The idea was most definitely right, however.

I was just a year into my DPhil when this paper came out, and it wasn’t actually on what was meant to be the subject of my thesis work (which was the cosmic microwave background), although the material was related. My original version of this paper had my supervisor’s name on it, but he removed his name from the draft (as well as making a huge number of improvements to the text). At the time I naturally assumed that he took his name off because he didn’t want to be associated with such an insignificant paper, but I later realized he was just being generous. It was very good for me to have a sole-author paper very early on. I’ve taken that lesson to heart and have never insisted – as some supervisors do – on putting my name on my students’ work.

Seeing this again after such a long time brought back memories of the tedious job of making and distributing hard copies of preprints when I submitted the paper and sending them by snail mail to prominent individuals and institutions. Everyone did that in those days as email was too limited to send large papers. Nowadays we just shove our papers on the arXiv, complete with fancy graphics, and save ourselves a lot of time and effort.

I was actually surprised that quite a few recipients of my magnum opus were kind enough to respond in writing. In particular I got a nice letter from Dick Bond which began by referring to my “anti-Kaiser” preprint, which made me think he was going to have a go at me, but went on to say that he found my paper interesting and that my conclusions were correct. I was chuffed by that letter as I admired Dick Bond enormously (and still do).

Anyway, over the intervening 30 years this paper has received the princely total of 22 citations – and it hasn’t been cited at all since 2000 – so its scientific impact has hardly been earth-shattering. The field has moved on quickly and left this little relic far behind. However, there is one citation I am proud of.

The great Russian scientist Yakov Borisovich Zel’dovich passed away in 1987. I was a graduate student at that time and had never had the opportunity to meet him. If I had done so I’m sure I would have found him fascinating and intimidating in equal measure, as I admired his work enormously as did everyone I knew in the field of cosmology. Anyway, a couple of years after his death a review paper written by himself and Sergei Shandarin was published, along with the note:

The Russian version of this review was finished in the summer of 1987. By the tragic death of Ya. B.Zeldovich on December 2, 1987, about four-fifths of the paper had been translated into English. Professor Zeldovich would have been 75 years old on March 8, 1989 and was vivid and creative until his last day. The theory of the structure of the universe was one of his favorite subjects, to which he made many note-worthy contributions over the last 20 years.

As one does if one is vain I looked down the reference list to see if any of my papers were cited. I’d only published the one paper before Zel’dovich died so my hopes weren’t high. As it happens, though, my very first paper (Coles 1986) was there in the list:

[Image: the entry for Coles (1986) in the reference list]


by telescoper at September 26, 2016 11:23 AM

September 25, 2016

Christian P. Robert - xi'an's og

contemporary issues in hypothesis testing

This week [at Warwick], among other things, I attended the CRiSM workshop on hypothesis testing, giving the same talk as at ISBA last June. There was a most interesting and unusual talk by Nick Chater (from Warwick) about the psychological aspects of hypothesis testing, namely about the unnatural features of an hypothesis in everyday life, i.e., how far this formalism stands from human psychological functioning. Or what we know about it. And then my Warwick colleague Tom Nichols explained how his recent work on permutation tests for fMRIs, published in PNAS, testing hypotheses that should be null on real data and getting a high rate of false positives, got the medical imaging community all up in arms due to over-simplified reports in the media questioning the validity of 15 years of research on fMRI and the related 40,000 papers! For instance, some of the headings questioned the entire research in the area. Or transformed a software bug missing the boundary effects into a major flaw. (See this podcast on Not So Standard Deviations for a thoughtful discussion on the issue.) One conclusion of this story is to be wary of assertions when submitting a hot story to journals with a substantial non-scientific readership! The afternoon talks were equally exciting, with Andrew explaining to us live from New York why he hates hypothesis testing and prefers model building. With the birthday model as an example. And David Draper gave an encompassing talk about the distinctions between inference and decision, proposing a Jaynes information criterion and illustrating it on Mendel’s historical [and massaged!] pea dataset. The next morning, Jim Berger gave an overview of the frequentist properties of the Bayes factor, with in particular a novel [to me] upper bound on the Bayes factor associated with a p-value (Sellke, Bayarri and Berger, 2001)

B₁₀(p) ≤ -1/(e p log p)

with the specificity that B₁₀(p) is not testing the original hypothesis [problem] but a substitute where the null is the hypothesis that p is uniformly distributed, versus a non-parametric alternative that p is more concentrated near zero. This reminded me of our PNAS paper on the impact of summary statistics upon Bayes factors. And of some forgotten reference studying Bayesian inference based solely on the p-value… It is too bad I had to rush back to Paris, as this made me miss the last talks of this fantastic workshop centred on maybe the most important aspect of statistics!
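
For a sense of scale, here is a two-line illustration (mine, not from the talk) of what the bound gives for a few conventional p-values; it only applies for p < 1/e:

```python
import numpy as np

# Sellke, Bayarri and Berger (2001) bound on the Bayes factor against the
# null that can be associated with a p-value (valid for p < 1/e).
def bayes_factor_bound(p):
    return -1.0 / (np.e * p * np.log(p))

for p in (0.05, 0.01, 0.005):
    print(f"p = {p}:  B10 <= {bayes_factor_bound(p):.2f}")
```

So a p-value of 0.05 can never correspond, under this calibration, to a Bayes factor against the null larger than about 2.5, which is part of why the bound is so striking.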


Filed under: Statistics Tagged: Bayesian hypothesis testing, birthday problem, bounds, brain imaging, CRiSM, decision theory, E.T. Jaynes, England, fMRI, Genetics, Gregor Mendel, ISBA 2016, p-values, permutation tests, PNAS, podcast, psychology, University of Warwick, Zeeman building

by xi'an at September 25, 2016 10:16 PM

Sean Carroll - Preposterous Universe

Live Q&As, Past and Future

On Friday I had a few minutes free, and did an experiment: put my iPhone on a tripod, pointed it at myself, and did a live video on my public Facebook page, taking questions from anyone who happened by. There were some technical glitches, as one might expect from a short-notice happening. The sound wasn’t working when I first started, and in the recording below the video fails (replacing the actual recording with a still image of me sideways, for inexplicable reasons) just when the sound starts working. (I don’t think this happened during the actual event, but maybe it did and everyone was too polite to mention it.) And for some reason the video keeps going long after the 20-some minutes for which I was actually recording.

But overall I think it was fun and potentially worth repeating. If I were to make this an occasional thing, how best to do it? This time around I literally just read off a selection of questions that people were typing into the Facebook comment box. Alternatively, I could just talk on some particular topic, or I could solicit questions ahead of time and pick out some good ones to answer in detail.

What do you folks think? Also — is Facebook Live the right tool for this? I know the kids these days use all sorts of different technologies. No guarantees that I’ll have time to do this regularly, but it’s worth contemplating.

What makes the most sense to talk about in live chats?

by Sean Carroll at September 25, 2016 08:32 PM

Lubos Motl - string vacua and pheno

Chen-Ning Yang against Chinese colliders
The plans to build the world's new greatest collider in China have many prominent supporters – including Shing-Tung Yau, Nima Arkani-Hamed, David Gross, Edward Witten – but SixthTone and South China Morning Post just informed us about a very prominent foe: Chen-Ning Yang, the more famous part of Lee-Yang and Yang-Mills.

He is about 94 years old now but his brain is very active and his influence may even be enough to kill the project(s).



The criticism is mainly framed as a criticism of CEPC (Circular Electron-Positron Collider), a 50-70-kilometer-long [by circumference] lepton accelerator. But I guess that if the relevant people decided to build another hadron machine in China, and recall that SPPC (Super Proton-Proton Collider) is supposed to be located in the same tunnel, his criticism would be about the same. In other words, Yang is against all big Chinese colliders. If you have time, read these 403 pages on the CEPC-SPPC project. Yang may arguably make all this work futile by spitting a few milliliters of saliva.

He wrote his essay for a Chinese newspaper 3 days ago,
China shouldn't build big colliders today (autom. EN; orig. CN)
The journalists frame this opinion as an exchange with Shing-Tung Yau who famously co-wrote a pro-Chinese-collider book.




My Chinese isn't flawless. Also, his opinions and arguments aren't exactly innovative. But let me sketch what he's saying.




He says that Yau has misinterpreted his views when he said that Yang was incomprehensibly against the further progress in high-energy physics. Yang claims to be against the Chinese colliders only. Well, I wouldn't summarize his views in this way after I have read the whole op-ed.

His reasons to oppose the accelerator are:
  1. In Texas, the SSC turned out to be painful and a "bottomless pit" or a "black hole". Yang suggests it must always be the case – well, it wasn't really the case for the LHC. And he suggests that $10-$20 billion is too much.
  2. China is just a developing country. Its GDP per capita is below that of Brazil, Mexico, or Malaysia. There are poor farmers, need to pay for the environment(alism), health, medicine etc. and those should be problems of a higher priority.
  3. The collider would also steal the money from other fields of science.
  4. Supporters of the collider argue that the fundamental theory isn't complete – because gravity is missing and unification hasn't been understood; and they want to find evidence of SUSY. However, Yang is eager to say lots of the usual anti-SUSY and anti-HEP clichés. SUSY has no experimental evidence – funny, that's exactly why people keep on dreaming about more powerful experiments.
  5. High-energy physics hasn't improved human lives in the last 70 years and won't do so. This item is the main one – but not only one – suggesting that the Chinese project isn't the only problem for Yang.
  6. China and IHEP in particular hasn't achieved anything in high-energy physics. Its contributions remain below 1% of the world. Also, if someone gets the Nobel prize for a discovery, he will probably be a non-Chinese.
  7. He recommends cheaper investments – to new ways to accelerate particles; and to work on theory, e.g. string theory.
You can see that it's a mixed bag with some (but not all) of the anti-HEP slogans combined with some left-wing propaganda. I am sorry but especially the social arguments are just bogus.

What decides a country's ability to make a big project is primarily the total GDP, not the GDP per capita. Ancient China built the Great Wall despite the fact that its GDP per capita was much lower than today's GDP per capita. Those people couldn't buy a single Xiaomi Redmi 3 Android smartphone for their salary (I am considering this octa-core $150 smartphone – which seems to be the #1 bestselling Android phone in Czechia now – as a gift). But they still built the wall. And today, Chinese companies are among the most important producers of many high-tech products; I just mentioned one example. As you may see with your naked eyes, this capability in no way contradicts China's low GDP per capita.

The idea that a country makes much social progress by redistributing the money it has among all the people is just a communist delusion. That's how China worked before it started to grow some 30 years ago. You just shouldn't spend or devour all this money – for healthcare of the poor citizens etc. – if you want China to qualitatively improve. You need to invest into sufficiently well-defined things. You may take those $10-$20 billion for the Chinese collider projects and spread them among the Chinese citizens. But that will bring some $10-$20 to each person – perhaps one dinner in a fancier restaurant or one package of good cigarettes. It's normal for the poor people to spend the money in such a way that the wealth quickly evaporates. The concentration of the capital is even more needed in poor countries that want to grow.

Also, China's contribution to HEP physics – and other fields – is limited now. But that's mostly because similar moves and investments that would integrate China to the world's scientific community weren't done in the past or at least they were not numerous.

Yang's remarks about the hypothetical Nobel prizes are specious, too. I don't know who will get Nobel prizes for discoveries at Chinese colliders, if anyone, so it's a pure speculation. But the Nobel prize money is clearly not why colliders are being built. Higgs and Englert got some $1 million from the Nobel committee while the LHC cost $10 billion or so. The prizes can in no way be considered the "repayment of the investments". What the experiments like that bring to science and the mankind is much more than some personal wealth for several people.

You may see that regardless of the recipients of the prize money (and regardless of the disappointing pro-SM results coming from the LHC), everyone understands that because of the LHC and its status, Europe has become essential in the state-of-the-art particle physics. Many people may like to say unfriendly things about particle physics but at the end, I think that they also understand that at least among well-defined and concentrated disciplines, particle physics is the royal discipline of science. A "center of mass" of this discipline is located on the Swiss-French border. In ten years, China could take this leadership from Europe. This would be a benefit for China that is far more valuable than $10-$20 billion. China – whose annual GDP was some $11 trillion in 2015 – is paying much more money for various other things.



Off-topic: Some news reports talk about a new "Madala boson". It seems to be all about this 2-week-old, 5-page-long hep-ph preprint presenting a two-Higgs-doublet model that also claims to say something about the composition of dark matter (which is said to be composed of a new scalar \(\chi\)). I've seen many two-Higgs-doublet papers and papers about dark matter and I don't see a sense in which this paper is more important or more persuasive.

The boson should already be seen in the LHC data but it's not.



Update Chinese collider:

On September 25th or so, Maria Spiropulu linked to this new Chinese article where 2+2 scholars support/dismiss the Chinese collider plans. David Gross' pro-collider story is the most detailed argumentation.

by Luboš Motl (noreply@blogger.com) at September 25, 2016 05:08 AM

John Baez - Azimuth

Struggles with the Continuum (Part 8)

We’ve been looking at how the continuum nature of spacetime poses problems for our favorite theories of physics—problems with infinities. Last time we saw a great example: general relativity predicts the existence of singularities, like black holes and the Big Bang. I explained exactly what these singularities really are. They’re not points or regions of spacetime! They’re more like ways for a particle to ‘fall off the edge of spacetime’. Technically, they are incomplete timelike or null geodesics.

The next step is to ask whether these singularities rob general relativity of its predictive power. The ‘cosmic censorship hypothesis’, proposed by Penrose in 1969, claims they do not.

In this final post I’ll talk about cosmic censorship, and conclude with some big questions… and a place where you can get all these posts in a single file.

Cosmic censorship

To say what we want to rule out, we must first think about what behaviors we consider acceptable. Consider first a black hole formed by the collapse of a star. According to general relativity, matter can fall into this black hole and ‘hit the singularity’ in a finite amount of proper time, but nothing can come out of the singularity.

The time-reversed version of a black hole, called a ‘white hole’, is often considered more disturbing. White holes have never been seen, but they are mathematically valid solutions of Einstein’s equation. In a white hole, matter can come out of the singularity, but nothing can fall in. Naively, this seems to imply that the future is unpredictable given knowledge of the past. Of course, the same logic applied to black holes would say the past is unpredictable given knowledge of the future.

If white holes are disturbing, perhaps the Big Bang should be more so. In the usual solutions of general relativity describing the Big Bang, all matter in the universe comes out of a singularity! More precisely, if one follows any timelike geodesic back into the past, it becomes undefined after a finite amount of proper time. Naively, this may seem a massive violation of predictability: in this scenario, the whole universe ‘sprang out of nothing’ about 14 billion years ago.

However, in all three examples so far—astrophysical black holes, their time-reversed versions and the Big Bang—spacetime is globally hyperbolic. I explained what this means last time. In simple terms, it means we can specify initial data at one moment in time and use the laws of physics to predict the future (and past) throughout all of spacetime. How is this compatible with the naive intuition that a singularity causes a failure of predictability?

For any globally hyperbolic spacetime M, one can find a smoothly varying family of Cauchy surfaces S_t (t \in \mathbb{R}) such that each point of M lies on exactly one of these surfaces. This amounts to a way of chopping spacetime into ‘slices of space’ for various choices of the ‘time’ parameter t. For an astrophysical black hole, the singularity is in the future of all these surfaces. That is, an incomplete timelike or null geodesic must go through all these surfaces S_t before it becomes undefined. Similarly, for a white hole or the Big Bang, the singularity is in the past of all these surfaces. In either case, the singularity cannot interfere with our predictions of what occurs in spacetime.

A more challenging example is posed by the Kerr–Newman solution of Einstein’s equation coupled to the vacuum Maxwell equations. When

e^2 + (J/m)^2 < m^2

this solution describes a rotating charged black hole with mass m, charge e and angular momentum J in units where c = G = 1. However, an electron violates this inequality. In 1968, Brandon Carter pointed out that if the electron were described by the Kerr–Newman solution, it would have a gyromagnetic ratio of g = 2, much closer to the true answer than a classical spinning sphere of charge, which gives g = 1. But since

e^2 + (J/m)^2 > m^2

this solution gives a spacetime that is not globally hyperbolic: it has closed timelike curves! It also contains a ‘naked singularity’. Roughly speaking, this is a singularity that can be seen by arbitrarily faraway observers in a spacetime whose geometry asymptotically approaches that of Minkowski spacetime. The existence of a naked singularity implies a failure of global hyperbolicity.
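
To see just how badly the electron violates this bound, here is a back-of-the-envelope computation (my own, using standard physical constants) that converts the electron's mass, charge and spin angular momentum into geometrized units (lengths, with c = G = 1):

```python
import numpy as np

# Convert the electron's mass, charge and spin to geometrized units (lengths)
# and test the Kerr-Newman condition e^2 + (J/m)^2 < m^2.
G, c, eps0, hbar = 6.674e-11, 2.998e8, 8.854e-12, 1.055e-34
m_e, q_e = 9.109e-31, 1.602e-19

m = G * m_e / c**2                                # mass as a length [m]
e = q_e * np.sqrt(G / (4 * np.pi * eps0)) / c**2  # charge as a length [m]
a = (hbar / 2) * G / c**3 / m                     # J/m with J = hbar/2  [m]

print(f"m   = {m:.2e} m")
print(f"e   = {e:.2e} m")
print(f"J/m = {a:.2e} m")
print("e^2 + (J/m)^2 < m^2 ?", e**2 + a**2 < m**2)  # False, by a huge margin
```

The spin term alone exceeds m^2 by almost ninety orders of magnitude, so an electron modelled literally as a Kerr-Newman solution sits deep in the over-extreme, non-globally-hyperbolic regime rather than describing a black hole.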

The cosmic censorship hypothesis comes in a number of forms. The original version due to Penrose is now called ‘weak cosmic censorship’. It asserts that in a spacetime whose geometry asymptotically approaches that of Minkowski spacetime, gravitational collapse cannot produce a naked singularity.

In 1991, Preskill and Thorne made a bet against Hawking in which they claimed that weak cosmic censorship was false. Hawking conceded this bet in 1997 when a counterexample was found. This features finely-tuned infalling matter poised right on the brink of forming a black hole. It almost creates a region from which light cannot escape—but not quite. Instead, it creates a naked singularity!

Given the delicate nature of this construction, Hawking did not give up. Instead he made a second bet, which says that weak cosmic censorship holds ‘generically’ — that is, for an open dense set of initial conditions.

In 1999, Christodoulou proved that for spherically symmetric solutions of Einstein’s equation coupled to a massless scalar field, weak cosmic censorship holds generically. While spherical symmetry is a very restrictive assumption, this result is a good example of how, with plenty of work, we can make progress in rigorously settling the questions raised by general relativity.

Indeed, Christodoulou has been a leader in this area. For example, the vacuum Einstein equations have solutions describing gravitational waves, much as the vacuum Maxwell equations have solutions describing electromagnetic waves. However, gravitational waves can actually form black holes when they collide. This raises the question of the stability of Minkowski spacetime. Must sufficiently small perturbations of the Minkowski metric go away in the form of gravitational radiation, or can tiny wrinkles in the fabric of spacetime somehow amplify themselves and cause trouble—perhaps even a singularity? In 1993, together with Klainerman, Christodoulou proved that Minkowski spacetime is indeed stable. Their proof fills a 514-page book.

In 2008, Christodoulou completed an even longer rigorous study of the formation of black holes. This can be seen as a vastly more detailed look at questions which Penrose’s original singularity theorem addressed in a general, preliminary way. Nonetheless, there is much left to be done to understand the behavior of singularities in general relativity.

Conclusions

In this series of posts, we’ve seen that in every major theory of physics, challenging mathematical questions arise from the assumption that spacetime is a continuum. The continuum threatens us with infinities! Do these infinities threaten our ability to extract predictions from these theories—or even our ability to formulate these theories in a precise way?

We can answer these questions, but only with hard work. Is this a sign that we are somehow on the wrong track? Is the continuum as we understand it only an approximation to some deeper model of spacetime? Only time will tell. Nature is providing us with plenty of clues, but it will take patience to read them correctly.

For more

To delve deeper into singularities and cosmic censorship, try this delightful book, which is free online:

• John Earman, Bangs, Crunches, Whimpers and Shrieks: Singularities and Acausalities in Relativistic Spacetimes, Oxford U. Press, Oxford, 1993.

To read this whole series of posts in one place, with lots more references and links, see:

• John Baez, Struggles with the continuum.


by John Baez at September 25, 2016 01:00 AM

September 24, 2016

Christian P. Robert - xi'an's og

Jim Harrison (1937-2016)

“The wilderness does not make you forget your normal life as much as it removes the distractions for  proper remembering.” J. Harrison

One of my favourite authors passed away earlier this year and I was not even aware of it! Jim Harrison died from a heart attack in Arizona on March 26. I read Legends of the Fall [for the first time] when I arrived in the US in 1987 and then other [if not all] novels like A good day to die or Wolf

“Barring love, I’ll take my life in large doses alone: rivers, forests, fish, grouse, mountains. Dogs.” J. Harrison

What I liked in those novels was less the plot, which often is secondary—even though the Cervantesque story of the two guys trying to blow up a dam in A good day to die is pure genius!—than the depiction of the characters and their almost always bleak life, as well as the love of the outdoors, in a northern Michigan that is at its heart indistinguishable from (eastern) Canada or central Finland. His tales told of eating and drinking, of womanising, fishing, and hunting, of failed promises and multiple capitulations, tales that are always bawdy and brimming with testosterone, but also with a gruff tenderness for those big hairy guys and their dogs. Especially their dogs. There is a lot of nostalgia seeping through these stories, a longing for a wild rural (almost feral) America that most people will never touch. Or even conceive. But expressed in a melancholic rather than reactionary way. In a superb prose that often sounded like a poem.

“I like grit, I like love and death, I am tired of irony…” J. Harrison

If anything, remembering those great novels makes me long for the most recent books of Harrison I have not [yet] read. Plus the non-fiction book The Raw and the Cooked.


Filed under: Statistics Tagged: A Good Day to Die, Arizona, Jim Harrison, Lake Michigan, Legends of the Fall, Upper Peninsula

by xi'an at September 24, 2016 10:16 PM

Peter Coles - In the Dark

Sonnet No. 73

That time of year thou may’st in me behold 
When yellow leaves, or none, or few, do hang
Upon those boughs which shake against the cold, 
Bare ruin’d choirs, where late the sweet birds sang. 
In me thou see’st the twilight of such day, 
As after sunset fadeth in the west, 
Which by-and-by black night doth take away,
Death’s second self, that seals up all in rest. 
In me thou see’st the glowing of such fire 
That on the ashes of his youth doth lie, 
As the death-bed whereon it must expire 
Consum’d with that which it was nourish’d by. 
   This thou perceivest, which makes thy love more strong,
   To love that well which thou must leave ere long.

by William Shakespeare (1564-1616)


by telescoper at September 24, 2016 05:45 PM

Tommaso Dorigo - Scientificblogging

A Book By Guido Tonelli
Yesterday I read with interest and curiosity some pages of a book on the search and discovery of the Higgs boson, which was published last March by Rizzoli (in Italian only, at least for the time being). The book, authored by physics professor and ex CMS spokesperson Guido Tonelli, is titled "La nascita imperfetta delle cose" ("The imperfect birth of things"). 

read more

by Tommaso Dorigo at September 24, 2016 04:38 PM

September 23, 2016

Christian P. Robert - xi'an's og

trick or treat?!

Two weeks ago, we went to a local restaurant, connected to my running grounds, for dinner. While the setting in a 16th century building that was part of the original Sceaux castle was quite nice, the fare was mediocre and the bill more suited to a one-star Michelin than to dishes I could have cooked myself. The height (or rather bottom) of the meal was a dish of sardines consisting of a half-open pilchard can… Just dumped on a plate with a slice of bread. It could have been a genius stroke from the chef had the sardines been cooked and presented in the can, alas it sounded more like the act of an evil genie! Or more plainly a swindle. As those tasty sardines came straight from the shop!


Filed under: Kids, pictures, Travel, Wines Tagged: Michelin starred restaurant, Parc de Sceaux, restaurant, sardines

by xi'an at September 23, 2016 10:16 PM

astrobites - astro-ph reader's digest

Shooting for the Stars (and damage of doing so)

Title: The Interaction of Relativistic Spacecrafts with the Interstellar Medium

Authors: Thiem Hoang, A. Lazarian, Blakesley Burkhart, and Abraham Loeb

First Author’s Institution: Canadian Institute for Theoretical Astrophysics, University of Toronto


On the doorstep of the Solar System…

On its voyage to Pluto, the New Horizons probe broke the speed record for a spacecraft (and anything humans have ever created, for that matter), traveling at a blistering speed 40 times faster than a bullet. However, even at these Earth-shattering speeds, it would take New Horizons about 75,000 years to reach the distance of Proxima Centauri, the nearest star to our Sun. And now that we know this star has a potentially habitable Earth-like planet orbiting it, tens of thousands of years is just too long to wait.

Thankfully, physicist Stephen Hawking, adventure capitalist Yuri Milner, and Facebook CEO Mark Zuckerberg concocted a better idea. They put their great minds (and loads of money) together to propose the Breakthrough Starshot initiative, a plan to send a fleet of centimeter-sized spacecrafts to the nearest star system. These spacecraft would be accelerated to a significant fraction of the speed of light by the force of radiation pressure from high-powered, Earth-based lasers and light sail technology. Breakthrough Starshot claims that the technology will be developed to accelerate these spacecrafts to 1/5th the speed of light, which means it would only take a quick 20 years to travel the 4 light-years to Proxima Centauri. However, every great idea has complications. Today we'll confront one of the biggest ones: all that stuff that is between Earth and Proxima Centauri.
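As a quick back-of-the-envelope check of those travel times (my own sketch, not from the paper; the 4.24 light-year distance and the ~16 km/s figure for New Horizons are assumed round numbers), the following few lines of Python land close to the 20 years and 75,000 years quoted above:

import math

LIGHT_YEAR_KM = 9.461e12       # kilometres per light-year
SECONDS_PER_YEAR = 3.156e7
DIST_LY = 4.24                 # rough distance to Proxima Centauri, light-years

starshot_years = DIST_LY / 0.2                       # cruising at a fifth of light speed
nh_speed_km_s = 16.0                                 # roughly New Horizons' speed (assumed)
nh_years = DIST_LY * LIGHT_YEAR_KM / (nh_speed_km_s * SECONDS_PER_YEAR)

print(f"Starshot at 0.2c:      ~{starshot_years:.0f} years")     # about 20 years
print(f"At New Horizons' pace: ~{nh_years:,.0f} years")          # tens of thousands of years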


Artist’s rendition of the light sail of Project Starshot. Courtesy of Breakthrough Initiatives.

Another one bites the space dust

On average, the interstellar medium (ISM) contains only 1 atom in every cubic centimeter of space. However, a spacecraft with a 1-square-centimeter cross-section would still run into about two million trillion atoms on its way to Proxima Centauri. Coincidentally, this is about the number of atoms contained in a single grain of salt. Getting hit with a grain of salt's worth of atoms over the course of the journey doesn't sound all that detrimental, but keep in mind that the spacecraft is charging at these atoms at a fifth the speed of light, so it sees them incoming at 60 million meters per second! These tiny atomic bullets can still alter the material structure of the spacecraft by creating microscopic holes, and heat up the spacecraft material by depositing their kinetic energy in the spacecraft's atoms. The bigger the bullet (say, grains of interstellar dust, which are made up of molecules and are about 1000 times larger in diameter than atoms), the bigger the kinetic energy of the impact. Hoang et al. analyze the effects that the gas and dust on the way to Proxima Centauri would have on the Starshot spacecraft.
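That "two million trillion" is just a column-density estimate, and it is easy to reproduce at the order-of-magnitude level (my own check, using the average density quoted above):

# Rough count of ISM atoms swept up by a 1 cm^2 cross-section on the way
# to Proxima Centauri, assuming ~1 atom per cubic centimetre on average.

N_PER_CM3 = 1.0                       # atoms per cubic centimetre (value from the text)
LIGHT_YEAR_CM = 9.461e17              # centimetres per light-year
path_length_cm = 4.24 * LIGHT_YEAR_CM # assumed distance to Proxima Centauri

atoms = N_PER_CM3 * path_length_cm * 1.0   # density x path length x 1 cm^2 area
print(f"Atoms encountered: ~{atoms:.0e}")  # a few times 10^18, i.e. ~"two million trillion"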


Explosive evaporation of spacecraft material, in the frame of reference of the spacecraft. Figure from today’s article.

By studying the effects of gas and dust bombardment on quartz and graphite, the authors gauged which particles would be the most detrimental to the spacecraft, how they would affect it, and what protective measures could be taken to reduce the damage. First they analyzed the effect of interstellar gas. Though hydrogen and helium make up most of the material in the ISM, they found that the heavier and rarer atoms (such as oxygen) would have a more notable effect on the trip to Proxima Centauri. These atoms could produce tiny holes in the spacecraft, called damage tracks, that would penetrate up to a tenth of a millimeter deep.

Dust, though less plentiful than gas in the ISM, was found to be more harmful to the spacecraft over the course of the trip. A collision with a normal interstellar dust grain could provide enough energy to evaporate material at the impact site – a process known as explosive evaporation. Furthermore, the atoms at the impact site become ionized, and the energetic electrons can then transfer their kinetic energy to nearby atoms on the spacecraft, heating this material. Over the course of the journey to Proxima, the impact of such dust could completely erode a half-millimeter layer of the spacecraft, which is more serious than it sounds, since these are only centimeter-scale spacecraft. Subsequent melting from these dust collisions could result in damage another couple of millimeters deeper into the material. The figure below shows some of the main results of the study, plotting the damage by dust and gas on material moving through the ISM at different fractions of the speed of light.


The thickness of surface damaged by dust and gas bombardment for quartz (left) and graphite (right). The x-axis plots the column density (the amount of material along a given line of sight between an observer and an object), with the grey band indicating the column density expected towards Proxima Centauri. Though evaporation by dust is material-independent, graphite is a better conductor which lessens melting by dust and track formation by gas. Figure from today’s article.

What if the spacecraft were unfortunate enough to encounter an abnormally large dust grain? The authors found that grains larger than the width of a silk fiber would completely destroy the gram-scale spacecraft (for reference, the average interstellar dust grain is about 1000 times smaller than this). However, since such large grains are quite rare, they found this concern to be negligible. Based on the quantity of these abnormally large grains in the ISM, the chance that one of these spacecraft would encounter such a grain on its journey to Proxima Centauri is about 10^-50, which is so incredibly unlikely that I can't even think of a real-world analogy.

Interstellar dust buster

So what can we do about this dusty problem? Hoang et al. propose multiple means of protection, such as deflecting the incoming dust grains with an electric field or scattering them off the path of the spacecraft with the radiation pressure of little lasers. However, the best approaches seem to be the simplest. Adding a thin layer of highly-conducting material, such as graphite or beryllium, to the front of the spacecraft would prevent the track formation from gas bombardment. Though it would add weight to the spacecraft, if this layer is a few millimeters thick it would also protect the sensitive components of the spacecraft from explosive cratering and melting by dust. The authors stress that geometrical considerations should also be taken into account. If the spacecraft are needle-like, then they have a smaller cross-sectional area for gas and dust to impact.

Though important to consider, the hindrance of gas and dust on the way to Proxima Centauri is not a deal-breaker for the Starshot initiative. The biggest test will be of our patience, because though 25 years is a blink of an eye for our Universe, I can't say it is the same for me.

by Michael Zevin at September 23, 2016 08:09 PM

Peter Coles - In the Dark

The Worthless University Rankings

The Times Higher World University Rankings were released this week. The main table can be found here and the methodology used to concoct them here.

Here I wish to reiterate the objection I made last year to the way these tables are manipulated year on year to create an artificial “churn” that renders them unreliable and impossible to interpret in an objective way. In other words, they’re worthless. This year, editor Phil Baty has written an article entitled Standing still is not an option in which he makes a statement that “the overall rankings methodology is the same as last year”. Actually it isn’t. In the page on methodology you will find this:

In 2015-16, we excluded papers with more than 1,000 authors because they were having a disproportionate impact on the citation scores of a small number of universities. This year, we have designed a method for reincorporating these papers. Working with Elsevier, we have developed a new fractional counting approach that ensures that all universities where academics are authors of these papers will receive at least 5 per cent of the value of the paper, and where those that provide the most contributors to the paper receive a proportionately larger contribution.

So the methodology just isn’t “the same as last year”. In fact every year that I’ve seen these rankings there’s been some change in methodology. The change above at least attempts to improve on the absurd decision taken last year to eliminate from the citation count any papers arising from large collaborations. In my view, membership of large world-wide collaborations is in itself an indicator of international research excellence, and such papers should if anything be given greater not lesser weight. But whether you agree with the motivation for the change or not is beside the point.

The real question is: how can we be sure that any change in an institution's league table position from year to year is caused by a genuine change in "performance" rather than by methodological tweaks, i.e. by changes in the metrics rather than by changes in the way they are combined? Would you trust the outcome of a medical trial in which the responses of two groups of patients (e.g. one given the medication and the other a placebo) were assessed with two different measurement techniques?

There is an obvious and easy way to test the size of this effect: construct a parallel set of league tables using this year's input data but last year's methodology, which would isolate changes in methodology from changes in the performance indicators. In other words, all the Times Higher need do is publish a set of league tables using, say, the 2015/16 methodology on the 2016/17 data, for comparison with those constructed using this year's methodology on the same data; any differences between the two tables would give a clear indication of the reliability (or otherwise) of the rankings. No scientifically literate person would accept the result of a study of this kind unless the systematic effects could be shown to be under control, yet the Times Higher – along with other purveyors of similar statistical twaddle – refuses to do this.

I challenged the Times Higher to do this last year, and they refused. You can draw your own conclusions about why.
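To make the proposed check concrete, here is a minimal sketch of the exercise (entirely illustrative: the institutions, metric scores and weights below are made up, and the real test would use the published indicator data and the two published methodologies). Hold the input metrics fixed, combine them under two different weighting schemes, and count how many institutions change position for purely methodological reasons.

import random

random.seed(1)

# Made-up indicator scores (0-100 scale) for 20 hypothetical institutions.
institutions = {f"Uni {i:02d}": [random.uniform(20, 100) for _ in range(3)]
                for i in range(20)}

def ranking(weights):
    """Rank institutions by a weighted sum of their (fixed) metric scores."""
    scores = {name: sum(w * m for w, m in zip(weights, metrics))
              for name, metrics in institutions.items()}
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {name: pos for pos, name in enumerate(ordered, start=1)}

old_method = ranking([0.4, 0.3, 0.3])   # "last year's" weighting scheme
new_method = ranking([0.3, 0.3, 0.4])   # "this year's" weighting scheme

moved = sum(old_method[name] != new_method[name] for name in institutions)
print(f"{moved} of {len(institutions)} institutions change position "
      "purely because the weights changed.")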


by telescoper at September 23, 2016 02:09 PM

ZapperZ - Physics and Physicists

Without Direction, or Has No Preferred Direction?
This is why popular news coverage of science can often make subtle mistakes that might change the meaning of something.

This UPI news coverage talks about a recent publication in PRL that studied the CMB and found no large-scale anisotropy in our universe. What this means is that our universe, based on the CMB, is isotropic, i.e. the same in all directions, and that our universe has no detectable rotation.

However, instead of saying that, it keeps harping on the idea that the universe "has no direction". It has directions. In fact, it has infinitely many directions. It is just that it looks the same in all of these directions. Not having a preferred direction, or being isotropic, is not exactly the same as "having no direction".

If you read the APS Physics article accompanying this paper, you'll notice that such a phrase was never used.

I don't know. As a layperson, if you read that UPI news article, what impression does that leave you? Or am I making a mountain out of a mole hill here?

Zz.

by ZapperZ (noreply@blogger.com) at September 23, 2016 11:47 AM

John Baez - Azimuth

Struggles with the Continuum (Part 7)

Combining electromagnetism with relativity and quantum mechanics led to QED. Last time we saw the immense struggles with the continuum this caused. But combining gravity with relativity led Einstein to something equally remarkable: general relativity.

In general relativity, infinities coming from the continuum nature of spacetime are deeply connected to its most dramatic successful predictions: black holes and the Big Bang. In this theory, the density of the Universe approaches infinity as we go back in time toward the Big Bang, and the density of a star approaches infinity as it collapses to form a black hole. Thus we might say that instead of struggling against infinities, general relativity accepts them and has learned to live with them.

General relativity does not take quantum mechanics into account, so the story is not yet over. Many physicists hope that quantum gravity will eventually save physics from its struggles with the continuum! Since quantum gravity is far from being understood, this remains just a hope. This hope has motivated a profusion of new ideas on spacetime: too many to survey here. Instead, I’ll focus on the humbler issue of how singularities arise in general relativity—and why they might not rob this theory of its predictive power.

General relativity says that spacetime is a 4-dimensional Lorentzian manifold. Thus, it can be covered by patches equipped with coordinates, so that in each patch we can describe points by lists of four numbers. Any curve \gamma(s) going through a point then has a tangent vector v whose components are v^\mu = d \gamma^\mu(s)/ds. Furthermore, given two tangent vectors v,w at the same point we can take their inner product

g(v,w) = g_{\mu \nu} v^\mu w^\nu

where as usual we sum over repeated indices, and g_{\mu \nu} is a 4 \times 4 matrix called the metric, depending smoothly on the point. We require that at any point we can find some coordinate system where this matrix takes the usual Minkowski form:

\displaystyle{  g = \left( \begin{array}{cccc} -1 & 0 &0 & 0 \\ 0 & 1 &0 & 0 \\ 0 & 0 &1 & 0 \\ 0 & 0 &0 & 1 \\ \end{array}\right). }

However, as soon as we move away from our chosen point, the form of the matrix g in these particular coordinates may change.

General relativity says how the metric is affected by matter. It does this in a single equation, Einstein’s equation, which relates the ‘curvature’ of the metric at any point to the flow of energy-momentum through that point. To define the curvature, we need some differential geometry. Indeed, Einstein had to learn this subject from his mathematician friend Marcel Grossman in order to write down his equation. Here I will take some shortcuts and try to explain Einstein’s equation with a bare minimum of differential geometry. For how this approach connects to the full story, and a list of resources for further study of general relativity, see:

• John Baez and Emory Bunn, The meaning of Einstein’s equation.

Consider a small round ball of test particles that are initially all at rest relative to each other. This requires a bit of explanation. First, because spacetime is curved, it only looks like Minkowski spacetime—the world of special relativity—in the limit of very small regions. The usual concepts of ’round’ and ‘at rest relative to each other’ only make sense in this limit. Thus, all our forthcoming statements are precise only in this limit, which of course relies on the fact that spacetime is a continuum.

Second, a test particle is a classical point particle with so little mass that while it is affected by gravity, its effects on the geometry of spacetime are negligible. We assume our test particles are affected only by gravity, no other forces. In general relativity this means that they move along timelike geodesics. Roughly speaking, these are paths that go slower than light and bend as little as possible. We can make this precise without much work.

For a path in space, being a geodesic means that if we slightly vary any small portion of it, it can only become longer. However, a path \gamma(s) in spacetime traced out by a particle moving slower than light must be ‘timelike’, meaning that its tangent vector v = \gamma'(s) satisfies g(v,v) < 0. We define the proper time along such a path from s = s_0 to s = s_1 to be

\displaystyle{  \int_{s_0}^{s_1} \sqrt{-g(\gamma'(s),\gamma'(s))} \, ds }

This is the time ticked out by a clock moving along that path. A timelike path is a geodesic if the proper time can only decrease when we slightly vary any small portion of it. Particle physicists prefer the opposite sign convention for the metric, and then we do not need the minus sign under the square root. But the fact remains the same: timelike geodesics locally maximize the proper time.
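To make this concrete, here is a small numerical sketch (my own illustration, with an arbitrarily chosen wiggly path, not something from the post): in flat Minkowski spacetime it integrates \sqrt{-g(\gamma'(s),\gamma'(s))} along a straight path and along a nearby wiggly path joining the same two events, and confirms that the straight path, the geodesic, accumulates more proper time.

import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # Minkowski metric, signature (-,+,+,+), units with c = 1

def proper_time(path, s):
    # path: array of shape (N, 4) holding (t, x, y, z) at each parameter value in s
    v = np.gradient(path, s, axis=0)                 # tangent vectors d(gamma)/ds
    norm2 = np.einsum('ia,ab,ib->i', v, eta, v)      # g(v, v) at each sample point
    integrand = np.sqrt(-norm2)                      # assumes the path is timelike
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(s))   # trapezoid rule

s = np.linspace(0.0, 1.0, 2001)
straight = np.stack([s, 0*s, 0*s, 0*s], axis=1)                   # sit still from t = 0 to t = 1
wiggly = np.stack([s, 0.05*np.sin(2*np.pi*s), 0*s, 0*s], axis=1)  # small spatial wiggle

print(proper_time(straight, s))   # 1.0, the maximum: this path is the geodesic
print(proper_time(wiggly, s))     # slightly less than 1.0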

Actual particles are not test particles! First, the concept of test particle does not take quantum theory into account. Second, all known particles are affected by forces other than gravity. Third, any actual particle affects the geometry of the spacetime it inhabits. Test particles are just a mathematical trick for studying the geometry of spacetime. Still, a sufficiently light particle that is affected very little by forces other than gravity can be approximated by a test particle. For example, an artificial satellite moving through the Solar System behaves like a test particle if we ignore the solar wind, the radiation pressure of the Sun, and so on.

If we start with a small round ball consisting of many test particles that are initially all at rest relative to each other, to first order in time it will not change shape or size. However, to second order in time it can expand or shrink, due to the curvature of spacetime. It may also be stretched or squashed, becoming an ellipsoid. This should not be too surprising, because any linear transformation applied to a ball gives an ellipsoid.

Let V(t) be the volume of the ball after a time t has elapsed, where time is measured by a clock attached to the particle at the center of the ball. Then in units where c = 8 \pi G = 1, Einstein’s equation says:

\displaystyle{  \left.{\ddot V\over V} \right|_{t = 0} = -{1\over 2} \left( \begin{array}{l} {\rm flow \; of \;} t{\rm -momentum \; in \; the \;\,} t {\rm \,\; direction \;} + \\ {\rm flow \; of \;} x{\rm -momentum \; in \; the \;\,} x {\rm \; direction \;} + \\ {\rm flow \; of \;} y{\rm -momentum \; in \; the \;\,} y {\rm \; direction \;} + \\ {\rm flow \; of \;} z{\rm -momentum \; in \; the \;\,} z {\rm \; direction} \end{array} \right) }

These flows are measured at the center of the ball at time zero, and the coordinates used here take advantage of the fact that to first order, at any one point, spacetime looks like Minkowski spacetime.

The flows in Einstein’s equation are the diagonal components of a 4 \times 4 matrix T called the ‘stress-energy tensor’. The components T_{\alpha \beta} of this matrix say how much momentum in the \alpha direction is flowing in the \beta direction through a given point of spacetime. Here \alpha and \beta range from 0 to 3, corresponding to the t,x,y and z coordinates.

For example, T_{00} is the flow of t-momentum in the t-direction. This is just the energy density, usually denoted \rho. The flow of x-momentum in the x-direction is the pressure in the x direction, denoted P_x, and similarly for y and z. You may be more familiar with direction-independent pressures, but it is easy to manufacture a situation where the pressure depends on the direction: just squeeze a book between your hands!

Thus, Einstein’s equation says

\displaystyle{ {\ddot V\over V} \Bigr|_{t = 0} = -{1\over 2} (\rho + P_x + P_y + P_z) }

It follows that positive energy density and positive pressure both curve spacetime in a way that makes a freely falling ball of point particles tend to shrink. Since E = mc^2 and we are working in units where c = 1, ordinary mass density counts as a form of energy density. Thus a massive object will make a swarm of freely falling particles at rest around it start to shrink. In short, gravity attracts.
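As a quick sanity check (my addition, not part of the original argument), specialize to pressureless dust and restore the factors of G and c that were set to one above. With \rho now interpreted as the mass density, Einstein’s equation for the little ball becomes

\displaystyle{  \left.{\ddot V\over V} \right|_{t = 0} = -4 \pi G \rho }

which is exactly what Newtonian gravity gives: for a uniform ball of dust of radius r that starts at rest, the surface accelerates inward at \ddot r = -G M/r^2 = -\frac{4\pi}{3} G \rho \, r, and since V \propto r^3 with \dot r = 0 at t = 0, we get \ddot V / V = 3 \ddot r / r = -4\pi G \rho.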

Already from this, gravity seems dangerously inclined to create singularities. Suppose that instead of test particles we start with a stationary cloud of ‘dust’: a fluid of particles having nonzero energy density but no pressure, moving under the influence of gravity alone. The dust particles will still follow geodesics, but they will affect the geometry of spacetime. Their energy density will make the ball start to shrink. As it does, the energy density \rho will increase, so the ball will tend to shrink ever faster, approaching infinite density in a finite amount of time. This in turn makes the curvature of spacetime become infinite in a finite amount of time. The result is a ‘singularity’.

In reality, matter is affected by forces other than gravity. Repulsive forces may prevent gravitational collapse. However, this repulsion creates pressure, and Einstein’s equation says that pressure also creates gravitational attraction! In some circumstances this can overwhelm whatever repulsive forces are present. Then the matter collapses, leading to a singularity—at least according to general relativity.

When a star more than 8 times the mass of our Sun runs out of fuel, its core suddenly collapses. The surface is thrown off explosively in an event called a supernova. Most of the energy—the equivalent of thousands of Earth masses—is released in a ten-second burst of neutrinos, formed as a byproduct when protons and electrons combine to form neutrons. If the star’s mass is below 20 times that of our Sun, its core crushes down to a large ball of neutrons with a crust of iron and other elements: a neutron star.

However, this ball is unstable if its mass exceeds the Tolman–Oppenheimer–Volkoff limit, somewhere between 1.5 and 3 times that of our Sun. Above this limit, gravity overwhelms the repulsive forces that hold up the neutron star. And indeed, no neutron stars heavier than 3 solar masses have been observed. Thus, for very heavy stars, the endpoint of collapse is not a neutron star, but something else: a black hole, an object that bends spacetime so much even light cannot escape.

If general relativity is correct, a black hole contains a singularity. Many physicists expect that general relativity breaks down inside a black hole, perhaps because of quantum effects that become important in strong gravitational fields. The singularity is considered a strong hint that this breakdown occurs. If so, the singularity may be a purely theoretical entity, not a real-world phenomenon. Nonetheless, everything we have observed about black holes matches what general relativity predicts. Thus, unlike all the other theories we have discussed, general relativity predicts infinities that are connected to striking phenomena that are actually observed.

The Tolman–Oppenheimer–Volkoff limit is not precisely known, because it depends on properties of nuclear matter that are not well understood. However, there are theorems that say singularities must occur in general relativity under certain conditions.

One of the first was proved by Raychaudhuri and Komar in the mid-1950s. It applies only to ‘dust’, and indeed it is a precise version of our verbal argument above. It introduced Raychaudhuri’s equation, which gives a geometrical way of thinking about how spacetime curvature affects the motion of a small ball of test particles. It shows that under suitable conditions, the energy density must approach infinity in a finite amount of time along the path traced out by a dust particle.

The first required condition is that the flow of dust be initially converging, not expanding. The second condition, not mentioned in our verbal argument, is that the dust be ‘irrotational’, not swirling around. The third condition is that the dust particles be affected only by gravity, so that they move along geodesics. Due to the last two conditions, the Raychaudhuri–Komar theorem does not apply to collapsing stars.

The more modern singularity theorems eliminate these conditions. But they do so at a price: they require a more subtle concept of singularity! There are various possible ways to define this concept. They’re all a bit tricky, because a singularity is not a point or region in spacetime.

For our present purposes, we can define a singularity to be an ‘incomplete timelike or null geodesic’. As already explained, a timelike geodesic is the kind of path traced out by a test particle moving slower than light. Similarly, a null geodesic is the kind of path traced out by a test particle moving at the speed of light. We say a geodesic is ‘incomplete’ if it ceases to be well-defined after a finite amount of time. For example, general relativity says a test particle falling into a black hole follows an incomplete geodesic. In a rough-and-ready way, people say the particle ‘hits the singularity’. But the singularity is not a place in spacetime. What we really mean is that the particle’s path becomes undefined after a finite amount of time.

We need to be a bit careful about what we mean by ‘time’ here. For test particles moving slower than light this is easy, since we can parametrize a timelike geodesic by proper time. However, the tangent vector v = \gamma'(s) of a null geodesic has g(v,v) = 0, so a particle moving along a null geodesic does not experience any passage of proper time. Still, any geodesic, even a null one, has a family of preferred parametrizations. These differ only by changes of variable like this: s \mapsto as + b. By ‘time’ we really mean the variable s in any of these preferred parametrizations. Thus, if our spacetime is some Lorentzian manifold M, we say a geodesic \gamma \colon [s_0, s_1] \to M is incomplete if, parametrized in one of these preferred ways, it cannot be extended to a strictly longer interval.

The first modern singularity theorem was proved by Penrose in 1965. It says that if space is infinite in extent, and light becomes trapped inside some bounded region, and no exotic matter is present to save the day, either a singularity or something even more bizarre must occur. This theorem applies to collapsing stars. When a star of sufficient mass collapses, general relativity says that its gravity becomes so strong that light becomes trapped inside some bounded region. We can then use Penrose’s theorem to analyze the possibilities.

Shortly thereafter Hawking proved a second singularity theorem, which applies to the Big Bang. It says that if space is finite in extent, and no exotic matter is present, generically either a singularity or something even more bizarre must occur. The singularity here could be either a Big Bang in the past, a Big Crunch in the future, both—or possibly something else. Hawking also proved a version of his theorem that applies to certain Lorentzian manifolds where space is infinite in extent, as seems to be the case in our Universe. This version requires extra conditions.

There are some undefined phrases in this summary of the Penrose–Hawking singularity theorems, most notably these:

• ‘exotic matter’

• ‘singularity’

• ‘something even more bizarre’.

So, let me say a bit about each.

These singularity theorems precisely specify what is meant by ‘exotic matter’. This is matter for which

\rho + P_x + P_y + P_z < 0

at some point, in some coordinate system. By Einstein’s equation, this would make a small ball of freely falling test particles tend to expand. In other words, exotic matter would create a repulsive gravitational field. No matter of this sort has ever been found; the matter we know obeys the so-called ‘strong energy condition’

\rho + P_x + P_y + P_z \ge 0

The Penrose–Hawking singularity theorems also say what counts as ‘something even more bizarre’. An example would be a closed timelike curve. A particle following such a path would move slower than light yet eventually reach the same point where it started—and not just the same point in space, but the same point in spacetime! If you could do this, perhaps you could wait, see if it would rain tomorrow, and then go back and decide whether to buy an umbrella today. There are certainly solutions of Einstein’s equation with closed timelike curves. The first interesting one was found by Einstein’s friend Gödel in 1949, as part of an attempt to probe the nature of time. However, closed timelike curves are generally considered less plausible than singularities.

In the Penrose–Hawking singularity theorems, ‘something even more bizarre’ means that spacetime is not ‘globally hyperbolic’. To understand this, we need to think about when we can predict the future or past given initial data. When studying field equations like Maxwell’s theory of electromagnetism or Einstein’s theory of gravity, physicists like to specify initial data on space at a given moment of time. However, in general relativity there is considerable freedom in how we choose a slice of spacetime and call it ‘space’. What should we require? For starters, we want a 3-dimensional submanifold S of spacetime that is ‘spacelike’: every vector v tangent to S should have g(v,v) > 0. However, we also want any timelike or null curve to hit S exactly once. A spacelike surface with this property is called a Cauchy surface, and a Lorentzian manifold containing a Cauchy surface is said to be globally hyperbolic. There are many theorems justifying the importance of this concept. Global hyperbolicity excludes closed timelike curves, but also other bizarre behavior.

By now the original singularity theorems have been greatly generalized and clarified. Hawking and Penrose gave a unified treatment of both theorems in 1970. The 1973 textbook by Hawking and Ellis gives a systematic introduction to this subject. Hawking gave an elegant informal overview of the key ideas in 1994, and a paper by Garfinkle and Senovilla reviews the subject and its history up to 2015.

If we accept that general relativity really predicts the existence of singularities in physically realistic situations, the next step is to ask whether they rob general relativity of its predictive power. I’ll talk about that next time!


by John Baez at September 23, 2016 01:00 AM

September 22, 2016

astrobites - astro-ph reader's digest

Black holes and populations

TITLE: Stellar populations across the black hole mass – velocity dispersion relation
AUTHORS: Ignacio Martín-Navarro, Jean P. Brodie, Remco C. E. van den Bosch, Aaron J. Romanowsky, and Duncan J. Forbes
FIRST AUTHOR INSTITUTION: University of California Observatories
STATUS: Accepted for publication in the Astrophysical Journal Letters

Introduction:

A supermassive black hole is a kind of cosmic parasite that preys on galaxies. As the host galaxy grows larger, so too does the black hole, consuming gas that would otherwise be turned into stars. Worse, it guzzles gas so quickly that the gas surging down its gravity well gets superheated; the hot gas radiates strongly, heating up surrounding gas and driving it away from the black hole. This prevents it from being turned into stars, causing the galaxy to starve. This effect actually sets a limit on how fast a black hole can grow. To cap it off, it doesn’t always finish its meal, launching jets of material clear of the galaxy which expel yet more gas (there’s an obvious simile here, which I’m not going to employ for reasons of good taste).

Naturally, all this has a profound effect on the host galaxy and its stellar population, ultimately shutting down star formation. The bigger the galaxy, the bigger the black hole – and the more aggressive it gets. A negative feedback loop is created which causes those galaxies that grow quickest to also fail quickest, a kind of cosmic ‘boom and bust‘. This leads to a tight correlation between the mass of a galaxy’s supermassive black hole and its total mass (more precisely with its velocity dispersion, which is just a stand-in for galaxy mass).

In today’s paper, the authors search for clues as to how the presence of a supermassive black hole affected the formation of the stars that did make it before the gas supply was cut off.

How it works:

We know that just as there is a connection between galaxy mass and black hole mass, so too is there a connection between galaxy mass and the chemical makeup of stars. The most massive galaxies are ‘alpha-enhanced’ – a term that, like most technical language, packs in a lot of detail. Alpha elements are common elements created by sticking together a bunch of alpha particles (Helium nuclei). Their atoms therefore have atomic masses divisible by four: Oxygen, Neon, Magnesium, Silicon, Sulphur, Argon, Calcium, and Titanium, in case you haven’t got a periodic table to hand.

(Carbon doesn’t count. It’s more like the seed to which you attach alpha particles in order to make the legitimate alpha elements. Making Carbon is hard, but once you’ve got some making alpha elements is easy. Sorry Carbon.)

There’s another class of elements called ‘iron peak’ elements. Heavier elements tend to be rarer but there’s an exception for elements with atomic numbers similar to iron. When a galaxy is alpha-enhanced, it means that the alpha elements are more abundant relative to the iron peak elements than they are, for example, in the Sun. When this happens it’s actually telling us something interesting about the history of that galaxy. All these elements are formed in stars and released back into the wild via supernova explosions, which come in two types. Core-collapse supernovae are what you’re probably most familiar with: in these a massive star runs out of fuel, can no longer support itself against its own gravity and so collapses … before rebounding in a huge explosion. The massive stars that end in these explosions don’t last long in cosmic terms, sometimes only a few million years. By contrast, other supernovae occur when a white dwarf star (the kind of thing our Sun will eventually turn into) grows above a critical mass, probably due to wrenching material away from another star or merging with another white dwarf. These supernovae almost exclusively make iron peak elements but can’t possibly occur until you actually have some white dwarfs. That means there is a delay of several billion years while you wait for stars like the Sun to reach the end of their lives.

This time delay is critical. If you grow a galaxy steadily, over billions of years, these late supernovae start to go off. This seeds the galaxy with iron peak elements while it’s still making stars. If you grow your galaxy quickly, they still go off, but it’s too late: star formation is finished and those heavy elements sail off into the void. In either case the core-collapse supernovae go off quickly, so you make lots of alpha elements. In summary, alpha-enhancement means you formed your stars very quickly. It’s therefore significant that the most massive galaxies have massive black holes (which we think cut star formation off early on) and are also alpha-enhanced: the two effects are directly related.

Today’s paper:

In order to isolate the effect of the black hole, the authors look at the outliers – galaxies whose black holes are a bit more/less massive than expected given the size of the galaxy. This is shown in Figure 1. Their sample spans a wide range of masses and other galaxy properties, the idea being that the particular effect of the black hole can be isolated in this way.


The authors’ sample of galaxies. The usual relationship between black hole mass and galaxy mass (velocity dispersion, on the x-axis, is just an indirect measurement of this) is plotted as a thick black line. Galaxies with slightly overweight central black holes are in orange, whilst those with comparatively light black holes are in blue (galaxies represented by circles are more compact than those represented by stars, but that’s not very important here). Figure 1 from the paper.

The authors find a clear correlation between underweight black holes and younger stellar populations, bearing out the idea that less massive central black holes are not so good at cutting off star formation. In the ‘blue’ galaxies in Fig. 1 black hole growth has lagged behind galaxy growth for some reason, which has meant those galaxies were able to form stars for longer.


Both central black hole mass and the production of alpha elements (see text) such as Magnesium, Mg, are related to galaxy mass. Here we see there is a more fundamental, direct connection between the two: those galaxies with slightly more massive central black holes than expected are also slightly more abundant in Mg than expected (and vice versa). The axes show the excess black hole mass and excess Mg enhancement respectively. Figure 3 from the paper.

I said earlier that both central black hole mass and enhanced production of alpha elements are linked to total galaxy mass. What the authors are able to show is that there is a more fundamental direct link between the two (see Figure 2); this confirms that these correlations are no coincidence but really do arise from the theory I sketched out. The authors have shown us the direct effects of black hole feedback on the stellar populations of their host galaxies.

by Paddy Alton at September 22, 2016 08:44 PM

Emily Lakdawalla - The Planetary Society Blog

Juno and Marble Movie update at Apojove 1
Juno is on its second of two long orbits around Jupiter, reaching apojove (its farthest distance from the planet) today.

September 22, 2016 05:51 PM

Peter Coles - In the Dark

End of Summer, Start of Autumn

It’s a lovely warm sunny day in Cardiff today, but it is nevertheless the end of summer. The autumnal equinox came and went today (22nd September) at 14.21 Universal Time (that’s 15.21 British Summer Time), so from now on it’s all downhill (in that the Subsolar point has just crossed the equator on the southward journey it began at the Summer Solstice).

Many people adopt the autumnal equinox as the official start of autumn, but I go for an alternative criterion: summer is over when the County Championship is over. It turns out that, at least for Glamorgan, that coincided very closely with the equinox. Having bowled out Leicestershire for a paltry 96 at Grace Road in the first innings of their final Division 2 match, they went on to establish a handy first-innings lead of 103. They were then set a modest second-innings target of 181 to win. Unfortunately, their batting frailties were once again cruelly exposed and they collapsed from 144 for 4 to 154 all out and lost by 26 runs. That abject batting display sums up their season really.

Meanwhile, in Division 1 of the Championship, Middlesex are playing Yorkshire at Lord’s, a match whose outcome will determine who wins the Championship. Middlesex only need to draw to be champions, but as I write they’ve just lost an early wicket in their second innings, with Yorkshire having a first-innings lead of 120, so it’s by no means out of the question that Yorkshire might win and be champions again.

Another sign that summer is over is that the new cohort of students has arrived. This being “Freshers’ Week”, there have been numerous events arranged to introduce them to various aspects of university life. Lectures proper begin on Monday, when the Autumn Semester starts in earnest. I don’t have any teaching until the Spring.

This time of year always reminds me of when I left home to go to University, as thousands of fledgling students have just done. I went through this rite of passage 34 years ago, getting on a train at Newcastle Central station with my bags of books and clothes. I said goodbye to my parents there. There was never any question of them taking me in the car all the way to Cambridge. It wasn’t practical and I wouldn’t have wanted them to do it anyway. After changing from the Inter City at Peterborough onto a local train, me and my luggage trundled through the flatness of East Anglia until we reached Cambridge.

I don’t remember much about the actual journey, but I must have felt a mixture of fear and excitement. Nobody in my family had ever been to University before, let alone to Cambridge. Come to think of it, nobody from my family has done so since either. I was a bit worried about whether the course I would take in Natural Sciences would turn out to be very difficult, but I think my main concern was how I would fit in generally.

I had been working between leaving school and starting my undergraduate course, so I had some money in the bank and I was also to receive a full grant. I wasn’t really worried about cash. But I hadn’t come from a posh family and didn’t really know the form. I didn’t have much experience of life outside the North East either. I’d been to London only once before going to Cambridge, and had never been abroad.

I didn’t have any posh clothes, a deficiency I thought would mark me as an outsider. I had always been grateful for having to wear a school uniform (which was bought with vouchers from the Council) because it meant that I dressed the same as the other kids at School, most of whom came from much wealthier families. But this turned out not to matter at all. Regardless of their family background, students were generally a mixture of shabby and fashionable, like they are today. Physics students in particular didn’t even bother with the fashionable bit. Although I didn’t have a proper dinner jacket for the Matriculation Dinner, held for all the new undergraduates, nobody said anything about my dark suit which I was told would be acceptable as long as it was a “lounge suit”. Whatever that is.

Taking a taxi from Cambridge station, I finally arrived at Magdalene College. I waited outside, a bundle of nerves, before entering the Porter’s Lodge and starting my life as a student. My name was found and ticked off and a key issued for my room in the Lutyens building. It turned out to be a large room, with a kind of screen that could be pulled across to divide the room into two, although I never actually used this contraption. There was a single bed and a kind of cupboard containing a sink and a mirror in the bit that could be hidden by the screen. The rest of the room contained a sofa, a table, a desk, and various chairs, all of them quite old but solidly made. Outside my  room, on the landing, was the gyp room, a kind of small kitchen, where I was to make countless cups of tea over the following months, although I never actually cooked anything there.

I struggled in with my bags and sat on the bed. It wasn’t at all like I had imagined. I realised that no amount of imagining would ever really have prepared me for what was going to happen at University.

I  stared at my luggage. I suddenly felt like I had landed on a strange island, and couldn’t remember why I had gone there or what I was supposed to be doing.

After 34 years you get used to that feeling…

 


by telescoper at September 22, 2016 03:27 PM

CERN Bulletin

CERN Bulletin Issue No. 38-39/2016
Link to e-Bulletin Issue No. 38-39/2016. Link to all articles in this issue.

September 22, 2016 08:51 AM

Emily Lakdawalla - The Planetary Society Blog

Where to find rapidly released space image data
Interested in playing with recent space image data? Here's a list of places to get the freshest photos from space.

September 22, 2016 12:06 AM

September 21, 2016

Clifford V. Johnson - Asymptotia

Super Nailed It…

On the sofa, during a moment while we watched Captain America: Civil War over the weekend:

Amy: Wait, what...? Why's Cat-Woman in this movie?
Me: Er... (hesitating, not wanting to spoil what is to come...)
Amy: Isn't she a DC character?
Me: Well... (still hesitating, but secretly impressed by her awareness of the different universes... hadn't realized she was paying attention all these years.)
Amy: So who's going to show up next? Super-Dude? Bat-Fella? Wonder-Lady? (Now she's really showing off and poking fun.)
Me: We'll see... (Now choking with laughter on dinner...)

I often feel bad subjecting my wife to this stuff, but this alone was worth it.

For those who know the answers and are wondering, I held off on launching into a discussion about the fascinating history of Marvel, representation of people of African descent in superhero comics (and now movies and TV), the [...] Click to continue reading this post

The post Super Nailed It… appeared first on Asymptotia.

by Clifford at September 21, 2016 07:00 PM

Symmetrybreaking - Fermilab/SLAC

Small cat, big science

The proposed International Linear Collider has a fuzzy new ally.

Hello Kitty is known throughout Japan as the poster girl (poster cat?) of kawaii, a segment of pop culture built around all things cute.

But recently she took on a new job: representing the proposed International Linear Collider.

At the August International Conference on High Energy Physics in Chicago, ILC boosters passed out folders featuring the white kitty wearing a pair of glasses, a shirt with pens in the pocket and a bow with an L for “Lagrangian,” the name of the long equation in the background. Some picture the iconic cat sitting on an ILC cryomodule.

Hello Kitty has previously tried activities such as cooking, photography and even scuba diving. This may be her first foray into international research.

Japan is considering hosting the ILC, a proposed accelerator that could mass-produce Higgs bosons and other fundamental particles. Japan’s Advanced Accelerator Association partnered with the company Sanrio to create the special kawaii gear in the hopes of drawing attention to the large-scale project.

The ILC: Science you’ll want to snuggle.

by Ricarda Laasch at September 21, 2016 04:15 PM

ZapperZ - Physics and Physicists

Recap of ICHEP 2016
If you missed the recent brouhaha about the missing 750 GeV bump, here is a recap of the ICHEP conference held recently in Chicago.

Zz.

by ZapperZ (noreply@blogger.com) at September 21, 2016 01:21 PM

Emily Lakdawalla - The Planetary Society Blog

Five things we learned from our #RocketRoadTrip
We're back from our #RocketRoadTrip through four states with NASA field centers involved in the agency's Journey to Mars program. We'll be sorting through our material for quite some time, but meanwhile, here are five key things we learned.

September 21, 2016 11:02 AM

Lubos Motl - string vacua and pheno

Nanopoulos' and pals' model is back to conquer the throne
Once upon a time, there was an evil witch-and-bitch named Cernette whose mass was \(750\GeV\) and who wanted to become the queen instead of the beloved king.



Fortunately, that witch-and-bitch has been killed and what we're experiencing is
The Return of the King: No-Scale \({\mathcal F}\)-\(SU(5)\),
Li, Maxin, and Nanopoulos point out. It's great news that the would-be \(750\GeV\) particle has been liquidated. They revisited the predictions of their class of F-theory-based, grand unified, no-scale models and found some consequences that they surprisingly couldn't have told us about in the previous 10 papers and that we should be happy about, anyway.




First, they suddenly claim that the theoretical considerations within their scheme are enough to assert that the mass of the gluino exceeds \(1.9\TeV\),

\[
m_{\tilde g} \geq 1.9\TeV.
\]

This is an excellent, confirmed prediction of a supersymmetric theory because the LHC experiments also say that with these conventions, the mass of the gluino exceeds \(1.9\TeV\). ;-)




Just to be sure, I did observe the general gradual increase of the masses predicted by their models so I don't take the newest ones too seriously. But I believe that there is still some justification so the probability could be something like 0.1% that in a year or two, we will consider their model to be a strong contender that has been partly validated by the experiments.

In the newest paper, they want the Higgs and top mass to be around

\[
m_h\approx 125\GeV, \quad m_{\rm top} \approx 174\GeV
\]

while the new SUSY-related parameters are

\[
\eq{
\tan\beta &\approx 25\\
M_V^{\rm flippon}&\approx (30-80)\TeV\\
M_{\chi^1_0}&\approx 380\GeV\\
M_{\tilde \tau^\pm} &\approx M_{\chi^1_0}+1 \GeV\\
M_{\tilde t_1} &\approx 1.7\TeV\\
M_{\tilde u_R} &\approx 2.7\TeV\\
M_{\tilde g} &\approx 2.1\TeV
}
\]

while the cosmological parameter \(\Omega h^2\approx 0.118\), the anomalous muon's magnetic moment \(\Delta a_\mu\approx 2\times 10^{-10}\), the branching ratio of a bottom decay \(Br(b\to s\gamma)\approx 0.00035\), the muon pair branching ratio for a B-meson \(Br(B^0_s\to \mu^+\mu^-)\approx 3.2\times 10^{-9}\), the spin-independent cross section \(\sigma_{SI}\approx (1.0-1.5)\times 10^{-11}\,{\rm pb}\) and \(\sigma_{SD} \approx (4-6)\times 10^{-9}\,{\rm pb}\), and the proton lifetime

\[
\tau (p\to e^+ \pi^0) \approx 1.3\times 10^{35}\,{\rm years}.
\]

Those are cool, specific predictions that are almost independent of the choice of the point in their parameter space. If one takes those claims seriously, theirs is a highly predictive theory.

But one reason I wrote this blog post was their wonderfully optimistic, fairy-tale-styled rhetoric. For example, the second part of their conclusions says:
While SUSY enthusiasts have endured several setbacks over the prior few years amidst the discouraging results at the LHC in the search for supersymmetry, it is axiomatic that as a matter of course, great triumph emerges from momentary defeat. As the precession of null observations at the LHC has surely dampened the spirits of SUSY proponents, the conclusion of our analysis here indicates that the quest for SUSY may just be getting interesting.
So dear SUSY proponents, just don't despair, return to your work, and get ready for the great victory.



Off-topic: Santa Claus is driving a Škoda and he parks on the roofs whenever he brings gifts to kids in the Chinese countryside. What a happy driver.

by Luboš Motl (noreply@blogger.com) at September 21, 2016 10:54 AM

CERN Bulletin

There’s more to particle physics at CERN than colliders

CERN’s scientific programme must be compelling, unique, diverse, and integrated into the global landscape of particle physics. One of the Laboratory’s primary goals is to provide a diverse range of excellent physics opportunities and to put its unique facilities to optimum use, maximising the scientific return.

 

In this spirit, we have recently established a Physics Beyond Colliders study group with a mandate to explore the unique opportunities offered by the CERN accelerator complex to address some of today’s outstanding questions in particle physics through projects complementary to high-energy colliders and other initiatives in the world. The study group will provide input to the next update of the European Strategy for Particle Physics.

The process kicked off with a two-day workshop at CERN on 6 and 7 September, organised by the study group conveners: Joerg Jaeckel (Heidelberg), Mike Lamont (CERN) and Claude Vallée (CPPM Marseille and DESY). Its purpose was to present experimental and theoretical ideas, and to hear proposals for compelling experiments that can be done at the extremely versatile CERN accelerator complex. From the linacs to the SPS, CERN accelerators are able to deliver high-intensity beams across a broad range of energies, particle types and time structure.

Over 300 people attended the workshop, some three quarters coming from outside CERN. The call for proposals resulted in around 30 submissions for talks, with about two thirds of those being discussed at the workshop. It was interesting to see a spirit of collaborative competition, the hallmark of our field, building up as the workshop progressed. The proposals addressed questions of fundamental physics using approaches complementary to those for which colliders are best adapted. They covered, among others, searches for dark-sector particles, measurements of the proton electric dipole moment, studies of ultra-rare decays, searches for axions, and many more.

The next step for the study group is to organise the work to develop and consolidate the ideas that were heard at the workshop and others that can be put forward in the coming months. Working groups will examine the physics case and technical feasibility in the global context: indeed, carrying out research here that could be done elsewhere does not allow for the best use of the discipline’s resources globally.

I’m looking forward to following the interactions and activities that these working groups will foster over the coming years, and to reading the report that will be delivered in 2018 to inform the next European Strategy update. There’s a bright future, I’m sure, for physics beyond - and alongside - colliders at CERN.

Fabiola Gianotti

September 21, 2016 09:09 AM

John Baez - Azimuth

Struggles with the Continuum (Part 6)

Last time I sketched how physicists use quantum electrodynamics, or ‘QED’, to compute answers to physics problems as power series in the fine structure constant, which is

\displaystyle{ \alpha = \frac{1}{4 \pi \epsilon_0} \frac{e^2}{\hbar c} \approx \frac{1}{137.036} }
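For concreteness, plugging standard values of the constants into this formula reproduces the quoted number; here is a trivial numerical check (my addition) in Python:

import math

# Standard SI values of the constants
e = 1.602176634e-19        # elementary charge, C
hbar = 1.054571817e-34     # reduced Planck constant, J s
c = 2.99792458e8           # speed of light, m/s
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(alpha, 1 / alpha)    # about 0.0072974 and 137.036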

I concluded with a famous example: the magnetic moment of the electron. With a truly heroic computation, physicists have used QED to compute this quantity up to order \alpha^5. If we also take other Standard Model effects into account we get agreement to roughly one part in 10^{12}.

However, if we continue adding up terms in this power series, there is no guarantee that the answer converges. Indeed, in 1952 Freeman Dyson gave a heuristic argument that makes physicists expect that the series diverges, along with most other power series in QED!

The argument goes as follows. If these power series converged for small positive \alpha, they would have a nonzero radius of convergence, so they would also converge for small negative \alpha. Thus, QED would make sense for small negative values of \alpha, which correspond to imaginary values of the electron’s charge. If the electron had an imaginary charge, electrons would attract each other electrostatically, since the usual repulsive force between them is proportional to e^2. Thus, if the power series converged, we would have a theory like QED for electrons that attract rather than repel each other.

However, there is a good reason to believe that QED cannot make sense for electrons that attract. The reason is that it describes a world where the vacuum is unstable. That is, there would be states with arbitrarily large negative energy containing many electrons and positrons. Thus, we expect that the vacuum could spontaneously turn into electrons and positrons together with photons (to conserve energy). Of course, this is not a rigorous proof that the power series in QED diverge: just an argument that it would be strange if they did not.

To see why electrons that attract could have arbitrarily large negative energy, consider a state \psi with a large number N of such electrons inside a ball of radius R. We require that these electrons have small momenta, so that nonrelativistic quantum mechanics gives a good approximation to the situation. Since its momentum is small, the kinetic energy of each electron is a small fraction of its rest energy m_e c^2. If we let \langle \psi, E \psi\rangle be the expected value of the total rest energy and kinetic energy of all the electrons, it follows that \langle \psi, E\psi \rangle is approximately proportional to N.

The Pauli exclusion principle puts a limit on how many electrons with momentum below some bound can fit inside a ball of radius R. This number is asymptotically proportional to the volume of the ball. Thus, we can assume N is approximately proportional to R^3. It follows that \langle \psi, E \psi \rangle is approximately proportional to R^3.

There is also the negative potential energy to consider. Let V be the operator for potential energy. Since we have N electrons attracted by a 1/r potential, and each pair contributes to the potential energy, we see that \langle \psi , V \psi \rangle is approximately proportional to -N^2 R^{-1}, or -R^5. Since R^5 grows faster than R^3, we can make the expected energy \langle \psi, (E + V) \psi \rangle arbitrarily large and negative as N,R \to \infty.
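
The scaling in this argument is easy to tabulate explicitly. Here is a minimal sketch (Python, not from the original post; the prefactors are arbitrary placeholders, since only the powers of R matter):

# Toy illustration of Dyson's instability argument: with N ~ R^3 electrons in a
# ball of radius R, the rest-plus-kinetic energy grows like R^3 while the
# attractive potential energy grows like -N^2/R ~ -R^5, so the total energy is
# unbounded below.  The prefactors are arbitrary; only the powers of R matter.

for R in (1, 2, 5, 10, 100):
    N = R**3                 # number of low-momentum electrons the Pauli principle allows
    E = 1.0 * N              # rest + kinetic energy ~ N ~ R^3
    V = -1.0 * N**2 / R      # attractive potential energy ~ -N^2 / R ~ -R^5
    print(f"R = {R:3d}:  E ~ {E:10.1e},  V ~ {V:10.1e},  E + V ~ {E + V:10.1e}")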

Note the interesting contrast between this result and some previous ones we have seen. In Newtonian mechanics, the energy of particles attracting each other with a 1/r potential is unbounded below. In quantum mechanics, thanks to the uncertainty principle, the energy is bounded below for any fixed number of particles. However, quantum field theory allows for the creation of particles, and this changes everything! Dyson’s disaster arises because the vacuum can turn into a state with arbitrarily large numbers of electrons and positrons. This disaster only occurs in an imaginary world where \alpha is negative—but it may be enough to prevent the power series in QED from having a nonzero radius of convergence.

We are left with a puzzle: how can perturbative QED work so well in practice, if the power series in QED diverge?

Much is known about this puzzle. There is an extensive theory of ‘Borel summation’, which allows one to extract well-defined answers from certain divergent power series. For example, consider a particle of mass m on a line in a potential

V(x) = x^2 + \beta x^4

When \beta \ge 0 this potential is bounded below, but when \beta < 0 it is not: classically, it describes a particle that can shoot to infinity in a finite time. Let H = K + V be the quantum Hamiltonian for this particle, where K is the usual operator for the kinetic energy and V is the operator for potential energy. When \beta \ge 0, the Hamiltonian H is essentially self-adjoint on the set of smooth wavefunctions that vanish outside a bounded interval. This means that the theory makes sense. Moreover, in this case H has a ‘ground state’: a state \psi whose expected energy \langle \psi, H \psi \rangle is as low as possible. Call this expected energy E(\beta). One can show that E(\beta) depends smoothly on \beta for \beta \ge 0, and one can write down a Taylor series for E(\beta).

On the other hand, when \beta < 0 the Hamiltonian H is not essentially self-adjoint. This means that the quantum mechanics of a particle in this potential is ill-behaved when \beta < 0. Heuristically speaking, the problem is that such a particle could tunnel through the barrier given by the local maxima of V(x) and shoot off to infinity in a finite time.

This situation is similar to Dyson’s disaster, since we have a theory that is well-behaved for \beta \ge 0 and ill-behaved for \beta < 0. As before, the bad behavior seems to arise from our ability to convert an infinite amount of potential energy into other forms of energy. However, in this simpler situation one can prove that the Taylor series for E(\beta) does not converge. Barry Simon did this around 1969. Moreover, one can prove that Borel summation, applied to this Taylor series, gives the correct value of E(\beta) for \beta \ge 0. The same is known to be true for certain quantum field theories. Analyzing these examples, one can see why summing the first few terms of a power series can give a good approximation to the correct answer even though the series diverges. The terms in the series get smaller and smaller for a while, but eventually they become huge.
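
The Taylor coefficients of E(\beta) aren't written down here, but the phenomenon — terms shrinking for a while and then exploding, with a Borel sum that still makes sense — is easy to see in a classic toy series with factorially growing coefficients: \sum_n (-1)^n n!\, x^n, whose Borel sum is \int_0^\infty e^{-t}/(1+xt)\, dt. The sketch below (Python; my own illustration, not anything from the post) compares the partial sums with the Borel sum at x = 0.1.

# Partial sums of the factorially divergent series sum_n (-1)^n n! x^n,
# compared with its Borel sum, integral_0^infty exp(-t) / (1 + x t) dt.
# The terms shrink for a while, then grow without bound.

import math

def borel_sum(x, t_max=50.0, steps=100_000):
    # simple trapezoidal rule; the integrand is ~exp(-50) beyond t_max,
    # so the truncation error is negligible for this illustration
    h = t_max / steps
    f = lambda t: math.exp(-t) / (1.0 + x * t)
    total = 0.5 * (f(0.0) + f(t_max)) + sum(f(i * h) for i in range(1, steps))
    return total * h

x = 0.1
target = borel_sum(x)
partial = 0.0
for n in range(30):
    term = (-1)**n * math.factorial(n) * x**n
    partial += term
    if n in (0, 5, 10, 15, 20, 25, 29):
        print(f"n = {n:2d}: term = {term:+.3e}, partial sum = {partial:+.4f}, Borel sum = {target:.4f}")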

Unfortunately, nobody has been able to carry out this kind of analysis for quantum electrodynamics. In fact, the current conventional wisdom is that this theory is inconsistent, due to problems at very short distance scales. In our discussion so far, we summed over Feynman diagrams with \le n vertices to get the first n terms of power series for answers to physical questions. However, one can also sum over all diagrams with \le n loops. This more sophisticated approach to renormalization, which sums over infinitely many diagrams, may dig a bit deeper into the problems faced by quantum field theories.

If we use this alternate approach for QED we find something surprising. Recall that in renormalization we impose a momentum cutoff \Lambda, essentially ignoring waves of wavelength less than \hbar/\Lambda, and use this to work out a relation between the electron’s bare charge e_\mathrm{bare}(\Lambda) and its renormalized charge e_\mathrm{ren}. We try to choose e_\mathrm{bare}(\Lambda) so that e_\mathrm{ren} equals the electron’s experimentally observed charge e. If we sum over Feynman diagrams with \le n vertices this is always possible. But if we sum over Feynman diagrams with at most one loop, it ceases to be possible when \Lambda reaches a certain very large value, namely

\displaystyle{  \Lambda \; = \; \exp\left(\frac{3 \pi}{2 \alpha} + \frac{5}{6}\right) m_e c \; \approx \; e^{647} m_e c}

According to this one-loop calculation, the electron’s bare charge becomes infinite at this point! This value of \Lambda is known as a ‘Landau pole’, since it was first noticed in about 1954 by Lev Landau and his colleagues.

What is the meaning of the Landau pole? We said that poetically speaking, the bare charge of the electron is the charge we would see if we could strip off the electron’s virtual particle cloud. A somewhat more precise statement is that e_\mathrm{bare}(\Lambda) is the charge we would see if we collided two electrons head-on with a momentum on the order of \Lambda. In this collision, there is a good chance that the electrons would come within a distance of \hbar/\Lambda from each other. The larger \Lambda is, the smaller this distance is, and the more we penetrate past the effects of the virtual particle cloud, whose polarization ‘shields’ the electron’s charge. Thus, the larger \Lambda is, the larger e_\mathrm{bare}(\Lambda) becomes.

So far, all this makes good sense: physicists have done experiments to actually measure this effect. The problem is that according to a one-loop calculation, e_\mathrm{bare}(\Lambda) becomes infinite when \Lambda reaches a certain huge value.

Of course, summing only over diagrams with at most one loop is not definitive. Physicists have repeated the calculation summing over diagrams with \le 2 loops, and again found a Landau pole. But again, this is not definitive. Nobody knows what will happen as we consider diagrams with more and more loops. Moreover, the distance \hbar/\Lambda corresponding to the Landau pole is absurdly small! For the one-loop calculation quoted above, this distance is about

\displaystyle{  e^{-647} \frac{\hbar}{m_e c} \; \approx \; 6 \cdot 10^{-294}\, \mathrm{meters} }

This is hundreds of orders of magnitude smaller than the length scales physicists have explored so far. Currently the Large Hadron Collider can probe energies up to about 10 TeV, and thus distances down to about 2 \cdot 10^{-20} meters, or about 0.00002 times the radius of a proton. Quantum field theory seems to be holding up very well so far, but no reasonable physicist would be willing to extrapolate this success down to 6 \cdot 10^{-294} meters, and few seem upset at problems that manifest themselves only at such a short distance scale.
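
These order-of-magnitude claims are easy to reproduce. Here is a minimal sketch (Python, not part of the original post; the physical constants are approximate standard values typed in by hand, and the "distance probed" estimate uses the usual rough conversion d \sim \hbar c / E):

# Reproduce the rough numbers above: the exponent in the one-loop Landau pole,
# the corresponding distance hbar/Lambda, and the distance scale probed at ~10 TeV.
# All constants are approximate; this is an order-of-magnitude check only.

import math

alpha = 1 / 137.036            # fine structure constant
hbar  = 1.054571817e-34        # J s
c     = 2.99792458e8           # m / s
m_e   = 9.1093837015e-31       # electron mass, kg
eV    = 1.602176634e-19        # J

exponent = 3 * math.pi / (2 * alpha) + 5.0 / 6.0
print(f"3 pi / (2 alpha) + 5/6           = {exponent:.1f}")     # roughly 647

compton = hbar / (m_e * c)     # reduced Compton wavelength, ~3.9e-13 m
print(f"hbar / Lambda at the Landau pole ~ {compton * math.exp(-exponent):.1e} m")  # ~6e-294 m

E_lhc = 10e12 * eV             # ~10 TeV in joules
print(f"distance probed at ~10 TeV       ~ {hbar * c / E_lhc:.1e} m")               # ~2e-20 m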

Indeed, attitudes on renormalization have changed significantly since 1948, when Feynman, Schwinger and Tomonaga developed it for QED. At first it seemed a bit like a trick. Later, as the success of renormalization became ever more thoroughly confirmed, it became accepted. However, some of the most thoughtful physicists remained worried. In 1975, Dirac said:

Most physicists are very satisfied with the situation. They say: ‘Quantum electrodynamics is a good theory and we do not have to worry about it any more.’ I must say that I am very dissatisfied with the situation, because this so-called ‘good theory’ does involve neglecting infinities which appear in its equations, neglecting them in an arbitrary way. This is just not sensible mathematics. Sensible mathematics involves neglecting a quantity when it is small—not neglecting it just because it is infinitely great and you do not want it!

As late as 1985, Feynman wrote:

The shell game that we play [. . .] is technically called ‘renormalization’. But no matter how clever the word, it is still what I would call a dippy process! Having to resort to such hocus-pocus has prevented us from proving that the theory of quantum electrodynamics is mathematically self-consistent. It’s surprising that the theory still hasn’t been proved self-consistent one way or the other by now; I suspect that renormalization is not mathematically legitimate.

By now renormalization is thoroughly accepted among physicists. The key move was a change of attitude emphasized by Kenneth Wilson in the 1970s. Instead of treating quantum field theory as the correct description of physics at arbitrarily large energy-momenta, we can assume it is only an approximation. For renormalizable theories, one can argue that even if quantum field theory is inaccurate at large energy-momenta, the corrections become negligible at smaller, experimentally accessible energy-momenta. If so, instead of seeking to take the \Lambda \to \infty limit, we can use renormalization to relate bare quantities at some large but finite value of \Lambda to experimentally observed quantities.

From this practical-minded viewpoint, the possibility of a Landau pole in QED is less important than the behavior of the Standard Model. Physicists believe that the Standard Model would suffer from a Landau pole at momenta low enough to cause serious problems if the Higgs boson were considerably more massive than it actually is. Thus, they were relieved when the Higgs was discovered at the Large Hadron Collider with a mass of about 125 GeV/c². However, the Standard Model may still suffer from a Landau pole at high momenta, as well as an instability of the vacuum.

Regardless of practicalities, for the mathematical physicist, the questions of whether QED and the Standard Model can be made into well-defined mathematical structures that obey the axioms of quantum field theory remain open problems of great interest. Most physicists believe that this can be done for pure Yang–Mills theory, but actually proving this is the first step towards winning $1,000,000 from the Clay Mathematics Institute.


by John Baez at September 21, 2016 01:00 AM

September 20, 2016

ZapperZ - Physics and Physicists

We Lost Deborah Jin
Wow! I didn't see this one coming.

I just read the news that Deborah Jin, someone who I consider to be a leading candidate to win the Nobel Prize, passed away on Sept. 15 after a battle with cancer. Her work on ultracold fermionic gases was groundbreaking, and she should have been awarded the Nobel Prize a long time ago!

Nearly two decades ago, Jin and her then PhD student Brian DeMarco were the first researchers to observe quantum degeneracy in a sufficiently cooled gas of fermionic atoms. They were the first to demonstrate the creation and control of such an ultracold "Fermi gas", which has since provided us with new insights into superconductivity and other electronic effects in materials. You can read this 2002 feature written by Jin on "A Fermi gas of atoms"

CRAP! We have lost another good one, and well before her time! Deepest condolences to her family and friends.

Edit: Here's the press release from JILA about this.

Zz.

by ZapperZ (noreply@blogger.com) at September 20, 2016 03:25 PM

September 19, 2016

ZapperZ - Physics and Physicists

What Happens When A Law Professor Tries To Use The Physics Of Climate Change
Usually, something like this doesn't have a happy ending. This happened during a congressional hearing, and involves Ronald Rotunda of Chapman University’s Fowler School of Law.

But during the hearing, Rotunda picked an odd example of such a dissenter — Jerry Mitrovica, a Harvard geoscientist whose work has shown that when, in a warming world, you lose massive amounts of ice from Greenland or Antarctica, sea level actually plunges near these great ice sheets, but rises farther away from them. The reason is gravity: Ice sheets are so massive that they pull the ocean towards them, but as they lose mass, some of the ocean surges back across the globe.

We have covered this idea extensively in the past, including by interviewing Mitrovica. He has found, for instance, that if the West Antarctic ice sheet collapses, the United States would experience much worse sea level rise than many other parts of the world, simply because it is so distant from West Antarctica. “The peak areas are 30 to 35 percent higher,” Mitrovica told me last year.

But if Greenland melts, pretty much the opposite happens — the Southern hemisphere gets worse sea level rise. And if both melt together, they might partially offset one another.

Rotunda appears to have misinterpreted Mitrovica’s important insight as reflecting a contrarian perspective on climate change.

It is always a bad idea when a person, testifying as an "expert", does not understand the source that person is using, and then has the gall to tell a physicist questioning the conclusion to "read his article".

Zz.

by ZapperZ (noreply@blogger.com) at September 19, 2016 11:37 PM

Clifford V. Johnson - Asymptotia

Kitchen Design…

(Click for larger view.)
Apparently I was designing a kitchen recently. Yes, but not one I intend to build in the physical world. It's the setting (in part) for a new story I'm working on for the book. The everyday household is a great place to have a science conversation, by the way, and this is what we will see in this story. It might be one of the most important conversations in the book in some sense.

This story is meant to be done in a looser, quicker style, and there I go again with the ridiculous level of detail... Just to get a sense of how ridiculous I'm being, note that this is not a page, but a small panel within a page of several.

The page establishes the overall setting, and hopefully roots you [...] Click to continue reading this post

The post Kitchen Design… appeared first on Asymptotia.

by Clifford at September 19, 2016 10:32 PM

Robert Helling - atdotde

Brute forcing Crazy Game Puzzles
In the 1980s, as a kid I loved my Crazy Turtles Puzzle ("Das verrückte Schildkrötenspiel"). For a number of variations, see here or here.

I had completely forgotten about those, but a few days ago, I saw a self-made reincarnation when staying at a friend's house:



I tried for a few minutes to solve it, unsuccessfully (in case it is not clear: you are supposed to arrange the nine tiles in a square such that they form color-matching arrows wherever they meet).

So I took the picture above with the plan to either try a bit more at home or to write a program to solve it. Yesterday, I had about an hour and did the latter. I am a bit proud of the implementation, and in particular of the fact that I essentially got it right on the first attempt: it produced the unique solution the first time I executed it. So, here I share it:

#!/usr/bin/perl
# Colour encoding: backs of arrows are 1..4, the matching pointy sides are 8..5,
# so a matching pair of edges always sums to 9.
# 1 rot   (red)    8
# 2 gelb  (yellow) 7
# 3 gruen (green)  6
# 4 blau  (blue)   5

# Each tile is encoded as four digits, one per edge, read in a fixed order
# around the tile.
@karten = (7151, 6754, 4382, 2835, 5216, 2615, 2348, 8253, 4786);

# Split each tile's digit string into an array of its four edge values.
foreach $karte (0..8) {
    $farbe[$karte] = [split //, $karten[$karte]];
}

&ausprobieren(0);

# Depth-first search: try to fill position $pos (0..8, left to right, top to
# bottom) with every unused tile in every rotation, and recurse.
sub ausprobieren {
    my $pos = shift;

    foreach my $karte (0..8) {
        next if $benutzt[$karte];          # tile already placed somewhere
        $benutzt[$karte] = 1;
        foreach my $dreh (0..3) {          # rotation: 0..3 quarter turns clockwise
            if ($pos % 3) {
                # not in the left column: the edge shared with the tile to the
                # left must complement that tile's facing edge (sum to 9)
                $suche = 9 - $farbe[$gelegt[$pos - 1]]->[(1 - $drehung[$gelegt[$pos - 1]]) % 4];
                next if $farbe[$karte]->[(3 - $dreh) % 4] != $suche;
            }
            if ($pos >= 3) {
                # not in the top row: the edge shared with the tile above must
                # complement that tile's facing edge (sum to 9)
                $suche = 9 - $farbe[$gelegt[$pos - 3]]->[(2 - $drehung[$gelegt[$pos - 3]]) % 4];
                next if $farbe[$karte]->[(4 - $dreh) % 4] != $suche;
            }

            # place the tile and remember its rotation
            $benutzt[$karte] = 1;
            $gelegt[$pos]    = $karte;
            $drehung[$karte] = $dreh;

            if ($pos == 8) {
                # all nine positions filled: print the solution (tile, rotation)
                print "Fertig!\n";         # "Done!"
                for $l (0..8) {
                    print "$gelegt[$l] $drehung[$gelegt[$l]]\n";
                }
            } else {
                &ausprobieren($pos + 1);
            }
        }
        $benutzt[$karte] = 0;              # backtrack: mark the tile as unused again
    }
}

Sorry for the variable names in German, but the idea should be clear. Regarding the implementation: the red, yellow, green and blue backs of the arrows get the numbers 1, 2, 3 and 4 respectively, and the pointy sides of the arrows get 8, 7, 6 and 5 (so matching combinations sum to 9).

It implements a depth-first tree search in which tile positions (numbered 0 to 8) are tried left to right, top to bottom. So tile $n$ shares a vertical edge with tile $n-1$ unless its number is 0 mod 3 (leftmost column), and it shares a horizontal edge with tile $n-3$ unless $n$ is less than 3, which means it is in the first row.

It tries rotating each tile by 0 to 3 quarter turns (90 degrees clockwise), so finding which arrow has to match a neighboring tile can also be done with mod 4 arithmetic.

by Robert Helling (noreply@blogger.com) at September 19, 2016 07:43 PM

Clifford V. Johnson - Asymptotia

Breaking, not Braking

Well, that happened. I’ve not, at least as I recollect, written a breakup letter before…until now. It had the usual “It’s not you it’s me…”, “we’ve grown apart…” sorts of phrases. And they were all well meant. This was written to my publisher, I hasten to add! Over the last … Click to continue reading this post

The post Breaking, not Braking appeared first on Asymptotia.

by Clifford at September 19, 2016 07:02 PM

The n-Category Cafe

Logical Uncertainty and Logical Induction

Quick - what’s the 10^{100}th digit of \pi?

If you’re anything like me, you have some uncertainty about the answer to this question. In fact, your uncertainty probably takes the following form: you assign a subjective probability of about \frac{1}{10} to this digit being any one of the possible values 0, 1, 2, \dots, 9. This is despite the fact that

  • the normality of \pi in base 10 is a wide open problem, and
  • even if it weren’t, nothing random is happening; the 10^{100}th digit of \pi is a particular digit, not a randomly selected one, and it being a particular value is a mathematical fact which is either true or false.

If you’re bothered by this state of affairs, you could try to resolve it by computing the 10^{100}th digit of \pi, but as far as I know nobody has the computational resources to do this in a reasonable amount of time.

Because of this lack of computational resources, among other things, you and I aren’t logically omniscient; we don’t have access to all of the logical consequences of our beliefs. The kind of uncertainty we have about mathematical questions that are too difficult for us to settle one way or another right this moment is logical uncertainty, and standard accounts of how to have uncertain beliefs (for example, assign probabilities and update them using Bayes’ theorem) don’t capture it.

Nevertheless, somehow mathematicians manage to have lots of beliefs about how likely mathematical conjectures such as the Riemann hypothesis are to be true, and even about simpler but still difficult mathematical questions such as how likely some very large complicated number N is to be prime (a reasonable guess, before we’ve done any divisibility tests, is about \frac{1}{\ln N} by the prime number theorem). In some contexts we have even more sophisticated guesses like the Cohen-Lenstra heuristics for assigning probabilities to mathematical statements such as “the class number of such-and-such complicated number field has p-part equal to so-and-so.”
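
As a quick sanity check on the 1/\ln N heuristic just mentioned, here is a small sketch (Python; my own illustration, not from the post) that counts primes in a window around N = 10^6 and compares the observed density with 1/\ln N:

# Empirical check of the "probability ~ 1/ln N that N is prime" heuristic:
# count primes in a window around N = 10**6 and compare the observed density
# with 1/ln N.  (Trial division is fast enough at this size.)

import math

def is_prime(n):
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

N = 10**6
window = 10_000
count = sum(1 for n in range(N, N + window) if is_prime(n))

print(f"observed prime density near N = {N}: {count / window:.4f}")
print(f"heuristic 1 / ln N                 : {1 / math.log(N):.4f}")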

In general, what criteria might we use to judge an assignment of probabilities to mathematical statements as reasonable or unreasonable? Given some criteria, how easy is it to find a way to assign probabilities to mathematical statements that actually satisfies them? These fundamental questions are the subject of the following paper:

Scott Garrabrant, Tsvi Benson-Tilsen, Andrew Critch, Nate Soares, and Jessica Taylor, Logical Induction. ArXiv:1609.03543.

Loosely speaking, in this paper the authors

  • describe a criterion called logical induction that an assignment of probabilities to mathematical statements could satisfy,
  • show that logical induction implies many other desirable criteria, some of which have previously appeared in the literature, and
  • prove that a computable logical inductor (an algorithm producing probability assignments satisfying logical induction) exists.

Logical induction is a weak “no Dutch book” condition; the idea is that a logical inductor makes bets about which statements are true or false, and does so in a way that doesn’t lose it too much money over time.

A warmup

Before describing logical induction, let me describe a different and more naive criterion you could ask for, but in fact don’t want to ask for because it’s too strong. Let \varphi \mapsto \mathbb{P}(\varphi) be an assignment of probabilities to statements in some first-order language; for example, we might want to assign probabilities to statements in the language of Peano arithmetic (PA), conditioned on the axioms of PA being true (which means having probability 1). Say that such an assignment \varphi \mapsto \mathbb{P}(\varphi) is coherent if

  • \mathbb{P}(\top) = 1.
  • If \varphi_1 is equivalent to \varphi_2, then \mathbb{P}(\varphi_1) = \mathbb{P}(\varphi_2).
  • \mathbb{P}(\varphi_1) = \mathbb{P}(\varphi_1 \wedge \varphi_2) + \mathbb{P}(\varphi_1 \wedge \neg \varphi_2).

These axioms together imply various other natural-looking conditions; for example, setting \varphi_1 = \top in the third axiom, we get that \mathbb{P}(\varphi_2) + \mathbb{P}(\neg \varphi_2) = 1. Various other axiomatizations of coherence are possible.

Theorem: A probability assignment such that \mathbb{P}(\varphi) = 1 for all statements \varphi in a first-order theory T is coherent iff there is a probability measure on models of T such that \mathbb{P}(\varphi) is the probability that \varphi is true in a random model.

This theorem is a logical counterpart of the Riesz-Markov-Kakutani representation theorem relating probability distributions to linear functionals on spaces of functions; I believe it is due to Gaifman.

For example, if T is PA, then the sort of uncertainty that a coherent probability assignment conditioned on PA captures is uncertainty about which of the various first-order models of PA is the “true” natural numbers. However, coherent probability assignments are still logically omniscient: syntactically, every provable statement is assigned probability 1 because they’re all equivalent to \top, and semantically, provable statements are true in every model. In particular, coherence is too strong to capture uncertainty about the digits of \pi.
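
To make the "easy direction" of the theorem above concrete, here is a small toy sketch (Python; my own illustration, using a two-atom propositional language rather than full first-order PA, so the details are assumptions for the sake of the example): put a probability measure on the four truth assignments, define \mathbb{P}(\varphi) as the probability that \varphi holds in a randomly chosen model, and check the three coherence axioms on sample sentences.

# Toy illustration of coherence: probabilities of sentences defined as the
# probability of being true in a randomly chosen model.  Here "models" are just
# the four truth assignments to two atomic statements A and B, a propositional
# stand-in for the first-order setting of the post.

from itertools import product

models = list(product([True, False], repeat=2))                  # (A, B) truth values
weights = {m: w for m, w in zip(models, [0.4, 0.3, 0.2, 0.1])}   # arbitrary probability measure

def P(phi):
    """Probability that the sentence phi (a function of (A, B)) holds in a random model."""
    return sum(w for m, w in weights.items() if phi(*m))

A = lambda a, b: a
B = lambda a, b: b
top = lambda a, b: True

# Axiom 1: P(T) = 1
assert abs(P(top) - 1.0) < 1e-12

# Axiom 2: logically equivalent sentences get equal probability
assert abs(P(lambda a, b: not (a and b)) - P(lambda a, b: (not a) or (not b))) < 1e-12

# Axiom 3: P(phi1) = P(phi1 and phi2) + P(phi1 and not phi2)
assert abs(P(A) - (P(lambda a, b: a and b) + P(lambda a, b: a and not b))) < 1e-12

print("coherence axioms hold for this toy assignment:",
      {"P(A)": P(A), "P(B)": P(B), "P(A and B)": P(lambda a, b: a and b)})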

Coherent probability assignments can update over time whenever they learn that some statement is true which they haven’t assigned probability 1 to; for example, if you start by believing PA and then come to also believe that PA is consistent, then conditioning on that belief will cause your probability distribution over models to exclude models of PA where PA is inconsistent. But this doesn’t capture the kind of updating a non-logically omniscient reasoner like you or me actually does, where our beliefs about mathematics can change solely because we’ve thought a bit longer and proven some statements that we didn’t previously know (for example, about the values of more and more digits of \pi).

Logical induction

The framework of logical induction is for describing the above kind of updating, based solely on proving more statements. It takes as input a deductive process which is slowly producing proofs of statements over time (for example, of theorems in PA), and assigns probabilities to statements that haven’t been proven yet. Remarkably, it’s able to do this in a way that eventually outpaces the deductive process, assigning high probabilities to true statements long before they are proven (see Theorem 4.2.1).

So how does logical induction work? The coherence axioms above can be justified by Dutch book arguments, following Ramsey and de Finetti, which loosely say that a bookie can’t offer a coherent reasoner a bet about mathematical statements which they will take but which is in fact guaranteed to lose them money. But this is much too strong a requirement for a reasoner who is not logically omniscient. The logical induction criterion is a weaker version of this condition; we only require that an efficiently computable bookie can’t make arbitrarily large amounts of money by betting with a logical inductor about mathematical statements unless it’s willing to take on arbitrarily large amounts of risk (see Definition 3.0.1).

This turns out to be a surprisingly useful condition to require, loosely speaking because it corresponds to being able to “notice patterns” in mathematical statements even if we can’t prove anything about them yet. A logical inductor has to be able to notice patterns that could otherwise be used by an efficiently computable bookie to exploit the inductor; for example, a logical inductor eventually assigns probability about \frac{1}{10} to claims that a very large digit of \pi has a particular value, intuitively because otherwise a bookie could continue to bet with the logical inductor about more and more digits of \pi, making money each time (see Theorem 4.4.2).

Logical induction has many other desirable properties, some of which are described in this blog post. One of the more remarkable properties is that because logical inductors are computable, they can reason about themselves, and hence assign probabilities to statements about the probabilities they assign. Despite the possibility of running into self-referential paradoxes, logical inductors eventually have accurate beliefs about their own beliefs (see Theorem 4.11.1).

Overall I’m excited about this circle of ideas and hope that they get more attention from the mathematical community. Speaking very speculatively, it would be great if logical induction shed some light on the role of probability in mathematics more generally - for example, in the use of informal probabilistic arguments for or against difficult conjectures. A recent example is Boklan and Conway’s probabilistic arguments in favor of the conjecture that there are no Fermat primes beyond those currently known.

I’ve made several imprecise claims about the contents of the paper above, so please read it to get the precise claims!

by qchu (qchu@math.berkeley.edu) at September 19, 2016 06:18 PM

Lubos Motl - string vacua and pheno

Anti-string crackpots being emulated by a critic of macroeconomics
While only a few thousand people in the world – about one part per million – have some decent idea about what string theory is, the term "string theory" has undoubtedly penetrated the mass culture. The technical name of the theory of everything is being used to promote concerts, "string theory" is used in the title of books about tennis, and visual arts have lots of "string theory" in them, too.

But the penetration is so deep that even the self-evidently marginal figures such as the anti-string crackpots have inspired some followers in totally different parts of the human activity. In particular, five days ago, a man named Paul Romer wrote a 26-page-long rant named
The Trouble With Macroeconomics
See also Stv.tv and Power Line Blog for third parties' perspective.

If you think that the title is surprisingly similar to the title of a book against physics, "The Trouble With Physics", well, you are right. It's no coincidence. Building on the example of the notorious anti-physics jihadist named Lee Smolin, Paul Romer attacks most of macroeconomics and what it's been doing since the 1970s.




To be sure, Paul Romer is a spoiled brat from the family of a left-wing Colorado governor. Probably because he grew into just another economist who always and "flawlessly" advocates the distortion of the markets by the governments and international organizations as well as unlimited loose monetary policies, he was named the chief economist of the World Bank two months ago.

Clearly, the opinion that macroeconomics is a "post-real" pile of šit is not a problem for a pseudo-intellectual with the "desired" ideology who wants to be chosen as the chief economist of a world bank, in fact, The World Bank.




Now, I think that Paul Romer is pretty much a spherical aßhole – it's an aßhole who looks like that regardless of the direction from which you observe it. He is absolutely deluded about physical sciences and I happen to think that he is largely deluded about economics, too.

But unlike him, and I think it's even more important than that, most of this "spherical shape" is a coincidence. There exists no law of Nature that would guarantee that someone who is deluded about physical sciences must be deluded about economics, too – or vice versa. The correlation between the two is probably positive – because obnoxiously stupid people tend to be wrong about almost everything – but macroeconomics and particle physics are sufficiently far apart.

Does he really believe that he can find legitimate arguments about macroeconomics by basically copying a book about particle physics, especially when it is a crackpot's popular book? Even if he tried to be inspired by the best physics research out there, it would probably be impossible to use it in any substantially positive way to advance economics.

The abstract of Romer's paper leaves no doubt that the chief economist of the World Bank is proud to be a small piece of an excrement attached to the appendix of crackpot Lee Smolin:
For more than three decades, macroeconomics has gone backwards. The treatment of identification now is no more credible than in the early 1970s but escapes challenge because it is so much more opaque. Macroeconomic theorists dismiss mere facts by feigning an obtuse ignorance about such simple assertions as "tight monetary policy can cause a recession." Their models attribute fluctuations in aggregate variables to imaginary causal forces that are not influenced by the action that any person takes. A parallel with string theory from physics hints at a general failure mode of science that is triggered when respect for highly regarded leaders evolves into a deference to authority that displaces objective fact from its position as the ultimate determinant of scientific truth.
Concerning the first sentence, the idea that all of macroeconomics has gone "backwards" for over 30 years is laughable. There are many people doing macroeconomics and they may be more right and less right. But there exist good ones – even though there can't be a consensus about who these people are – who focus on the papers of other good people, and this set of good macroeconomists obviously knows everything important that was known 30+ years ago plus something more.

Be sure that my opinion about the value and reliability of economics and macroeconomics is much less enthusiastic than the opinions about physics and high-energy physics but at some very general level, these two cases undoubtedly are analogous. The available empirical dataset is more extensive than 30+ years ago, the mathematical and computational methods are richer, a longer time has been available to collect and compare competing hypotheses. Clearly, the actual source of Romer's "trouble with macroeconomics" is that it often produces scholarly work that disagrees with his predetermined – and largely ideologically predetermined – prejudices. But it's not the purpose of science – not even economics – to confirm someone's prejudices.

The following sentence makes the true motive of Romer's jihad rather transparent:
Macroeconomic theorists dismiss mere facts by feigning an obtuse ignorance about such simple assertions as "tight monetary policy can cause a recession."
You don't need a PhD to see that what he would actually like would be to ban any research in economics that disputes the dogma that "monetary policy should never be tight". But as every sane economist knows, there are damn good reasons for the monetary policy of a central bank to tighten at a certain point.

The Federal Reserve is rather likely to continue its tightening in the coming year – and not necessarily just by the smallest 0.25% steps – and the main reason is clear: Inflation began to re-emerge in the U.S. Also, a healthy economy simply does lead to the companies' and people's desire to borrow which must be or may be responded to by increasing the interest rates – either centrally, which is a sensible policy, or at the commercial level, which reflects the lenders' desire to be profitable. After all, millions of people did feel that the very low or zero or negative rates were a sign of something unhealthy and the return to the safely positive territory may be good news for the sentiment of those folks. In Europe, something like that will surely happen at some point, perhaps a year after the U.S. An economist who thinks that "loose is good" and "tight is bad" is simply an unbalanced ideologue who must have misunderstood something very important.

And there are reasons why numerous macroeconomics papers dispute even the weaker dogma that "a tight monetary policy can cause a recession". Just to be sure, if you define recession in the standard way – as two quarters of a negative growth, measured in the usual ways – or something close to it, I do believe that loose monetary policy generally reduces the probability of a recession in coming quarters.

But a loose monetary policy always involves some deformation of the market and whenever that's the case, the GDP numbers – measured in the straightforward ways – can no longer be uncritically trusted as a reliable source of the health of the economy and of the people's well-being. These are subtle things. Economists may also have very good reasons not to be afraid of a few years of mild deflation etc. Most of the Western economies saw deflation in the last two years or so and I think that almost everyone sees that those seemed like healthy, sometimes wonderfully healthy, economic conditions. A very low inflation is great because you feel that for the same money, you will always be able to buy the same things – but you are likely to have more money in the future. It makes the optimistic planning for the future much more transparent. The idea that all people are eager to go on a shopping spree whenever inflation becomes substantial – because they feel very well – is at least oversimplified.

So Romer really wants to ban any support for "tight monetary policies" and he is only inventing illogical slurs to make their advocates look bad. He is analogous to Lee Smolin who is inventing illogical slurs, adjectives, and stories against those who want to do and who do high-energy physics right, with the help of the state-of-the-art mathematical and physical methods.

As we can see in the bulk of Romer's paper, he is mainly fighting against theories that certain changes of the economy were ignited by what he calls "imaginary shocks":
Their models attribute fluctuations in aggregate variables to imaginary causal forces that are not influenced by the action that any person takes.
If I simplify just a little bit, his belief – repeated often in the paper – is that the economy is a completely deterministic system that only depends on people's (and he really means powerful people's) decisions. But that is at least sometimes not the case. It's extremely important for economists – and macroeconomists – to consider various hypotheses that describe the observations. Some of the causes according to some of the theories may look "imaginary" to advocates of others. But that doesn't mean that they are wrong. The whole philosophies may be different (compare with natural and man-made climate change) and it's just wrong to pick the winner before the research is (and careful, impartial comparisons are) actually done.

There may be random events, random changes of the mood etc. that are the actual reasons of many things. One doesn't want his theory to be all about some arbitrary, unpredictable, almost supernatural causes. On the other hand, the assumption that all causes in economics are absolutely controllable, measurable, and predictable is rubbish. So a sane economist simply needs to operate somewhere in between. Hardly predictable events sometimes play the role but within some error margin, a big part of the economic events is predictable and a good economist simply has to master the causal forces.

I am convinced that every sane economist – and thinker – would agree with me. One wants to make the economic theories as "deterministic" or physics-like as possible; but they cannot be made entirely "deterministic", especially because the individual people's – and collective – behavior often depends on quirks, changes of the mood, emotions etc. After all, even physics – the most "clean" discipline of sciences about the world around us – has known that the phenomena aren't really deterministic, not even at the fundamental level, since 1925.

Paul Romer boasts about his view that everything is entirely deterministic – except that he obviously doesn't have any theory that could actually make such fully deterministic predictions. Instead of such a theory, he offers six slurs for those macroeconomists whom he dislikes:
  • A general type of phlogiston that increases the quantity of consumption goods
    produced by given inputs
  • An "investment-specific" type of phlogiston that increases the quantity of
    capital goods produced by given inputs
  • A troll who makes random changes to the wages paid to all workers
  • A gremlin who makes random changes to the price of output
  • Aether, which increases the risk preference of investors
  • Caloric, which makes people want less leisure
So his "knowledge" of physics amounts to six mostly physics-based words – namely phlogiston 1, phlogiston 2, a troll, a gremlin, aether, and caloric – which he uses as insults. Be sure that I could also offer 6 different colorful slurs for Romer but unlike him, I don't think that such slurs may represent the beef of a legitimate argument. The alternative theories also have some causes and we could call these causes "Gargamel" or "Rumpeltiltskin" but listeners above 6 years of age and 100 points of IQ sort of know why this is no true evidence in one way or another. Note that even if some economic changes are explained as consequences of particular people's decisions, that still usually fails to explain why the people made the decisions. Some uncertainty at least about some causes will always be present in social sciences – including quantitative social sciences such as economics.

Like Lee Smolin, what he's doing is just insulting people and inventing unflattering slogans – whose correlation with the truth is basically non-existent. The following pages are full of aether, phlogiston, trolls, and gremlins while claiming to decide about the fate of numerous serious papers on economics. Even though there are probably some memorable, well-defined pieces of šit with sharp edges over there, I don't plan to swim in that particular cesspool.

There are way too many things – mostly deep misconceptions – on those 26 pages of Romer's paper. Some of them are the same as those that I have been debunking over the years – both in the socio-philosophical blog posts as well as the physics-philosophical ones. But let me only pick a few examples.

On Page 5, Romer attacks Milton Friedman's F-twist. Romer just doesn't like this important idea that I described and defended back in 2009 (click at the previous sentence).
In response to the observation that the shocks are imaginary, a standard defense invokes Milton Friedman’s (1953) methodological assertion from unnamed authority that "the more significant the theory, the more unrealistic the assumptions (p.14)."
By the way, is Milton Friedman himself an "unnamed" authority?

But this point of Friedman's simply is true and important – and it's supported by quite some explanations, not by any "authority". When we do anything like science, the initial assumptions may be arbitrarily "crazy" according to some pre-existing prejudices and expectations and the only thing that determines the rating of the theories is the agreement of the final predictions with the observations.

And in fact, the more shocking, prejudices-breaking the assumptions are, Friedman said, and the less likely the agreement could have looked a priori, the more important the advance is and the more seriously we should treat it when the predictions happen to agree with the observations. That's also why sane physicists consider relativity and quantum mechanics to be true foundations of modern physics. They build on assumptions that may be said to be a priori bizarre. But when things are done carefully, the theories work extremely well, and this combination is what makes the theories even more important.

People like Romer and Smolin don't like this principle because they don't want to rate theories according to their predictions and achievements but according to the agreement of their assumptions with these mediocre apes' medieval, childish prejudices. Isn't the spacetime created out of a classical wooden LEGO? So Lee Smolin will dislike it. Isn't some economic development explained as a consequence of some decision of a wise global banker? Romer will identify the explanation with gremlins, trolls, phlogiston, and aether – even though, I am sure, he doesn't really know what the words mean, why they were considered, and why they are wrong.

The name of Lee Smolin appears 8 times in Romer's "paper". And in all cases, he quotes the crackpot completely uncritically, as if he were a top intellectual. Sorry, Mr Romer, even if you are just an economist, it is still true that if you can't solve the homework problem that asks you to explain why a vast majority of Lee Smolin's book is cr*p, then you are dumb as a doorknob.

A major example. Half of Page 15 of Romer's "paper" is copying some of those slurs by Smolin from Chapter 16. String theorists were said to suffer from:
  1. Tremendous self-confidence
  2. An unusually monolithic community
  3. A sense of identification with the group akin to identification with a religious
    faith or political platform
  4. A strong sense of the boundary between the group and other experts
  5. A disregard for and disinterest in ideas, opinions, and work of experts who are
    not part of the group
  6. A tendency to interpret evidence optimistically, to believe exaggerated or
    incomplete statements of results, and to disregard the possibility that the theory
    might be wrong
  7. A lack of appreciation for the extent to which a research program ought to
    involve risk
These are just insults or accusations and it's spectacularly obvious that all of them apply much more accurately to Romer and Smolin than to string theorists – and, I am a bit less certain here, than to the macroeconomists who disagree with Romer.

First of all, almost all string theorists are extremely humble and usually shy people – which is the actual reason why many of them have had quite some problems to get jobs. The accusation #1 is self-evident rubbish.

The accusation #2 is nonsense, too. The string theory community has many overlapping subfields (phenomenology including its competing braneworld/heterotic/F-theory/G2 sub-subfields, formal, mathematically motivated, applications of AdS/CFT), significant differences about many issues – the anthropic principle and the existence and describability of the black hole interior are two major examples in the last 2 decades. It's a giant amount of intellectual diversity if you realize that there are less than 1,000 "currently paid professional" string theorists in the world. Less than 1 person among 7 million is a professional string theorist. On the other hand, there is some agreement about issues that can be seemingly reliably if not rigorously demonstrated. So science simply never has the "anything goes" postmodern attitude. But to single out string theorists (and I think that also macroeconomists) as examples of a "monolithic community" is just silly.

In the item #3, he talks about a fanatical religious identification with the community. People who know me a little bit – and who know that I almost certainly belong among the world's top 10 most individualistic and independent people – know quite some counterexample. But the identification is silly, too. Many string theorists also tend to identify with very different types of folks. And even the political diversity among the string theorists is a bit higher than in the general Academia. At least you know that I am not really left-wing, to put it mildly. But there are other, somewhat less spectacular, examples.

Concerning #4, it is true that there's a strong sense of a boundary between string theorists and non-string theorists. But this "sense" exists because the very sharp boundary indeed exists. String theory – and the expertise needed to understand and investigate it – is similar to a skyscraper with many floors. One really needs quite some talent and patience to get to build all of them (by the floors, I mean the general "physics subjects" that depend on each other; string theory is the "last one") and get to the roof. Once he's on the roof, he sees the difference between the skyscraper and the nearby valleys really sharply. The higher the skyscraper is, the more it differs from the lowland that surrounds it. String theory is the highest skyscraper in all of science so the "sense" of the boundary between it and the surrounding lowlands is naturally the strongest one, indeed.

Top string experts are generally uninterested in the work of non-members, as #5 says, because they can see that those just don't work. They are igloos – sometimes demolished igloos – that simply look like minor structures on the background from the viewpoint of the skyscraper's roof. A Romer or a Smolin may scream that it's politically incorrect to point out that string theory is more coherent and string theorists are smarter and have learned many more things that depend on each other etc. etc. – except that whether or not these things are politically incorrect, they're true – and this truth is as self-evident to the string theorists as the fact that you're pretty high if you're on the roof of the Empire State Building. String theorists usually don't emphasize those things – after all, I believe that I am the only person in the world who systematically does so – but what annoys people like Smolin and Romer is that these things are true and because these true facts imply that neither Smolin nor Romer are anywhere close to the smartest people on Earth, they attack string theorists because of this fact. But this fact isn't string theorists' fault.

He says in #6 that "evidence is interpreted optimistically". This whole term "optimistically" reflects Romer's complete misunderstanding how cutting-edge physics works. Physical sciences – like mathematics – work hard to separate statements to right and wrong, not pessimistic and optimistic. There's no canonical way to attach the label "optimistic" and "pessimistic" to scientific statements. If someone says that there exists a set of arguments that will be found that will invalidate string theory and explain the world using some alternative theory with a unique vacuum etc., Romer may call it a "pessimistic" for string theorists. Except that string theorists would be thrilled if this were possible. So making such a prediction would be immensely optimistic even according to almost all string theorists. The problem with this assertion is that it is almost certainly wrong. There doesn't exist a tiny glimpse of evidence that something like that is possible. String theorists would love to see some groundbreaking progress that totally changes the situation of the field except that changes of the most radical magnitude don't take place too often and when someone talks about those revolutions, it isn't the same as actually igniting such a revolution. So without something that totally disrupts the balance, string theorists – i.e. theoretical physicists who carefully study the foundations of physics beyond quantum field theory – continue to have the beliefs they have extracted from the evidence that has actually been presented. Of course that string theory's being the only "game in town" when it comes to the description of Nature including QFTs and gravity is one of these conclusions that the experts have drawn.

The point #7 says that string theorists don't appreciate the importance of risk. This is just an absolutely incredible lie, the converse of the truth. Throughout the 1970s, there were just a dozen string theorists who did those spectacular things with the risk that they will die of hunger. This existential risk may have gone away in the 1980s and 1990s but it's largely back. Young ingenious people are studying string theory while being completely ignorant whether they will be able to feed themselves for another year. Some of them have worked – and hopefully are working at this moment, when I am typing this sentence – on some very ambitious projects. It's really the same ambition that Romer and Smolin criticize elsewhere – another reason to say that they're logically inconsistent cranks.

Surprisingly, the words "testable" and "falsifiable" haven't appeared in Romer's text. Well, those were favorite demagogic buzzwords of Mr Peter Woit, the world's second most notorious anti-string crackpot. But Smolin has said similar things himself, too. The final thing I want to say is that it's very ironic for Romer to celebrate this anti-physics demagogy which often complained about the absence of "falsifiability". Why?

Romer's most well-known contribution before he became a bureaucrat was his being one of a dozen economists who advocated the endogenous growth theory, the statement that the growth arises from within, from investment in human capital etc., not from external forces (Romer did those things around 1986). Great, to some extent it is obvious, it's hard to immediately see what they really proposed or discovered.

But it's funny to look at the criticisms of this endogenous theory. There are some "technical" complaints that it incorrectly accounts for the convergence or divergence of incomes in various countries. However, what's particularly amusing is the final paragraph:
Paul Krugman criticized endogenous growth theory as nearly impossible to check by empirical evidence; “too much of it involved making assumptions about how unmeasurable things affected other unmeasurable things.”
Just to be sure, I am in no way endorsing Krugman here. But you may see that Krugman has made the claim that "Romer's theory is unfalsifiable" using words that are basically identical to those used by the anti-string critics against string theory. However, for some reasons, Romer has 100% identified himself with the anti-string critics. We may also say that Krugman basically criticizes Romer for using "imaginary causes" – the very same criticism that Romer directs against others! You know, the truth is that every important enough theory contains some causes that may look imaginary to skeptics or those who haven't internalized or embraced the theory.

As I have emphasized for more than a decade, all the people who trust Smolin's or Woit's criticisms as criticisms that are particularly apt for string theory are brainwashed simpletons. Whenever there is some criticism that may be relevant for somebody, it's always spectacularly clear to any person with at least some observational skills and intelligence that the criticism applies much more accurately to the likes of Smolin, Woit, and indeed, Romer themselves, than it does to string theorists.

Smolins, Woits, and Romers don't do any meaningful research today and they know that they couldn't become influential using this kind of work. So they want to be leaders in the amount of slurs and accusations that they emit and throw at actual active researchers – even if these accusations actually describe themselves much more than they describe anyone else. The world is full of worthless parasites such as Smolin, Woit, and Romer who endorse each other across the fields – plus millions of f*cked-up gullible imbeciles who are inclined to take these offensive lies seriously. Because the amount of stupidity in the world is this overwhelming, one actually needs some love for risk to simply point these things out.

by Luboš Motl (noreply@blogger.com) at September 19, 2016 02:42 PM

Tommaso Dorigo - Scientificblogging

Are There Two Higgses ? No, And I Won Another Bet!
The 2012 measurements of the Higgs boson, performed by ATLAS and CMS on 7- and 8-TeV datasets collected during Run 1 of the LHC, were a giant triumph of fundamental physics, which conclusively showed the correctness of the theoretical explanation of electroweak symmetry breaking conceived in the 1960s.

The Higgs boson signals found by the experiments were strong and coherent enough to convince physicists as well as the general public, but at the same time the few small inconsistencies unavoidably present in any data sample, driven by statistical fluctuations, were a stimulus for fantasy interpretations. Supersymmetry enthusiasts, in particular, saw the 125 GeV boson as the first found of a set of five. SUSY in fact requires the presence of at least five such states.

read more

by Tommaso Dorigo at September 19, 2016 12:06 PM

John Baez - Azimuth

Struggles with the Continuum (Part 5)

Quantum field theory is the best method we have for describing particles and forces in a way that takes both quantum mechanics and special relativity into account. It makes many wonderfully accurate predictions. And yet, it has embroiled physics in some remarkable problems: struggles with infinities!

I want to sketch some of the key issues in the case of quantum electrodynamics, or ‘QED’. The history of QED has been nicely told here:

• Silvan Schweber, QED and the Men Who Made It: Dyson, Feynman, Schwinger, and Tomonaga, Princeton U. Press, Princeton, 1994.

Instead of explaining the history, I will give a very simplified account of the current state of the subject. I hope that experts forgive me for cutting corners and trying to get across the basic ideas at the expense of many technical details. The nonexpert is encouraged to fill in the gaps with the help of some textbooks.

QED involves just one dimensionless parameter, the fine structure constant:

\displaystyle{ \alpha = \frac{1}{4 \pi \epsilon_0} \frac{e^2}{\hbar c} \approx \frac{1}{137.036} }

Here e is the electron charge, \epsilon_0 is the permittivity of the vacuum, \hbar is Planck’s constant and c is the speed of light. We can think of \alpha^{1/2} as a dimensionless version of the electron charge. It says how strongly electrons and photons interact.
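For concreteness, here is a minimal Python check of this number (a small illustrative sketch, using standard CODATA values for the constants):

import math

e    = 1.602176634e-19   # electron charge, in coulombs
eps0 = 8.8541878128e-12  # vacuum permittivity, in F/m
hbar = 1.054571817e-34   # reduced Planck constant, in J*s
c    = 2.99792458e8      # speed of light, in m/s

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(alpha, 1 / alpha)  # roughly 0.0072974 and 137.036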

Nobody knows why the fine structure constant has the value it does! In computations, we are free to treat it as an adjustable parameter. If we set it to zero, quantum electrodynamics reduces to a free theory, where photons and electrons do not interact with each other. A standard strategy in QED is to take advantage of the fact that the fine structure constant is small and expand answers to physical questions as power series in \alpha^{1/2}. This is called ‘perturbation theory’, and it allows us to exploit our knowledge of free theories.

One of the main questions we try to answer in QED is this: if we start with some particles with specified energy-momenta in the distant past, what is the probability that they will turn into certain other particles with certain other energy-momenta in the distant future? As usual, we compute this probability by first computing a complex amplitude and then taking the square of its absolute value. The amplitude, in turn, is computed as a power series in \alpha^{1/2}.

The term of order \alpha^{n/2} in this power series is a sum over Feynman diagrams with n vertices. For example, suppose we are computing the amplitude for two electrons with some specified energy-momenta to interact and become two electrons with some other energy-momenta. One Feynman diagram appearing in the answer is this:

Here the electrons exchange a single photon. Since this diagram has two vertices, it contributes a term of order \alpha. The electrons could also exchange two photons:

giving a term of order \alpha^2. A more interesting term of order \alpha^2 is this:

Here the electrons exchange a photon that splits into an electron-positron pair and then recombines. There are infinitely many diagrams with two electrons coming in and two going out. However, there are only finitely many with n vertices. Each of these contributes a term proportional to \alpha^{n/2} to the amplitude.

In general, the external edges of these diagrams correspond to the experimentally observed particles coming in and going out. The internal edges correspond to ‘virtual particles’: that is, particles that are not directly seen, but appear in intermediate steps of a process.

Each of these diagrams is actually a notation for an integral! There are systematic rules for writing down the integral starting from the Feynman diagram. To do this, we first label each edge of the Feynman diagram with an energy-momentum, a variable p \in \mathbb{R}^4. The integrand, which we shall not describe here, is a function of all these energy-momenta. In carrying out the integral, the energy-momenta of the external edges are held fixed, since these correspond to the experimentally observed particles coming in and going out. We integrate over the energy-momenta of the internal edges, which correspond to virtual particles, while requiring that energy-momentum is conserved at each vertex.

However, there is a problem: the integral typically diverges! Whenever a Feynman diagram contains a loop, the energy-momenta of the virtual particles in this loop can be arbitrarily large. Thus, we are integrating over an infinite region. In principle the integral could still converge if the integrand goes to zero fast enough. However, we rarely have such luck.

What does this mean, physically? It means that if we allow virtual particles with arbitrarily large energy-momenta in intermediate steps of a process, there are ‘too many ways for this process to occur’, so the amplitude for this process diverges.

Ultimately, the continuum nature of spacetime is to blame. In quantum mechanics, particles with large momenta are the same as waves with short wavelengths. Allowing light with arbitrarily short wavelengths created the ultraviolet catastrophe in classical electromagnetism. Quantum electromagnetism averted that catastrophe—but the problem returns in a different form as soon as we study the interaction of photons and charged particles.

Luckily, there is a strategy for tackling this problem. The integrals for Feynman diagrams become well-defined if we impose a ‘cutoff’, integrating only over energy-momenta p in some bounded region, say a ball of some large radius \Lambda. In quantum theory, a particle with momentum of magnitude greater than \Lambda is the same as a wave with wavelength less than \hbar/\Lambda. Thus, imposing the cutoff amounts to ignoring waves of short wavelength—and for the same reason, ignoring waves of high frequency. We obtain well-defined answers to physical questions when we do this. Unfortunately the answers depend on \Lambda, and if we let \Lambda \to \infty, they diverge.
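To get a feel for the divergence, here is a toy numerical sketch (a generic logarithmically divergent scalar loop integral, not an actual QED diagram): after the angular integration, a typical 4-dimensional Euclidean loop integral such as \int d^4 p / (p^2 + m^2)^2 reduces to a radial integral whose value keeps growing, like \log \Lambda, as the cutoff is raised.

import math
from scipy.integrate import quad

m = 1.0  # toy mass scale for the virtual particle

def radial_integrand(p):
    # radial part of the toy integral d^4p / (p^2 + m^2)^2 in Euclidean signature
    return p**3 / (p**2 + m**2)**2

for cutoff in (1e1, 1e2, 1e3, 1e4):
    value, _ = quad(radial_integrand, 0.0, cutoff)
    print(f"Lambda = {cutoff:8.0f}   integral = {value:7.3f}   log(Lambda) = {math.log(cutoff):7.3f}")

The integral is finite for every finite cutoff but grows without bound as the cutoff is removed, which is exactly the problem renormalization has to address.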

However, this is not the correct limiting procedure. Indeed, among the quantities that we can compute using Feynman diagrams are the charge and mass of the electron! Its charge can be computed using diagrams in which an electron emits or absorbs a photon:

Similarly, its mass can be computed using a sum over Feynman diagrams where one electron comes in and one goes out.

The interesting thing is this: to do these calculations, we must start by assuming some charge and mass for the electron—but the charge and mass we get out of these calculations do not equal the masses and charges we put in!

The reason is that virtual particles affect the observed charge and mass of a particle. Heuristically, at least, we should think of an electron as surrounded by a cloud of virtual particles. These contribute to its mass and ‘shield’ its electric field, reducing its observed charge. It takes some work to translate between this heuristic story and actual Feynman diagram calculations, but it can be done.

Thus, there are two different concepts of mass and charge for the electron. The numbers we put into the QED calculations are called the ‘bare’ charge and mass, e_\mathrm{bare} and m_\mathrm{bare}. Poetically speaking, these are the charge and mass we would see if we could strip the electron of its virtual particle cloud and see it in its naked splendor. The numbers we get out of the QED calculations are called the ‘renormalized’ charge and mass, e_\mathrm{ren} and m_\mathrm{ren}. These are computed by doing a sum over Feynman diagrams. So, they take virtual particles into account. These are the charge and mass of the electron clothed in its cloud of virtual particles. It is these quantities, not the bare quantities, that should agree with experiment.

Thus, the correct limiting procedure in QED calculations is a bit subtle. For any value of \Lambda and any choice of e_\mathrm{bare} and m_\mathrm{bare}, we compute e_\mathrm{ren} and m_\mathrm{ren}. The necessary integrals all converge, thanks to the cutoff. We choose e_\mathrm{bare} and m_\mathrm{bare} so that e_\mathrm{ren} and m_\mathrm{ren} agree with the experimentally observed charge and mass of the electron. The bare charge and mass chosen this way depend on \Lambda, so call them e_\mathrm{bare}(\Lambda) and m_\mathrm{bare}(\Lambda).

Next, suppose we want to compute the answer to some other physics problem using QED. We do the calculation with a cutoff \Lambda, using e_\mathrm{bare}(\Lambda) and m_\mathrm{bare}(\Lambda) as the bare charge and mass in our calculation. Then we take the limit \Lambda \to \infty.

In short, rather than simply fixing the bare charge and mass and letting \Lambda \to \infty, we cleverly adjust the bare charge and mass as we take this limit. This procedure is called ‘renormalization’, and it has a complex and fascinating history:

• Laurie M. Brown, ed., Renormalization: From Lorentz to Landau (and Beyond), Springer, Berlin, 2012.
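Here is a deliberately crude numerical sketch of the tuning procedure just described. Everything in it is invented for illustration: the 'theory' relates the renormalized charge to the bare charge through a made-up logarithmic correction, and we solve for the bare charge that holds the renormalized charge fixed as the cutoff grows. Only the logic of 'adjust the bare parameter while removing the cutoff' is the point; none of the formulas are real QED.

import math
from scipy.optimize import brentq

k = 0.01            # made-up coefficient of the cutoff-dependent correction
e_measured = 0.30   # the value we demand for the renormalized charge

def e_ren(e_bare, cutoff):
    # toy relation between the bare and renormalized charge at cutoff Lambda
    return e_bare * (1.0 + k * e_bare**2 * math.log(cutoff))

def toy_observable(e_bare, cutoff):
    # another made-up prediction computed with the same bare charge and cutoff
    return e_ren(e_bare, cutoff)**2 / (1.0 + 1.0 / math.log(cutoff))

for cutoff in (1e2, 1e4, 1e8, 1e16):
    e_bare = brentq(lambda x: e_ren(x, cutoff) - e_measured, 0.0, 10.0)
    print(f"Lambda = {cutoff:.0e}   e_bare = {e_bare:.6f}   prediction = {toy_observable(e_bare, cutoff):.6f}")

The bare charge drifts slowly as the cutoff is raised, while the other prediction settles down to a finite limit; in genuine QED the drift is computed from Feynman diagrams rather than postulated.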

There are many technically different ways to carry out renormalization, and our account so far neglects many important issues. Let us mention three of the simplest.

First, besides the classes of Feynman diagrams already mentioned, we must also consider those where one photon goes in and one photon goes out, such as this:

These affect properties of the photon, such as its mass. Since we want the photon to be massless in QED, we have to adjust parameters as we take \Lambda \to \infty to make sure we obtain this result. We must also consider Feynman diagrams where nothing comes in and nothing comes out—so-called ‘vacuum bubbles’—and make these behave correctly as well.

Second, the procedure just described, where we impose a ‘cutoff’ and integrate over energy-momenta p lying in a ball of radius \Lambda, is not invariant under Lorentz transformations. Indeed, any theory featuring a smallest time or smallest distance violates the principles of special relativity: thanks to time dilation and Lorentz contractions, different observers will disagree about times and distances. We could accept that Lorentz invariance is broken by the cutoff and hope that it is restored in the \Lambda \to \infty limit, but physicists prefer to maintain symmetry at every step of the calculation. This requires some new ideas: for example, replacing Minkowski spacetime with 4-dimensional Euclidean space. In 4-dimensional Euclidean space, Lorentz transformations are replaced by rotations, and a ball of radius \Lambda is a rotation-invariant concept. To do their Feynman integrals in Euclidean space, physicists often let time take imaginary values. They do their calculations in this context and then transfer the results back to Minkowski spacetime at the end. Luckily, there are theorems justifying this procedure.

Third, besides infinities that arise from waves with arbitrarily short wavelengths, there are infinities that arise from waves with arbitrarily long wavelengths. The former are called ‘ultraviolet divergences’. The latter are called ‘infrared divergences’, and they afflict theories with massless particles, like the photon. For example, in QED the collision of two electrons will emit an infinite number of photons with very long wavelengths and low energies, called ‘soft photons’. In practice this is not so bad, since any experiment can only detect photons with energies above some nonzero value. However, infrared divergences are conceptually important. It seems that in QED any electron is inextricably accompanied by a cloud of soft photons. These are real, not virtual particles. This may have remarkable consequences.

Battling these and many other subtleties, many brilliant physicists and mathematicians have worked on QED. The good news is that this theory has been proved to be ‘perturbatively renormalizable’:

• J. S. Feldman, T. R. Hurd, L. Rosen and J. D. Wright, QED: A Proof of Renormalizability, Lecture Notes in Physics 312, Springer, Berlin, 1988.

• Günter Scharf, Finite Quantum Electrodynamics: The Causal Approach, Springer, Berlin, 1995.

This means that we can indeed carry out the procedure roughly sketched above, obtaining answers to physical questions as power series in \alpha^{1/2}.

The bad news is we do not know if these power series converge. In fact, it is widely believed that they diverge! This puts us in a curious situation.

For example, consider the magnetic dipole moment of the electron. An electron, being a charged particle with spin, has a magnetic field. A classical computation says that its magnetic dipole moment is

\displaystyle{ \vec{\mu} = -\frac{e}{2m_e} \vec{S} }

where \vec{S} is its spin angular momentum. Quantum effects correct this computation, giving

\displaystyle{ \vec{\mu} = -g \frac{e}{2m_e} \vec{S} }

for some constant g called the gyromagnetic ratio, which can be computed using QED as a sum over Feynman diagrams with an electron exchanging a single photon with a massive charged particle:

The answer is a power series in \alpha^{1/2}, but since all these diagrams have an even number of vertices, it only contains integral powers of \alpha. The lowest-order term gives simply g = 2. In 1948, Julian Schwinger computed the next term and found a small correction to this simple result:

\displaystyle{ g = 2 + \frac{\alpha}{\pi} \approx 2.00232 }
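Plugging in the value of \alpha quoted above, the arithmetic is a one-liner:

import math

alpha = 1 / 137.036
print(2 + alpha / math.pi)  # about 2.0023228, Schwinger's correction to g = 2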

By now a team led by Toichiro Kinoshita has computed g up to order \alpha^5. This requires computing over 13,000 integrals, one for each Feynman diagram of the above form with up to 10 vertices! The answer agrees very well with experiment: in fact, if we also take other Standard Model effects into account we get agreement to roughly one part in 10^{12}.

This is the most accurate prediction in all of science.

However, as mentioned, it is widely believed that this power series diverges! Next time I’ll explain why physicists think this, and what it means for a divergent series to give such a good answer when you add up the first few terms.
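To see how a divergent series can nevertheless be useful, here is a standard textbook toy example (it has nothing to do with QED specifically): the integral \int_0^\infty e^{-t}/(1 + x t)\, dt has the divergent asymptotic expansion \sum_n (-1)^n \, n! \, x^n, and its partial sums first approach the true value and then blow up.

import math
from scipy.integrate import quad

x = 0.2
true_value, _ = quad(lambda t: math.exp(-t) / (1 + x * t), 0.0, math.inf)

partial_sum = 0.0
for n in range(21):
    partial_sum += (-1)**n * math.factorial(n) * x**n
    if n in (2, 4, 6, 10, 15, 20):
        print(f"order {n:2d}: partial sum = {partial_sum:14.4f}   true value = {true_value:.4f}")

The best accuracy is reached at an order of roughly 1/x; after that, adding more terms only makes things worse. The QED series is expected to behave the same way, just at absurdly high order because \alpha is so small.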


by John Baez at September 19, 2016 01:00 AM

September 16, 2016

Symmetrybreaking - Fermilab/SLAC

The secret lives of long-lived particles

A theoretical species of particle might answer nearly every question about our cosmos—if scientists can find it.

The universe is unbalanced.

Gravity is tremendously weak. But the weak force, which allows particles to interact and transform, is enormously strong. The mass of the Higgs boson is suspiciously petite. And the catalog of the makeup of the cosmos? Ninety-six percent incomplete.

Almost every observation of the subatomic universe can be explained by the Standard Model of particle physics—a robust theoretical framework bursting with verifiable predictions. But because of these unsolved puzzles, the math is awkward, incomplete and filled with restrictions.

A few more particles would solve almost all of these frustrations. Supersymmetry (nicknamed SUSY for short) is a colossal model that introduces new particles into the Standard Model’s equations. It rounds out the math and ties up loose ends. The only problem is that after decades of searching, physicists have found none of these new friends.

But maybe the reason physicists haven’t found SUSY (or other physics beyond the Standard Model) is because they’ve been looking through the wrong lens.

“Beautiful sets of models keep getting ruled out,” says Jessie Shelton, a theorist at the University of Illinois, “so we’ve had to take a step back and consider a whole new dimension in our searches, which is the lifetime of these particles.”

In the past, physicists assumed that new particles produced in particle collisions would decay immediately, almost precisely at their points of origin. Scientists can catch particles that behave this way—for example, Higgs bosons—in particle detectors built around particle collision points. But what if new particles had long lifetimes and traveled centimeters—even kilometers—before transforming into something physicists could detect?

This is not unprecedented. Bottom quarks, for instance, can travel a few tenths of a millimeter before decaying into more stable particles. And muons can travel several kilometers (with the help of special relativity) before transforming into electrons and neutrinos. Many theorists are now predicting that there may be clandestine species of particles that behave in a similar fashion. The only catch is that these long-lived particles must rarely interact with ordinary matter, thus explaining why they've escaped detection for so long. One possible explanation for this aloof behavior is that long-lived particles dwell in a hidden sector of physics.

“Hidden-sector particles are separated from ordinary matter by a quantum mechanical energy barrier—like two villages separated by a mountain range,” says Henry Lubatti from the University of Washington. “They can be right next to each other, but without a huge boost in energy to get over the peak, they’ll never be able to interact with each other.”

High-energy collisions generated by the Large Hadron Collider could kick these hidden-sector particles over this energy barrier into our own regime. And if the LHC can produce them, scientists should be able to see the fingerprints of long-lived particles imprinted in their data.

Long-lived particles jolted into our world by the LHC would most likely fly at close to the speed of light for between a few micrometers and a few hundred thousand kilometers before transforming into ordinary and measurable matter. This incredibly generous range makes it difficult for scientists to pin down where and how to look for them.

But the lifetime of a subatomic particle is much like that of any living creature. Each type of particle has an average lifespan, but the exact lifetime of an individual particle varies. If these long-lived particles can travel thousands of kilometers before decaying, scientists are hoping that they’ll still be able to catch a few of the unlucky early-transformers before they leave the detector. Lubatti and his collaborators have also proposed a new LHC surface detector, which would extend their search range by many orders of magnitude.
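A back-of-the-envelope way to see this, with purely made-up numbers rather than any experiment's actual acceptance: decay lengths follow an exponential distribution, so even a particle whose mean decay length is a kilometer has a small but nonzero chance of decaying within a detector-sized region.

import random

mean_decay_length_m = 1000.0  # assumed mean decay length (illustrative only)
detector_extent_m = 10.0      # assumed size of the detector region (illustrative only)
trials = 1_000_000

decays_inside = sum(
    1 for _ in range(trials)
    if random.expovariate(1.0 / mean_decay_length_m) < detector_extent_m
)
print(decays_inside / trials)  # about 1 - exp(-10/1000), i.e. roughly 1 percent

With enough collisions, even a percent-level (or far smaller) fraction of unlucky early decays can add up to a visible signal.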

Because these long-lived particles themselves don’t interact with the detector, their signal would look like a stream of ordinary matter spontaneously appearing out of nowhere.

“For instance, if a long lived particle decayed into quarks while inside the muon detector, it would mimic the appearance of several muons closely clustered together,” Lubatti says. “We are triggering on events like this in the ATLAS experiment.” After recording the events, scientists use custom algorithms to reconstruct the origins of these clustered particles to see if they could be the offspring of an invisible long-lived parent.

If discovered, this new breed of matter could help answer several lingering questions in physics.

“Long-lived particles are not a prediction of a single new theory, but rather a phenomenon that could fit into almost all of our frameworks for beyond-the-Standard-Model physics,” Shelton says.

In addition to rounding out the Standard Model’s mathematics, inert long-lived particles could be cousins of dark matter—an invisible form of matter that only interacts with the visible cosmos through gravity. They could also help explain the origin of matter after the Big Bang.

“So many of us have spent a lifetime studying such a tiny fraction of the universe,” Lubatti says. “We’ve understood a lot, but there’s still a lot we don’t understand—an enormous amount we don’t understand. This gives me and my colleagues pause.”

by Sarah Charley at September 16, 2016 01:00 PM

Lubos Motl - string vacua and pheno

String theory lives its first, exciting life
Gross, Dijkgraaf mostly join the sources of deluded anti-string vitriol

Just like the Czech ex-president has said that the Left has definitively won the war against the Right for any foreseeable future, I think it's true that the haters of modern theoretical physics have definitively won the war for the newspapers and the bulk of the information sources.

The Quanta Magazine is funded by the Simons Foundation. Among the owners of the media addressing non-experts, Jim Simons is as close to the high-energy theoretical physics research community as you can get. But the journalists are independent etc. and the atmosphere among the physics writers is bad so no one could prevent the creation of an unfortunate text
The Strange Second Life of String Theory
by Ms K.C. Cole. The text is a mixed, and I would say mostly negative, package of various sentiments concerning the state of string theory. The claim about an alleged "failure of string theory" is repeated, in various words, about 30 times in that article. It has become nearly mandatory for journalists to insert this spectacular lie into basically every new popular text about string theory. Only journalists who have some morality avoid this lie – and there aren't too many.




With an omnipresent negative accent, the article describes the richness or complexity of string theory as people have understood it in recent years and its penetration to various adjacent scientific disciplines. What I find really annoying is that some very powerful string theorists – David Gross and Robbert Dijkgraaf – have basically joined this enterprise.

They are still exceptions – I am pretty sure that Edward Witten, Cumrun Vafa, and many others couldn't be abused to write similar anti-string rants – but voices such as Gross' and Dijkgraaf's are the privileged exceptions among the corrupt class of journalist hyenas because they are willing to say something that fits the journalists' pre-determined "narrative".




OK, let me mention a few dozens of problems I have with that article.
The Strange Second Life of String Theory
It's the title. Well, it's nonsense. One could talk about a second life if either string theory had died at some moment in the past and been resuscitated; or if one could separate its aspects into two isolated categories, "lives".

It's spectacularly obvious that none of these conditions holds. String theory has never "died" so it couldn't have been resuscitated for a second life. And the applications here and there are continuously connected with all other, including the most formal, aspects of string theory.

So there's simply just one life and the claim about a "second life" is a big lie by itself. The subtitle is written down to emphasize the half-terrible, half-successful caricature of string theory that this particular writer, K.C. Cole, decided to advocate:
String theory has so far failed to live up to its promise as a way to unite gravity and quantum mechanics. At the same time, it has blossomed into one of the most useful sets of tools in science.
Well, string theory has been known to consistently unify gravity and quantum mechanics from the 1970s, and within fully realistic supersymmetric models, since the 1980s. Already in the 1970s, it was understood why string theory avoids the ultraviolet divergences that spoil the more straightforward attempts to quantize Einstein's gravity. In the 1980s, it was shown that (super)string theory has solutions that achieve this consistency; but they also contain forces, fields, particles, and processes of all qualitative types that are needed to explain all the observations that have ever been made. Whether or not we know a compactification that precisely matches Nature around us, we already know that string theory has proven that gravity and quantum mechanics are reconcilable.

So already decades ago, string theory has successfully unified gravity and quantum mechanics. No evidence whatsoever has ever emerged that something was wrong about these proofs of the consistency. So the claim about the "failure" to unify gravity and quantum mechanics is just a lie.

You may see that Cole's basic message is simple. She wants to claim that string theory is split into two parts, a failed one and a healthy one. Moreover, the allegedly failed one is the very core of string theory – all the conceptual and unification themes. The reality is that the split doesn't exist; and the formal, conceptual, unification theme in string theory is the essential and priceless one.

This deceitful theme is repeated many, many times by K.C. Cole in her text. There are lots of other deceptions, too:
To be sure, the theory came with unsettling implications. The strings were too small to be probed by experiment and lived in as many as 11 dimensions of space.
Both of these "unsettling implications of string theory" are just rubbish. It's very likely that strings are very small and the size is close to the fundamental Planck scale, \(10^{-35}\) meters. But this wasn't a new insight, as you might guess if you were capable of noticing the name "Planck" in the previous sentence. Max Planck determined that the fundamental processes of Nature probably take place at the distance of the Planck length more than 100 years ago.

It follows directly from dimensional analysis. There may be loopholes or not. The loopholes were imaginable without string theory and they are even more clearly imaginable and specific with string theory (old and large extra dimensions etc.). But string theory has only made the older ideas about the scale more specific. The ultrashort magnitude of the Planck length was in no way a new implication of string theory.

The existence of extra dimensions of space may be said to be string theory's implication. Older theories were already trying to unify gravity and electromagnetism using an extra dimension in 1919 but string theory has indeed made those extra dimensions unavoidable. But what's wrong is the claim that this implication is "unsettling". The existence of extra dimensions of some size – which may be as short as a "few Planck lengths" but may also be much longer – is a wonderful prediction of string theory that is celebrated as a great discovery.

Although it is not "experimentally proven as a must" at this point, it is compatible with all observations we know and people who understand the logic know very well that the extra dimensions wonderfully agree with the need for a structure that explains the – seemingly complicated – list of particles and interactions that has been discovered experimentally. This list and its complexity are in one-to-one correspondence with the shape and structure of the extra dimensions. This identification – the particle and fields spectrum is explained by the shape – sounds wonderful at the qualitative level. But calculations show that it actually works.

So when someone assigns negative sentiments to this groundbreaking advance, she is only exposing her idiocy.

It's more frustrating to see what David Gross is saying these days:
For a time, many physicists believed that string theory would yield a unique way to combine quantum mechanics and gravity. “There was a hope. A moment,” said David Gross, an original player in the so-called Princeton String Quartet, a Nobel Prize winner and permanent member of the Kavli Institute for Theoretical Physics at the University of California, Santa Barbara. “We even thought for a while in the mid-’80s that it was a unique theory.”
What people believed for a few months in the mid 1980s was that there was a unique realistic string theory compactified to \(d=4\). It was understood early on that there existed compactified string theories that don't match the particle spectrum we have observed. But it was understood that there could be one that matches it perfectly. And it's still as true as it was 30 years ago. The theoretically acceptable list of solutions is not unique but the solution that describes Nature is unique.

Moreover, in the duality revolution of mid 1990s, people realized that all the seemingly inequivalent "string theories", as they would call them in the previous decades, are actually connected by dualities or adjustments of the moduli. So they're actually not separate theories but mutually connected solutions of one theory. That's why all competent experts stopped using the term "string theories" in the plural after the duality revolution.

David Gross likes to say that "string theory is a framework" – in the sense that it has many "specific models", just like quantum field theories may come in the form of "many inequivalent models", and these models share some mathematical methods and principles while we also need to find out which of them is relevant if we want to make a particular prediction.

So far so good. But there's also a difference between quantum field theory and string theory. Two different QFTs are really distinct, inequivalent, different theories – encoded in different Lagrangians, for example – and there's no way to obtain the objects of one QFT, with its Lagrangian, inside another QFT with a different Lagrangian. But in string theory, one can always get objects of one kind by doing some (possibly extreme, but finite) change of the space or vacuum that starts in another solution. All the different vacua and physical objects and phenomena in them do follow from some exactly identical, complete laws – it's just the effective laws for an expansion around a (vacuum or another) state that may come in many forms.

String theory is one theory. No legitimate "counter-evidence" that would revert this insight of the 1990s has been found ever since. Gross adds some "slightly less annoying" comments as well. However, he also escalates the negative ones. "There was a hope," Gross said, later suggesting that there's no "hope" anymore. But there's one even bigger shocker from Gross:
After a certain point in the early ’90s, people gave up on trying to connect to the real world.
WTF!? Did Gross say that string phenomenology hasn't existed "from the early 1990s"?

Maybe you completely stopped doing and even following string phenomenology but that's just too bad. The progress has been substantial. Take just one subdirection that was basically born in the last decade: Vafa's and pals' "F-theory with localized physics on singularities" models of particle physics. Let me pick e.g. the two Vafa-Heckman-Beasley 2008 papers as the starting point, with 350 and 400 followups, respectively. Recent advances derive quite a few interesting flavor structures and similar things – e.g. neutrino oscillations including the recently observed nonzero \(\theta_{13}\) angle – out of F-theory.

And this is just a fraction of string phenomenology. One could spend hours describing how heterotic or M-theory phenomenology advanced, e.g. since 2000. Did you really say that "people gave up on trying to connect to the real world" over 20 years ago, David? It sounds absolutely incredible to me. Maybe, while you were criticizing Joe and others for their inclinations towards the anthropic principle and "giving up", you accepted this defeatist stuff yourself, and maybe even more so than Joe did, at least when it comes to your thinking or not thinking about everything else.

Maybe you – and Dijkgraaf – now think that you may completely ignore physicists like Vafa because you're directors and he isn't. But he's successfully worked on many things including the connections of string theory to the experimental data which is arguably much more important than your administrative jobs. This quote of Gross' sheds new light e.g. on his exchanges with Gordon Kane. Indeed, Gross looks like a guy who stopped thinking about some matters – string phenomenology, in this case – more than 20 years ago but who still wants to keep the illusion that he's at least as good as the most important contemporary researchers in the discipline. Sorry but if that's approximately true, then Gross is behaving basically like Peter W*it. There may be wrong statements and guesses in many string phenomenology papers but they're doing very real work and the progress in the understanding of the predictions of those string compactifications has been highly non-trivial.

Vafa and Kane were just two names I mentioned. The whole M-theory on \(G_2\) manifolds phenomenology was only started around 2000 – by Witten and others. Is this whole research sub-industry also non-existent according to Gross? What about the braneworlds? Old large and warped dimensions? Detailed stringy models of inflation and cosmology in general? Most of the research on all these topics and others took place after the mid 1990s. Are you serious that people stopped thinking about the connections between strings and experiments?

But Robbert Dijkgraaf contributes to this production of toxic nonsense, too:
“We’ve been trying to aim for the successes of the past where we had a very simple equation that captured everything,” said Robbert Dijkgraaf, the director of the Institute for Advanced Study in Princeton, New Jersey. “But now we have this big mess.”
Speak for yourself, Robbert. The fundamental laws of a theory of everything – of string theory – may be given by a "very simple equation". And I've been attracted to this possibility, too, especially as a small kid. But as an adult, I've never believed it was realistic – and I am confident that the same holds for most of the currently active string theorists. In practice, the equations we had to use to study QFT or string theory were "never too simple". Well, when I liked string field theory and didn't appreciate its limited, perturbative character, I liked the equations of motion of the background-independent version of string field theory,

\[ A * A = 0. \]

That was a very attractive equation. The string field \(A\) acquires a vacuum condensate \(A_0\), i.e. \(A=A_0+a\), composed of some "nearly infinitesimal strings", such that \(A_0 * \Phi\) or an (anti)commutator is related to \(Q \Phi\) acting on a string and encodes the BRST operator \(Q\). The terms \(Qa\) impose the BRST-closedness of the string states. The equation above also contains the residual term \(a*a\) which is responsible for interactions. The part \(A_0*A_0=0\) of the equation is equivalent to the condition for the nilpotency of the BRST operator, \(Q^2=0\).
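To spell out the expansion just described (a small reconstruction using the dictionary above, nothing beyond what is already stated): substituting \(A = A_0 + a\) into the equation of motion gives

\[ 0 = A * A = \underbrace{A_0 * A_0}_{\leftrightarrow\; Q^2 = 0} + \underbrace{A_0 * a + a * A_0}_{\leftrightarrow\; Qa} + a * a, \]

so the condensate piece reproduces \(Q^2=0\) while the fluctuation obeys \(Qa + a*a = 0\): the linearized part is the BRST-closedness condition and the quadratic part generates the interactions.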

It's fun and (at least open, perturbative) string theory may be derived from this starting point. At the same moment, this starting point doesn't seem to allow us to calculate effects in string theory beyond perturbative expansions – at least, it doesn't seem more potent in this way than other perturbative approaches to string theory.

OK, I want to say that a vast majority of what string theorists have been doing since the very beginning of quantitative string theory, in 1968, had nothing to do with \(A*A=0\) or similar "very simple equations". Maybe Robbert Dijkgraaf was obsessed with this idea of "very simple equations" when he began to study things like that but I never was and I think that most string theorists haven't. Already when I was a kid, it was rather clear to me that one needs to deal with some "rather difficult equations" if he wants to address the most fundamental laws of physics. "Próstoj prostóty něbudět," ("There won't be any simple simplicity any longer") was a favorite quote I picked from Lev Okun's popular book on particle physics when I was 16 or so and since that moment, I was almost never believing something different.

There's still some kind of amazing "conceptual simplicity" in string theory but it's not a simplicity of the type that a very short equation could completely define everything about physics that we need and would be comprehensible to the people with some basic math training. A very simple equation like that could finally be found but the advances in string theory have never led to any significant evidence that a "very simple equation" like that should be behind everything. At least so far.

Nothing has changed about these basic qualitative matters since 1968. So Dijkgraaf's claim – that string theorists have been doing research by looking for some "very simple equation" and only recently found reasons that this is silly – is simply a lie. This "very simple research" was never any substantial part of the string theory research and nothing has changed about these general matters in recent years or decades.

And what about the words "big mess"? What do you exactly count as parts of the "big mess"? Are the rules about the Calabi-Yau compactifications of heterotic string theory a part of the "big mess"? What about matrix string theory? AdS/CFT and its portions? Sorry, I would never use the term "big mess" for these concepts and dozens or hundreds of others. They're just parts of the paramount knowledge that was uncovered in recent decades.

Maybe, Robbert, you fell so much in love with your well-paid job of the director that you now consider the people in the IAS and elsewhere doing serious research to be inferior dirty animals that should be spitted upon. If that's the case, they should work hard to remove you. Or do it like the tribe in Papua-New Guinea.

To make things worse:
Its tentacles have reached so deeply into so many areas in theoretical physics, it’s become almost unrecognizable, even to string theorists. “Things have gotten almost postmodern,” said Dijkgraaf, who is a painter as well as mathematical physicist.
"Tentacles" don't exactly sound beautiful or friendly – well, they're still friendlier than when someone calls all these insights "tumors". But the claim that string theory has become "unrecognizable to string theorists" is just rubbish, too. Applications of string theory in some other disciplines – e.g. in condensed matter physics – may be hard for a pure string theorist. But that's because these applications are not just string theory. They are either "modified string theory" or "string theory mixed with other topics" etc.

Nothing has become "less recognizable" let alone "postmodern" about pure string theory itself. It's a theory including all physics that may be continuously connected to the general perturbative formula for S-matrix amplitudes that uses a conformal-invariant, modular-invariant theory on a two-dimensional world sheet. Period. The actual "idea" about the set of all these possible phenomena remains clear enough. There are six maximally decompactified vacua of string theory and a large number of compactified solutions that increases with the number of compactified dimensions.

The number of all such solutions and even of the elements of some subsets may be very large but there is nothing "postmodern" about large numbers. Mathematics works for small numbers as well as large numbers. Postmodernism never works. These – the richness of a space of solutions and postmodernism – are completely different concepts.

Now, boundaries between string and non-string theory.
“It’s hard to say really where you should draw the boundary around and say: This is string theory; this is not string theory,” said Douglas Stanford, a physicist at the IAS. “Nobody knows whether to say they’re a string theorist anymore,” said Chris Beem, a mathematical physicist at the University of Oxford. “It’s become very confusing.”
One interpretation of Beem's words is a worrisome one: He is a rat who wants to maximally lick the rectums of the powerful ones and because dishonest and generally f*cked-up string theory bashers became omnipresent and powerful, he is tempted to lick their rectums as well. So he may want to say he isn't a string theorist.

But even with the more innocent interpretation of the paragraph above, it's mostly nonsense. Just look at the list of Chris Beem's particular papers. It's very clear that he is mostly a quantum field theorist. Even though he co-authored papers with many string theorists – I know many of his co-authors – it isn't even clear from the papers whether all the authors ever received a basic education in the subject.

It's not clear whether Chris Beem is a string theorist but it's not because string theory is ill-defined. It's because it's not clear what Chris Beem is actually interested in, what he knows, and what he works on.

There is work on the boundary of "being pure string theory" and "having no string theory at all". But there's nothing "pathological" about it. The situation is completely analogous to the questions in the 1930s whether some papers and physicists' work were on quantum mechanics as understood in the 1920s, or quantum field theory. Well, quantum field theory is just a more complete, specific, sophisticated layer of knowledge built on top of quantum mechanics – just like string theory is a more complete, specific, sophisticated layer of knowledge built on top of quantum field theory.

In the late 1920s and the 1930s, people would start to study many issues such as the corrections to magnetic moments, hydrogen energy levels from the Lamb shift (virtual photons) etc. They could have "complained" in exactly the same way: We don't know whether we're working on quantum mechanics or quantum field theory. Well, both. It's clear that you can get far enough if you think of your research as some "cleverly directed" research on some heuristic generalization of the old quantum mechanics. But you may also view it as a more rigorous derivation from the newer, more complete theory. Once the more complete, newer theory is sufficiently understood, people who understand it know exactly what they're doing. Some people don't know it as well.

Exactly the analogous statements may be made about the research on topics where the "usual QFT methods aren't enough" yet the goals look more QFT-like and less string-like than the goals of the most "purely stringy" papers. So why the hell are you trying to paint all trivial things negatively? There are many papers that have to employ many insights and many methods from various "subfields". And they often need to know many of these things just superficially. What's wrong about it? It's unsurprising that such papers can't be unambiguously categorized. Examples like that exist in (and in between) most disciplines of science.

What's actually wrong is that the number of people who do full-fledged string research has been reduced. I think that it has been reduced partly if not mostly due to the sentiment I previously attributed to Chris Beem – many people want to lick the aßes of the string-bashing scum that has penetrated many environments surrounding the research institutions. And the string-theory-bashing scum has tangibly reduced the funding etc.

David Simmons-Duffin, Eva Silverstein, and Juan Maldacena didn't say anything that could be interpreted as string-bashing in isolation. They explain that much of string theory is about interpolations of known theories or results; string theory has impact on cosmology and other fields; we don't know the role of the landscape in Nature around us (Maldacena also defines string theory as "solid theoretical research on natural geometric structures"). Nevertheless, K.C. Cole made their statements look like a part of the same string-bashing story.

There are lots of quotes and assertions in the article that are borderline – and, much less often, completely correct – but their "emotional presentation" is always bizarre in some way. But there are many additional statements that aren't right:
Toy models are standard tools in most kinds of research. But there’s always the fear that what one learns from a simplified scenario does not apply to the real world. “It’s a bit of a deal with the devil,” Beem said. “String theory is a much less rigorously constructed set of ideas than quantum field theory, so you have to be willing to relax your standards a bit,” he said. “But you’re rewarded for that. It gives you a nice, bigger context in which to work.”
Why does Mr Beem think that "string theory is a much less rigorously constructed set of ideas than QFT"? It's an atlas composed of "patches" that are as rigorously constructed as QFTs – because the patches are QFTs. So perturbative string theory is all about a proper analysis of two-dimensional conformal field theory. Everything about perturbative string theory is encoded in this subset of QFTs. Similarly, Matrix theory allows us to fully define the physics of string/M-theory using some "world volume" QFTs, and AdS/CFT allows us to define the gravitational physics in an AdS bulk using a boundary CFT which is, once again, exactly as rigorous as a QFT because it is a QFT. (Later in Cole's text, Dijkgraaf mentions that the right "picture" for a string theory could be an atlas, too.)

So what the hell is Beem talking about? And additional aßholes are being added to the article:
“Answering deep questions about quantum gravity has not really happened,” [Sean Carroll] said.
What? Does he say such a thing after the black hole thermodynamics was microscopically understood, not to mention lots of insights about topology change, mirror symmetry, tachyons on orbifold fixed points etc., and even after the people found the equivalence between quantum entanglement and non-traversable wormholes and many other things? Nothing of this kind has happened?

At the end, Nima Arkani-Hamed says:
If you’re excited about responsibly attacking the very biggest existential physics questions ever, then you should be excited. But if you want a ticket to Stockholm for sure in the next 15 years, then probably not.
I would agree with both sentences including the last one because it contains the word "probably". This prize is far more experimentally oriented and of course, many pieces of work (with lasers and other things) that are vastly less important than those in stringy and string-like theoretical physics have already been awarded the Nobel prize. The Nobel prizes still look credible enough to me but for over 25 years I haven't been the child who parrots clichés that "it's great to get one". It's simply not a goal of a mature physicist. On the other hand, I am not really certain that no one will get a Nobel prize for string theory in the following decade or two.

But I think it's no coincidence that just like the title, the last sentence of Cole's article is negative about string theory. Negativity about string theory is really "her main story". Too bad that numerous well-known people join this propaganda as if they were either deluded cranks or opportunity-seeking rats.

by Luboš Motl (noreply@blogger.com) at September 16, 2016 11:48 AM

John Baez - Azimuth

The Circular Electron Positron Collider

Chen-Ning Yang is perhaps China’s most famous particle physicist. Together with Tsung-Dao Lee, he won the Nobel prize in 1957 for discovering that the laws of physics know the difference between left and right. He helped create Yang–Mills theory: the theory that describes all the forces in nature except gravity. He helped find the Yang–Baxter equation, which describes what particles do when they move around on a thin sheet of matter, tracing out braids.

Right now the world of particle physics is in a shocked, somewhat demoralized state because the Large Hadron Collider has not yet found any physics beyond the Standard Model. Some Chinese scientists want to forge ahead by building an even more powerful, even more expensive accelerator.

But Yang recently came out against this. This is a big deal, because he is very prestigious, and only China has the will to pay for the next machine. The director of the Chinese institute that wants to build the next machine, Wang Yifang, issued a point-by-point rebuttal of Yang the very next day.

Over on G+, Willie Wong translated some of Wang’s rebuttal in some comments to my post on this subject. The real goal of my post here is to make this translation a bit easier to find—not because I agree with Wang, but because this discussion is important: it affects the future of particle physics.

First let me set the stage. In 2012, two months after the Large Hadron Collider found the Higgs boson, the Institute of High Energy Physics proposed a bigger machine: the Circular Electron Positron Collider, or CEPC.

This machine would be a ring 100 kilometers around. It would collide electrons and positrons at an energy of 250 GeV, about twice what you need to make a Higgs. It could make lots of Higgs bosons and study their properties. It might find something new, too! Of course that would be the hope.

It would cost $6 billion, and the plan was that China would pay for 70% of it. Nobody knows who would pay for the rest.

According to Science:

On 4 September, Yang, in an article posted on the social media platform WeChat, says that China should not build a supercollider now. He is concerned about the huge cost and says the money would be better spent on pressing societal needs. In addition, he does not believe the science justifies the cost: The LHC confirmed the existence of the Higgs boson, he notes, but it has not discovered new particles or inconsistencies in the standard model of particle physics. The prospect of an even bigger collider succeeding where the LHC has failed is “a guess on top of a guess,” he writes. Yang argues that high-energy physicists should eschew big accelerator projects for now and start blazing trails in new experimental and theoretical approaches.

That same day, IHEP’s director, Wang Yifang, posted a point-by-point rebuttal on the institute’s public WeChat account. He criticized Yang for rehashing arguments he had made in the 1970s against building the BEPC. “Thanks to comrade [Deng] Xiaoping,” who didn’t follow Yang’s advice, Wang wrote, “IHEP and the BEPC … have achieved so much today.” Wang also noted that the main task of the CEPC would not be to find new particles, but to carry out detailed studies of the Higgs boson.

Yang did not respond to request for comment. But some scientists contend that the thrust of his criticisms are against the CEPC’s anticipated upgrade, the Super Proton-Proton Collider (SPPC). “Yang’s objections are directed mostly at the SPPC,” says Li Miao, a cosmologist at Sun Yat-sen University, Guangzhou, in China, who says he is leaning toward supporting the CEPC. That’s because the cost Yang cites—$20 billion—is the estimated price tag of both the CEPC and the SPPC, Li says, and it is the SPPC that would endeavor to make discoveries beyond the standard model.

Still, opposition to the supercollider project is mounting outside the high-energy physics community. Cao Zexian, a researcher at CAS’s Institute of Physics here, contends that Chinese high-energy physicists lack the ability to steer or lead research in the field. China also lacks the industrial capacity for making advanced scientific instruments, he says, which means a supercollider would depend on foreign firms for critical components. Luo Huiqian, another researcher at the Institute of Physics, says that most big science projects in China have suffered from arbitrary cost cutting; as a result, the finished product is often a far cry from what was proposed. He doubts that the proposed CEPC would be built to specifications.

The state news agency Xinhua has lauded the debate as “progress in Chinese science” that will make big science decision-making “more transparent.” Some, however, see a call for transparency as a bad omen for the CEPC. “It means the collider may not receive the go-ahead in the near future,” asserts Institute of Physics researcher Wu Baojun. Wang acknowledged that possibility in a 7 September interview with Caijing magazine: “opposing voices naturally have an impact on future approval of the project,” he said.

Willie Wong prefaced his translation of Wang’s rebuttal with this:

Here is a translation of the essential parts of the rebuttal; some standard Chinese language disclaimers of deference etc are omitted. I tried to make the translation as true to the original as possible; the viewpoints expressed are not my own.

Here is the translation:

Today (September 4), the article by CN Yang titled “China should not build an SSC today” was published. As a scientist who works on the front line of high energy physics and the current director of the high energy physics institute in the Chinese Academy of Sciences, I cannot agree with his viewpoint.

(A) The first reason to Dr. Yang’s objection is that a supercollider is a bottomless hole. His objection stemmed from the American SSC wasting 3 billion US dollars and amounted to naught. The LHC cost over 10 billion US dollars. Thus the proposed Chinese accelerator cannot cost less than 20 billion US dollars, with no guaranteed returns. [Ed: emphasis original]

Here, there are actually three problems. The first is “why did SSC fail”? The second is “how much would a Chinese collider cost?” And the third is “is the estimate reasonable and realistic?” Here I address them point by point.

(1) Why did the American SSC fail? Are all colliders bottomless pits?

The many reasons leading to the failure of the American SSC include the government deficit at the time, the fight for funding against the International Space Station, the party politics of the United States, the regional competition between Texas and other states. Additionally there are problems with poor management, bad budgeting, ballooning construction costs, failure to secure international collaboration. See references [2,3] [Ed: consult original article for references; items 1-3 are English language]. In reality, “exceeding the budget” is definitely not the primary reason for the failure of the SSC; rather, the failure should be attributed to some special and circumstantial reasons, caused mainly by political elements.

For the US, abandoning the SSC was a very incorrect decision. It lost the US the chance of discovering the Higgs boson, as well as the foundations and opportunities for future development, and thereby also the leadership position that the US had occupied internationally in high energy physics until then. This definitely had a very negative impact on big science initiatives in the US, and caused one generation of Americans to lose the courage to dream. The reasons given by the American scientific community against the SSC are very similar to what we hear today against the Chinese collider project. But actually the cancellation of the SSC did not increase funding to other scientific endeavors. Of course, going ahead with the SSC would not have reduced the funding to other scientific endeavors either, and many people who objected to the project are now regretting it.

Since then, the LHC was constructed in Europe, and achieved great success. Its construction exceeded its original budget, but not by a lot. This shows that supercollider projects do not have to be bottomless pits, and have a chance to succeed.

The Chinese political landscape is entirely different from that of the US. In particular, for large scale constructions, the political system is superior. China has already accomplished to date many tasks which the Americans would not, or could not do; many more will happen in the future. The failure of SSC doesn’t mean that we cannot do it. We should scientifically analyze the situation, and at the same time foster international collaboration, and properly manage the budget.

(2) How much would it cost? Our planned collider (using a circumference of 100 kilometers for computations) will proceed in two steps. [Ed: details omitted. The author estimated that the electron-positron collider will cost 40 billion Yuan, followed by the proton-proton collider which will cost 100 billion Yuan, not accounting for inflation, with an approximately 10-year construction time for each phase.] The two-phase planning is meant to showcase the scientific longevity of the project, especially the entrainment of other technical developments (e.g. high energy superconductors), and the fact that the second phase [ed: the proton-proton collider] is complementary to the scientific and technical developments of the first phase. The reason that the second-phase designs are incorporated in the discussion is to prevent the scenario where design elements of the first phase inadvertently shut off the possibility of further expansion in the second phase.

(3) Is this estimate realistic? Are we going to go down the same road as the American SSC?

First, note that in the past 50 years, there were many successful colliders internationally (LEP, LHC, PEPII, KEKB/SuperKEKB, etc.) and many unsuccessful ones (ISABELLE, SSC, FAIR, etc.). The failed ones are all proton accelerators. All electron colliders have been successful. The main reason is that proton accelerators are more complicated, and it is harder to correctly estimate the costs related to constructing machines beyond the current frontiers.

There are many successful large-scale constructions in China. In the 40 years since the founding of the high energy physics institute, we’ve built [list of high energy experiment facilities, I don’t know all their names in English], each costing over 100 million Yuan, and none was more than 5% over budget in terms of actual construction cost, time to completion, or milestones met. We have well-developed expertise in budgeting, construction, and management.

For the CEPC (electron-positron collider) our estimates relied on two methods:

(i) Summing of the parts: separately estimating costs of individual elements and adding them up.

(ii) Comparisons: using costs for elements derived from costs of completed instruments both domestically and abroad.

At the level of the total cost and at the systems level, the two methods should produce cost estimates within 20% of each other.

After completing the initial design [ref. 1], we produced a list of more than 1000 pieces of required equipment and based our estimates on that list. The estimates were reviewed by domestic and international experts.

For the SPPC (the proton-proton collider; the second phase) we only used the second method (comparison). This is because the second phase is not the mission at hand, and we are not yet sure whether we should commit to it, so it is not very meaningful to discuss its cost in detail right now. We are committed to building the SPPC only once we are sure that the science and the technology are mature.

(B) The second reason given by Dr. Yang is that China is still a developing country, and that there are many socio-economic problems that should be solved before considering a supercollider.

Any country, especially one as big as China, must consider both the immediate and the long term in its planning. Of course socio-economic problems need to be solved, and indeed solving them currently takes the lion's share of our national budget. But we also need to consider the long term, including an appropriate level of expenditure on basic research, to enable our continued development and our potential to lead the world. China at the end of the Qing dynasty had a rich populace and the world's highest GDP, yet even though the government could afford to purchase armaments, its lack of scientific understanding left the country forever on the losing side of wars.

In the past few hundred years, advances in understanding the structure of matter, from molecules and atoms to the nucleus and the elementary particles, have contributed to and led the scientific developments of their eras. High energy physics pursues the finest structure of matter and its laws, and the techniques it uses span many fields: accelerators, detectors, cryogenics, superconductivity, microwaves, high-frequency and vacuum technology, electronics, high-precision instrumentation, automatic controls, computer science and networking. In many ways it has driven developments in those fields and their broad adoption; it is an indicator field for basic science and technical development. Building the supercollider can put China in a leadership position across these diverse scientific fields for several decades, and also lead to the domestic production of many important scientific and technical instruments. Furthermore, it will allow us to attract international intellectual capital and to train thousands of world-leading specialists at our institutes. How is this not an urgent need for the country?

In fact, the impression that the Chinese government and the average Chinese person currently give the world at large is of a populace with lots of money, and one infatuated with money. It is hard for a large country to have an international voice and influence without making a significant contribution to human culture, and that influence in turn affects the benefits China receives from other countries. As a fraction of GDP, the cost of the proposed project (including the phase-2 SPPC) does not exceed that of the Beijing Electron-Positron Collider completed in the 1980s, and is in fact lower than those of LEP, the LHC, the SSC, and the ILC.

Designing and starting the construction of the next supercollider within the next 5 years is a rare opportunity to achieve a leadership position internationally in the field of high energy physics. The newly discovered Higgs boson has a relatively low mass, which allows us to probe it further using a circular electron-positron collider, and such a collider can later be converted into a proton collider; the facility would have over 5 decades of scientific use. Furthermore, Europe, the US, and Japan all already have items on their scientific agendas and probably cannot construct a similar facility within 20 years, which gives us a competitive advantage. Thirdly, we already have the experience of building the Beijing Electron-Positron Collider, so such a facility plays to our strengths. The window of opportunity typically lasts only about 10 years; if we miss it, we do not know when the next one will come. Moreover, we have extensive experience in underground construction, and the Chinese economy is currently at a stage of high growth. We have the ability to do the construction and the scientific need for it, so a supercollider is a very suitable project to consider.

(C) The third reason given by Dr. Yang is that constructing a supercollider would necessarily crowd out funding for other basic sciences.

China currently spends 5% of its R&D budget on basic research; internationally, 15% is more typical for developed countries. As a developing country aiming to join the ranks of the developed countries, and as a large country, I believe we should aim to raise this ratio gradually to 10% and eventually to 15%. In terms of numbers, funding for basic science has a large potential for growth (around 100 billion yuan per year) without taking anything away from existing basic science research.

On the other hand, where should the increased funding be directed? Everyone knows that a large portion of our basic research budget is spent on purchasing scientific instruments, especially from abroad. If we distribute the increased funding evenly among all fields of basic science, the end result is mainly to raise the GDP of the US, Europe, and Japan. If we instead spend 10 years putting 30 billion yuan into accelerator science, more than 90% of the money will remain in the country, improving our technical capabilities and the market share of domestic companies. This will also allow us to train many new scientists and engineers, and to greatly advance the state of the art in domestically produced scientific instruments.

In addition, putting emphasis on high energy physics will only bring us up to the normal international funding level (it is a fact that particle physics and nuclear physics are severely underfunded in China). For the purpose of developing a world-leading big science project, the CEPC is a very good candidate, and it does not conflict with the desire to also develop other basic sciences.

(D) Dr. Yang's fourth objection is that neither supersymmetry nor quantum gravity has been verified, and that the particles we hope to discover with the new collider will turn out not to exist.

That is of course not the goal of collider science. In [ref 1], which I gave to Dr. Yang myself, we clearly discussed the scientific purpose of the instrument. Briefly speaking, the standard model is only an effective theory in the low energy limit, and a new and deeper theory is needed. Even though there is some experimental evidence beyond the standard model, more data will be needed to indicate the correct direction in which to develop the theory. Of the known problems with the standard model, most are related to the Higgs boson, so a deeper physical theory should leave hints in a better understanding of the Higgs boson. The CEPC can probe the Higgs boson to 1% precision [ed. I am not sure what this means], 10 times better than the LHC. From this we have the hope of correctly identifying various properties of the Higgs boson, and of testing whether it in fact matches the standard model. At the same time, the CEPC has the possibility of measuring the self-coupling of the Higgs boson and of understanding the Higgs contribution to the vacuum phase transition, which is important for understanding the early universe. [Ed. in the previous sentence the translation is a bit questionable, since some HEP jargon is used with which I am not familiar.] Therefore, regardless of whether the LHC has discovered new physics, the CEPC is necessary.

If there are new coupling mechanisms for the Higgs, new associated particles, a composite structure for the Higgs boson, or other deviations from the standard model, we can continue with the second phase, the proton-proton collider, to probe the differences directly. These could be due to supersymmetry, but they could also be due to other particles. For us experimentalists, while we care about theoretical predictions, our experiments are not designed only for them. Predicting at this moment in time whether a collider can or cannot discover a given hypothetical particle seems premature, and is not the viewpoint of the HEP community in general.

(E) The fifth objection is that in the past 70 years high energy physics has not led to tangible benefits for humanity, and likely will not in the future.

In the past 70 years there have been many results from high energy physics that have led to techniques now common in everyday life. [Ed: the list of examples includes synchrotron radiation, free electron lasers, spallation neutron sources, MRI, PET, radiation therapy, touch screens, smart phones, and the world-wide web. I omit the prose.]

[Ed. Author proceeds to discuss hypothetical economic benefits from
a) superconductor science
b) microwave source
c) cryogenics
d) electronics
sort of the usual stuff you see in funding proposals.]

(F) The sixth reason was that the Institute of High Energy Physics of the Chinese Academy of Sciences has not produced much in the past 30 years, that the major scientific contributions at the proposed collider will be directed by non-Chinese scientists, and that the Nobel prize will therefore also go to a non-Chinese.

[Ed. I'll skip this section because it is a self-congratulatory pat on one's own back (we actually did pretty well for the amount of money invested), a promise to promote Chinese participation in the project (in accordance with the economic investment), and the obligatory comment that “we do science for the sake of science, and not for winning the Nobel.”]

(G) The seventh reason is that the future of HEP lies in developing new techniques to accelerate particles and in developing a geometric theory, not in building large accelerators.

New methods of accelerating particles are definitely an important aspect of accelerator science. In the next several decades they may prove useful for scattering experiments or for applied fields where beam confinement is not essential. For high energy colliders, however, in terms of beam emittance and energy efficiency, the new acceleration principles still have a long way to go, and in the meantime high energy physics cannot simply be put on hold. As for "geometric theory" or "string theory", these are too far from being experimentally approachable, and are not problems we can address at present.

People disagree on the future of high energy physics. Currently there are no Chinese winners of the Nobel prize in physics, but there are many internationally, and Dr. Yang's viewpoint is clearly outside the mainstream, not just now but for the past several decades. Dr. Yang is documented as having held a pessimistic view of high energy physics and its future since the 1960s, and that is how he missed out on the discovery of the standard model. He has been on record as opposing Chinese collider projects since the 1970s. It is fortunate that the government supported the Institute of High Energy Physics and constructed various supporting facilities, leading to our current achievements in synchrotron radiation and neutron scattering. For the future, we should listen to the younger scientists at the forefront of current research, for that is how our scientific research will gain international recognition.

It will be very interesting to see how this plays out.


by John Baez at September 16, 2016 01:00 AM

September 15, 2016

Clifford V. Johnson - Asymptotia

Find Your Fair…

This might be one of my favourite sequences from the book so far*. (Click for larger view.) It's a significant part of a page so I've watermarked it heavily. Sorry about that: It's days of work to make these things. It is based on a location scouting trip I did last year around this time at the LA County Fair. So consider this a public service announcement if you've not yet done a Summer visit this year to a county fair near you... Go! They're an excellent old-school kind of fun.

There's a whole sequence in the book that involves such a visit and I've just [...] Click to continue reading this post

The post Find Your Fair… appeared first on Asymptotia.

by Clifford at September 15, 2016 07:37 PM

The n-Category Cafe

Disaster at Leicester

You’ve probably met mathematicians at the University of Leicester, or read their work, or attended their talks, or been to events they’ve organized. Their pure group includes at least four people working in categorical areas: Frank Neumann, Simona Paoli, Teimuraz Pirashvili and Andy Tonks.

Now this department is under severe threat. A colleague of mine writes:

24 members of the Department of Mathematics at the University of Leicester — the great majority of the members of the department — have been informed that their post is at risk of redundancy, and will have to reapply for their positions by the end of September. Only 18 of those applying will be re-appointed (and some of those have been changed to purely teaching positions).

It’s not only mathematics at stake. The university is apparently engaged in a process of “institutional transformation”, involving:

the closure of departments, subject areas and courses, including the Vaughan Centre for Lifelong Learning and the university bookshop. Hundreds of academic, academic-related and support staff are to be made redundant, many of them compulsorily.

If you don’t like this, sign the petition objecting! You’ll see lots of familiar names already on the list (Tim Gowers, John Baez, Ross Street, …). As signatory David Pritchard wrote, “successful departments and universities are hard to build and easy to destroy.”

by leinster (Tom.Leinster@ed.ac.uk) at September 15, 2016 03:56 PM

September 13, 2016

Jester - Resonaances

Next stop: tth
This was a summer of brutally dashed hopes for a quick discovery of the many fundamental particles that we had been imagining. For the time being we need to focus on the ones that actually exist, such as the Higgs boson. In the Run-1 of the LHC, the existence and identity of the Higgs were firmly established, while its mass and basic properties were measured. The signal was observed with large significance in 4 different decay channels (γγ, ZZ*, WW*, ττ), and two different production modes (gluon fusion, vector-boson fusion) have been isolated. Still, there remain many fine details to sort out. The realistic goal for the Run-2 is to pinpoint the following Higgs processes:
  • (h→bb): Decays to b-quarks.
  • (Vh): Associated production with W or Z boson. 
  • (tth): Associated production with top quarks. 

It seems that the last objective may be achieved more quickly than expected. The tth production process is very interesting theoretically, because its rate is proportional to the square of the Yukawa coupling between the Higgs boson and top quarks. Within the Standard Model, the value of this parameter is known to good accuracy, as it is related to the mass of the top quark. But that relation can be disrupted in models beyond the Standard Model, with the two-Higgs-doublet model and composite/little Higgs models serving as prominent examples. Thus, measurements of the top Yukawa coupling will provide a crucial piece of information about new physics.

In the Run-1, a not-so-small signal of tth production was observed by the ATLAS and CMS collaborations in several channels. Assuming that Higgs decays have the same branching fraction as in the Standard Model, the tth signal strength normalized to the Standard Model prediction was estimated as

At face value, strong evidence for tth production was obtained already in the Run-1! This fact was not advertised by the collaborations because the measurement is not clean, due to the large number of top quarks produced by other processes at the LHC. The tth signal is thus a small blip on top of a huge background, and it is not excluded that some unaccounted-for systematic errors are skewing the measurements. The collaborations therefore preferred to play it safe and wait for more data to be collected.

In the Run-2, with 13 TeV collisions, the tth production cross section is 4 times larger than in the Run-1, so the new data are coming in at a fast pace. Both ATLAS and CMS presented their first Higgs results in early August, and the tth signal is only getting stronger. ATLAS showed their measurements in the γγ, WW/ττ, and bb final states of Higgs decay, as well as their combination:
Most channels display a signal-like excess, which is reflected by the Run-2 combination being 2.5 sigma away from zero. A similar picture is emerging in CMS, with 2-sigma signals in the γγ and WW/ττ channels. Naively combining all Run-1 and Run-2 results, one then finds
At face value, this is a discovery! Of course, this number should be treated with some caution because, due to large systematic errors, a naive Gaussian combination may not represent the true likelihood very well. Nevertheless, it indicates that, if all goes well, the discovery of the tth production mode should be officially announced in the near future, maybe even this year.
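For readers who want to see what such a naive combination involves, here is a minimal sketch in Python. The numbers are purely hypothetical placeholders, not the actual ATLAS/CMS signal strengths; the point is only that a naive Gaussian combination amounts to an inverse-variance weighted average, which treats all inputs as independent Gaussian measurements.

    import math

    def combine(measurements):
        """Inverse-variance (naive Gaussian) combination of (mu, sigma) pairs."""
        weights = [1.0 / sigma**2 for _, sigma in measurements]
        mu_comb = sum(w * mu for (mu, _), w in zip(measurements, weights)) / sum(weights)
        sigma_comb = math.sqrt(1.0 / sum(weights))
        return mu_comb, sigma_comb

    # Hypothetical (mu, sigma) values, for illustration only.
    measurements = [(2.0, 0.8), (1.5, 0.9), (2.5, 1.1)]
    mu, sigma = combine(measurements)
    print(f"combined mu = {mu:.2f} +- {sigma:.2f}, i.e. {mu/sigma:.1f} sigma away from zero")

The independence assumption is exactly what correlated systematic errors can spoil, which is why the official combinations are done with the full likelihoods rather than with this shortcut.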

Should we get excited that the measured tth rate is significantly larger than the Standard Model one? Assuming that the current central value remains, it would mean that the top Yukawa coupling is 40% larger than that predicted by the Standard Model. This is not impossible, but very unlikely in practice. The reason is that the top Yukawa coupling also controls the gluon fusion - the main Higgs production channel at the LHC - whose rate is measured to be in perfect agreement with the Standard Model. Therefore, a realistic model that explains the large tth rate would also have to provide negative contributions to the gluon fusion amplitude, so as to cancel the effect of the large top Yukawa coupling. It is possible to engineer such a cancellation in concrete models, but I'm not aware of any construction where this conspiracy arises in a natural way. Most likely, the currently observed excess is a statistical fluctuation (possibly in combination with underestimated theoretical and/or experimental errors), and the central value will drift toward μ=1 as more data is collected.
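To see why the gluon-fusion constraint bites, recall that in the Standard Model the gluon-fusion amplitude is dominated by the top-quark loop, so its rate scales roughly as the square of the top Yukawa coupling (a rough approximation, ignoring the small bottom-loop and interference effects). A 40% larger coupling would then give, roughly,\[
\frac{\sigma_{ggF}}{\sigma_{ggF}^{\rm SM}} \approx \left(\frac{y_t}{y_t^{\rm SM}}\right)^2 \approx (1.4)^2 \approx 2,
\] that is, about a factor-of-two excess in the main Higgs production channel, which is not observed.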

by Jester (noreply@blogger.com) at September 13, 2016 07:26 PM

Jester - Resonaances

Weekend Plot: update on WIMPs
There's been a lot of discussion on this blog about the LHC not finding new physics. I should, however, do justice to other experiments that also don't find new physics, often in a spectacular way. One area where this is happening is direct detection of WIMP dark matter. This weekend plot summarizes the current limits on the spin-independent scattering cross-section of dark matter particles on nucleons:
For large WIMP masses, currently the most successful detection technology is to fill up a tank with a ton of liquid xenon and wait for a passing dark matter particle to knock into one of the nuclei. Recently we have had updates from two such experiments: LUX in the US and PandaX in China, whose limits now cut below zeptobarn cross sections (1 zb = 10^-9 pb = 10^-45 cm^2). These two experiments are currently going head-to-head, but PandaX, being larger, will ultimately overtake LUX. Soon, however, it will have to face a fierce new competitor, the XENON1T experiment, and the plot will have to be updated next year. Fortunately, we won't need to learn another prefix soon: once yoctobarn sensitivity is achieved by the experiments, we will hit the neutrino floor, the irreducible background from solar and atmospheric neutrinos (gray area at the bottom of the plot). This will make detecting a dark matter signal much more challenging, and will certainly slow down progress for WIMP masses larger than ~5 GeV. For lower masses the distance to the floor remains large. Xenon detectors lose their steam there, and another technology is needed, like the germanium detectors of CDMS and CDEX, or the CaWO4 crystals of CRESST. On this front, too, important progress is expected soon.
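For anyone dizzied by the prefixes, a few lines of Python (purely illustrative) reproduce the conversions quoted above, using 1 barn = 10^-24 cm^2:

    # Cross-section unit conversions: prefixes of the barn expressed in cm^2 and pb.
    BARN_IN_CM2 = 1e-24
    prefixes = {"pb": 1e-12, "fb": 1e-15, "zb": 1e-21, "yb": 1e-24}

    for name, factor in prefixes.items():
        print(f"1 {name} = {factor * BARN_IN_CM2:.0e} cm^2 = {factor / prefixes['pb']:.0e} pb")

So a zeptobarn is indeed 10^-45 cm^2, and the yoctobarn sensitivity mentioned above corresponds to 10^-48 cm^2.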

What does the theory say about when we will find dark matter? It is perfectly viable that the discovery is waiting for us just around the corner, in the remaining space above the neutrino floor, but currently there are no strong theoretical hints in favor of that possibility. Dark matter experiments usually advertise that they're just beginning to explore the interesting parameter space predicted by theory models. This is not quite correct. If the WIMP were true to its name, that is to say if it interacted via the weak force (meaning, coupled to the Z with order-1 strength), it would have an order 10 fb scattering cross section on neutrons. Unfortunately, that natural possibility was excluded in the previous century. Years of experimental progress have shown that WIMPs, if they exist, must interact super-weakly with matter. For example, for a 100 GeV fermionic dark matter particle with a vector coupling g to the Z boson, the current limits imply g ≲ 10^-4. The coupling can be larger if the Higgs boson is the mediator of interactions between the dark and visible worlds, as the Higgs already couples very weakly to nucleons. This construction is, arguably, the most plausible one currently probed by direct detection experiments. For a scalar dark matter particle X with mass 0.1-1 TeV coupled to the Higgs via the interaction λ v h |X|^2, the experiments are currently probing the coupling λ in the 0.01-1 ballpark. In general, there is no theoretical lower limit on the dark matter coupling to nucleons. Nevertheless, the weak coupling implied by direct detection limits creates some tension with the thermal production paradigm, which requires a weak (that is, order picobarn) annihilation cross section for dark matter particles. This tension needs to be resolved by more complicated model building, e.g. by arranging for resonant annihilation or for co-annihilation.

by Jester (noreply@blogger.com) at September 13, 2016 07:24 PM

Symmetrybreaking - Fermilab/SLAC

The hunt for the truest north

Many theories predict the existence of magnetic monopoles, but experiments have yet to see them.

If you chop a magnet in half, you end up with two smaller magnets. Both the original and the new magnets have “north” and “south” poles. 

But what if single north and south poles exist, just like positive and negative electric charges? These hypothetical beasts, known as “magnetic monopoles,” are an important prediction in several theories. 

Like an electron, a magnetic monopole would be a fundamental particle. Nobody has seen one yet, but many—maybe even most—physicists would say monopoles probably exist.

“The electric and magnetic forces are exactly the same force,” says Wendy Taylor of Canada’s York University. “Everything would be totally symmetric if there existed a magnetic monopole. There is a strong motivation by the beauty of the symmetry to expect that this particle exists.”

 

Illustration by Sandbox Studio, Chicago with Corinne Mucha

Dirac to the future

Combining the work of many others, nineteenth-century physicist James Clerk Maxwell showed that electricity and magnetism were two aspects of a single thing: the electromagnetic interaction. 

But in Maxwell’s equations, the electric and magnetic forces weren’t quite the same. The electrical force had individual positive and negative charges. The magnetic force didn’t. Without single poles—monopoles—Maxwell’s theory looked asymmetrical, which bugged him. Maxwell thought and wrote a lot about the problem of the missing magnetic charge, but he left it out of the final version of his equations.

Quantum pioneer Paul Dirac picked up the monopole mantle in the early 20th century. By Dirac’s time, physicists had discovered electrons and determined they were indivisible particles, carrying a fundamental unit of electric charge. 

Dirac calculated the behavior of an electron in the magnetic field of a monopole. He used the rules of quantum physics, which say an electron or any particle also behaves like a wave. For an electron sitting near another particle—including a monopole—those rules say the electron’s wave must go through one or more full cycles wrapping around the other particle. In other words, the wave must have at least one crest and one trough: no half crests or quarter-troughs.

For an electron in the presence of a proton, this quantum wave rule explains the colors of light emitted and absorbed by a hydrogen atom, which is made of one electron and one proton. But Dirac found the electron could only have the right wave behavior if the product of the monopole magnetic charge and the fundamental electric charge carried by an electron were a whole number. That means monopoles, like electrons, carry a fundamental, indivisible charge. Any other particle carrying the fundamental electric charge—protons, positrons, muons, and so forth—will follow the same rule.

Interestingly, the logic runs the other way too. Dirac’s result says if a single type of monopole exists, even if that type is very rare, it explains a very important property of matter: why electrically charged particles carry multiples of the fundamental electric charge. (Quarks carry a fraction—one-third or two-thirds—of the fundamental charge, but they always combine to make whole-number multiples of the same charge.) And if more than one type of monopole exists, it must carry a whole-number multiple of the fundamental magnetic charge.
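The "whole number" rule sketched above is usually written as the Dirac quantization condition. In Gaussian units it reads (a standard textbook statement, quoted here for reference)\[
e\,g = \frac{n\hbar c}{2}, \qquad n = 0, \pm 1, \pm 2, \ldots
\] where e is the electron's charge and g is the monopole's magnetic charge. The smallest non-zero solution corresponds to a magnetic charge of about 68.5 times the elementary electric charge, which is why even a single monopole anywhere in the universe would force electric charge to come in discrete units.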

 

Illustration by Sandbox Studio, Chicago with Corinne Mucha

The magnetic unicorn

Dirac’s discovery was really a plausibility argument: If monopoles existed, they would explain a lot, but nothing would crumble if they didn’t. 

Since Dirac’s day, many theories have made predictions about the properties of magnetic monopoles. Grand unified theories predict monopoles that would be over 10 quadrillion times more massive than protons. 

Producing such particles would require more energy than Earthly accelerators can reach, “but it’s the energy that was certainly available at the beginning of the universe,” says Laura Patrizii of the Italian National Institute for Nuclear Physics. 

Cosmic ray detectors around the world are looking for signs of these monopoles, which would still be around today, interacting with molecules in the air. The MACRO experiment at Gran Sasso in Italy also looked for primordial monopoles, and provided the best constraints we have at present. 

Luckily for scientists like Patrizii and Taylor, grand unified theories aren’t the only ones to predict monopoles. Other theories predict magnetic monopoles of lower masses that could feasibly be created in the Large Hadron Collider, and of course Dirac’s original model didn’t place any mass constraints on monopoles at all. That means physicists have to be open to discovering particles that aren’t part of any existing theory. 

Both of them look for monopoles created at the Large Hadron Collider, Patrizii using the MoEDAL detector and Taylor using ATLAS.

“I think personally there's lots of reasons to believe that monopoles are out there, and we just have to keep looking,” Taylor says. 

“Magnetic monopoles are probably my favorite particle. If we discovered the magnetic monopole, [the discovery would be] on the same scale as the Higgs particle.”

by Matthew R. Francis at September 13, 2016 04:31 PM

The n-Category Cafe

HoTT and Philosophy

I’m down in Bristol at a conference – HoTT and Philosophy. Slides for my talk – The modality of physical law in modal homotopy type theory – are here.

Perhaps ‘The modality of differential equations’ would have been more accurate as I’m looking to work through an analogy in modal type theory between necessity and the jet comonad, partial differential equations being the latter’s coalgebras.

The talk should provide some intuition for a pair of talks the following day:

  • Urs Schreiber & Felix Wellen: ‘Formalizing higher Cartan geometry in modal HoTT’
  • Felix Wellen: ‘Synthetic differential geometry in homotopy type theory via a modal operator’

I met up with Urs and Felix yesterday evening. Felix is coding up geometric constructions in Agda, such as frame bundles, using the modalities of differential cohesion.

by david (d.corfield@kent.ac.uk) at September 13, 2016 07:05 AM

The n-Category Cafe

Twitter

I’m now trying to announce all my new writings in one place: on Twitter.

Why? Well…

Someone I respect said he’s been following my online writings, off and on, ever since the old days of This Week’s Finds. He wishes it were easier to find my new stuff all in one place. Right now it’s spread out over several locations:

Azimuth: serious posts on environmental issues and applied mathematics, fairly serious popularizations of diverse scientific subjects.

Google+: short posts of all kinds, mainly light popularizations of math, physics, and astronomy.

The n-Category Café: posts on mathematics, leaning toward category theory and other forms of pure mathematics that seem too intimidating for the above forums.

Visual Insight: beautiful pictures of mathematical objects, together with explanations.

Diary: more personal stuff, and polished versions of the more interesting Google+ posts, just so I have them on my own website.

It’s absurd to expect anyone to look at all these locations to see what I’m writing. Even more absurdly, I claimed I was going to quit posting on Google+, but then didn’t. So, I’ll try to make it possible to reach everything via Twitter.

by john (baez@math.ucr.edu) at September 13, 2016 06:12 AM

September 12, 2016

Tommaso Dorigo - Scientificblogging

INFN Selections - A Last Batch Of Advices
Next Monday, the Italian city of Rome will swarm with about 700 young physicists. They will be there to participate in a selection of 58 INFN research scientists. In previous articles (see e.g.

read more

by Tommaso Dorigo at September 12, 2016 02:49 PM

September 11, 2016

Tommaso Dorigo - Scientificblogging

Statistics At A Physics Conference ?!
Particle physics conferences are a place where you can listen to many different topics - not just news about the latest precision tests of the standard model or searches for new particles at the energy frontier. If we exclude the very small, workshop-like events where people gather to focus on a very precise topic, all other events do allow for some contamination from reports on parallel fields of research. The reason is of course that there is significant cross-fertilization between these fields.

read more

by Tommaso Dorigo at September 11, 2016 01:44 PM

September 09, 2016

Symmetrybreaking - Fermilab/SLAC

A tale of two black holes

What can the surprisingly huge mass of the black holes detected by LIGO tell us about dark matter and the early universe?

The historic detection of gravitational waves announced earlier this year breathed new life into a theory that’s been around for decades: that black holes created in the first second of the universe might make up dark matter. It also inspired a new idea: that those so-called primordial black holes could be contributing to a diffuse background light.

The connection between these seemingly disparate areas of astronomy was tied together neatly in a theory from Alexander Kashlinsky, an astrophysicist at NASA’s Goddard Space Flight Center. And while it’s an unusual idea, as he says, it could be proven true in only a few years.

Mapping the glow

Kashlinsky’s focus has been on a residual infrared glow in the universe, the accumulated light of the earliest stars. Unfortunately, all the stars, galaxies and other bright objects in the sky—the known sources of light—oversaturate this diffuse glow. That means that Kashlinsky and his colleagues have to subtract them out of infrared images to find the light that’s left behind.

They’ve been doing precisely that since 2005, using data from the Spitzer space telescope to arrive at the residual infrared glow: the cosmic infrared background (CIB).

Other astronomers followed a similar process using Chandra X-ray Observatory data to map the cosmic X-ray background (CXB), the diffuse glow of hotter cosmic material and more energetic sources.

In 2013, Kashlinsky and colleagues compared the CIB and CXB and found correlations between the patchy patterns in the two datasets, indicating that something is contributing to both types of background light. So what might be the culprit for both types of light?

“The only sources that could be coherent across this wide range of wavelengths are black holes,” he says.

To explain the correlation they found, roughly 1 in 5 of the sources had to be black holes that lived in the first few hundred million years of our universe. But that ratio is oddly large.

“For comparison,” Kashlinsky says, “in the present populations, we have 1 in 1000 of the emitting sources that are black holes. At the peak of star formation, it’s 1 in 100.”

He wasn’t sure how the universe could have ever had enough black holes to produce the patterns his team saw in the CIB and CXB. Then the Laser Interferometric Gravitational-wave Observatory (LIGO) discovered a pair of strange beasts: two roughly-30-solar-mass black holes merging and emitting gravitational waves.

A few months later, Kashlinsky saw a study led by Simeon Bird analyzing the possibility that the black holes LIGO had detected were primordial—formed in the universe’s first second. “And it just all came together,” Kashlinsky says.

Gravitational secrets

The crucial ripples in space-time picked up by the LIGO detector on September 14, 2015, came from the last dance of two black holes orbiting each other and colliding. One black hole was 36 times the sun’s mass, the other 29 times. Those black-hole weights aren’t easy to make.

The majority of the universe’s black holes are less than about 15 solar masses and form as massive stars collapse at the end of their lives. A black hole weighing 30 solar masses would have to start from a star closer to 100 times our sun’s mass—and nature seems to have a hard time making stars that enormous. To compound the strangeness of the situation, the LIGO detection is from a pair of those black holes. Scientists weren’t expecting such a system, but the universe has a tendency to surprise us.

Bird and his colleagues from Johns Hopkins University next looked at the possibility that those black holes formed not from massive stars but instead during the universe’s first fractions of a second. Astronomers haven’t yet seen what the cosmos looked like at that time, so they have to rely on theoretical models.

In all of these models, the early universe contains density variations. If there were regions of very high density contrast, those could have collapsed into black holes in the universe’s first second. If those black holes were at least as heavy as mountains when they formed, they’d stick around until today, dark and seemingly invisible and acting through the gravitational force. And because these primordial black holes formed from density perturbations, they wouldn’t be composed of protons and neutrons, the particles that make up you, me, stars and, thus, the material that leads to normal black holes.

All of those characteristics make primordial black holes a tempting candidate for the universe’s mysterious dark matter, which we believe makes up some 25 percent of the universe and reveals itself only through the gravitational force. This possible connection has been around since the 1970s, and astronomers have looked for hints of primordial black holes since. Even though they’ve slowly narrowed down the possibilities, there are a few remaining hiding spots—including the region where the black holes that LIGO detected fall, between about 20 and 1000 solar masses.

Astronomers have been looking for explanations of what dark matter is for decades. The leading theory is that it’s a new type of particle, but searches keep coming up empty. On the other hand, we know black holes exist; they stem naturally from the theory of gravity.

“They’re an aesthetically pleasing candidate because they don’t need any new physics,” Bird says.

A glowing contribution

Kashlinsky’s newest analysis took the idea of primordial black holes the size that LIGO detected and looked at what that population would do to the diffuse infrared light of the universe. He evolved a model of the early universe, looking at how the first black holes would congregate and grow into clumps. These black holes matched the residual glow of the CIB and, he found, “would be just right to explain the patchiness of infrared background by sources that we measured in the first couple hundred million years of the universe.”

This theory fits nicely together, but it’s just one analysis of one possible model that came out of an observation of one astrophysical system. Researchers need several more pieces of evidence to say whether primordial black holes are in fact the dark matter. The good news is LIGO will soon begin another observing run that will be able to see black hole collisions even farther away from Earth and thus further back in time. The European gravitational wave observatory VIRGO will also come online in January, providing more data and working in tandem with LIGO.

More cases of gravitational waves from black holes around this 30-solar-masses range could add evidence that there is a population of primordial black holes. Bird and his colleague Ilias Cholis suggest looking for a more unique signal, though, in future gravitational-wave data. For two primordial black holes to become locked in a binary system and merge, they would likely be gravitationally captured during a glancing interaction, which could result in a signal with multiple frequencies or tones at any one moment.

“This is a rare event, but it would be very characteristic of our scenario,” Cholis says. “In the next 5 to 10 years, we might see one.”

This smoking-gun signature, as they call it, would be a strong piece of evidence that primordial black holes exist. And if such objects are floating around our universe, it might not be such a stretch to connect them to dark matter.

Editor’s note: Theorists Sébastien Clesse and Juan García-Bellido predicted the existence of massive, merging primordial black holes in a paper published on the arXiv on January 29, 2015, more than seven months before the signal of two such giants reached the LIGO detector. In the paper, they claimed that primordial black holes could have been the seeds of galaxies and constitute all of the dark matter in the universe.

by Liz Kruesi at September 09, 2016 04:28 PM

September 08, 2016

Sean Carroll - Preposterous Universe

Consciousness and Downward Causation

For many people, the phenomenon of consciousness is the best evidence we have that there must be something important missing in our basic physical description of the world. According to this worry, a bunch of atoms and particles, mindlessly obeying the laws of physics, can’t actually experience the way a conscious creature does. There’s no such thing as “what it is like to be” a collection of purely physical atoms; it would lack qualia, the irreducibly subjective components of our experience of the world. One argument for this conclusion is that we can conceive of collections of atoms that behave physically in exactly the same way as ordinary humans, but don’t have those inner experiences — philosophical zombies. (If you think about it carefully, I would claim, you would realize that zombies are harder to conceive of than you might originally have guessed — but that’s an argument for another time.)

The folks who find this line of reasoning compelling are not necessarily traditional Cartesian dualists who think that there is an immaterial soul distinct from the body. On the contrary, they often appreciate the arguments against “substance dualism,” and have a high degree of respect for the laws of physics (which don’t seem to need or provide evidence for any non-physical influences on our atoms). But still, they insist, there’s no way to just throw a bunch of mindless physical matter together and expect it to experience true consciousness.

People who want to dance this tricky two-step — respect for the laws of physics, but an insistence that consciousness can’t reduce to the physical — are forced to face up to a certain problem, which we might call the causal box argument. It goes like this. (Feel free to replace “physical particles” with “quantum fields” if you want to be fastidious.)

  1. Consciousness cannot be accounted for by physical particles obeying mindless equations.
  2. Human beings seem to be made up — even if not exclusively — of physical particles.
  3. To the best of our knowledge, those particles obey mindless equations, without exception.
  4. Therefore, consciousness does not exist.

Nobody actually believes this argument, let us hasten to add — they typically just deny one of the premises.

But there is a tiny sliver of wiggle room that might allow us to salvage something special about consciousness without giving up on the laws of physics — the concept of downward causation. Here we’re invoking the idea that there are different levels at which we can describe reality, as I discussed in The Big Picture at great length. We say that “higher” (more coarse-grained) levels are emergent, but that word means different things to different people. So-called “weak” emergence just says the obvious thing, that higher-level notions like the fluidity or solidity of a material substance emerge out of the properties of its microscopic constituents. In principle, if not in practice, the microscopic description is absolutely complete and comprehensive. A “strong” form of emergence would suggest that something truly new comes into being at the higher levels, something that just isn’t there in the microscopic description.

Downward causation is one manifestation of this strong-emergentist attitude. It’s the idea that what happens at lower levels can be directly influenced (causally acted upon) by what is happening at the higher levels. The idea, in other words, that you can’t really understand the microscopic behavior without knowing something about the macroscopic.

There is no reason to think that anything like downward causation really happens in the world, at least not down to the level of particles and forces. While I was writing The Big Picture, I grumbled on Twitter about how people kept talking about it but how I didn’t want to discuss it in the book; naturally, I was hectored into writing something about it.

But you can see why the concept of downward causation might be attractive to someone who doesn’t think that consciousness can be accounted for by the fields and equations of the Core Theory. Sure, the idea would be, maybe electrons and nuclei act according to the laws of physics, but those laws need to include feedback from higher levels onto that microscopic behavior — including whether or not those particles are part of a conscious creature. In that way, consciousness can play a decisive, causal role in the universe, without actually violating any physical laws.

One person who thinks that way is John Searle, the extremely distinguished philosopher from Berkeley (and originator of the Chinese Room argument). I recently received an email from Henrik Røed Sherling, who took a class with Searle and came across this very issue. He sent me this email, which he was kind enough to allow me to reproduce here:

Hi Professor Carroll,

I read your book and was at the same time awestruck and angered, because I thought your entire section on the mind was both well-written and awfully wrong — until I started thinking about it, that is. Now I genuinely don’t know what to think anymore, but I’m trying to work through it by writing a paper on the topic.

I took Philosophy of Mind with John Searle last semester at UC Berkeley. He convinced me of a lot of ideas of which your book has now disabused me. But despite your occasionally effective jabs at Searle, you never explicitly refute his own theory of the mind, Biological Naturalism. I want to do that, using an argument from your book, but I first need to make sure that I properly understand it.

Searle says this of consciousness: it is caused by neuronal processes and realized in neuronal systems, but is not ontologically reducible to these; consciousness is not just a word we have for something else that is more fundamental. He uses the following analogy to visualize his description: consciousness is to the mind like fluidity is to water. It’s a higher-level feature caused by lower-level features and realized in a system of said lower-level features. Of course, for his version of consciousness to escape the charge of epiphenomenalism, he needs the higher-level feature in this analogy to act causally on the lower-level features — he needs downward causation. In typical fashion he says that “no one in their right mind” can say that solidity does not act causally when a hammer strikes a nail, but it appears to me that this is what you are saying.

So to my questions. Is it right to say that your argument against the existence of downward causation boils down to the incompatible vocabularies of lower-level and higher-level theories? I.e. that there is no such thing as a gluon in Fluid Dynamics, nor anything such as a fluid in the Standard Model, so a cause in one theory cannot have an effect in the other simply because causes and effects are different things in the different theories; gluons don’t affect fluidity, temperatures and pressures do; fluids don’t affect gluons, quarks and fields do. If I have understood you right, then there couldn’t be any upward causation either. In which case Searle’s theory is not only epiphenomenal, it’s plain inaccurate from the get-go; he wants consciousness to both be a higher-level feature of neuronal processes and to be caused by them. Did I get this right?

Best regards,
Henrik Røed Sherling

Here was my reply:

Dear Henrik–

Thanks for writing. Genuinely not knowing what to think is always an acceptable stance!

I think your summary of my views is pretty accurate. As I say on p. 375, poetic naturalists tend not to be impressed by downward causation, nor by upward causation either! At least, not if your theory of each individual level is complete and consistent.

Part of the issue is, as often happens, an inconsistent use of a natural-language word, in this case “cause.” The kinds of dynamical, explain-this-occurrence causes that we’re talking about here are a different beast than inter-level implications (that one might be tempted to sloppily refer to as “causes”). Features of a lower level, like conservation of energy, can certainly imply or entail features of higher-level descriptions; and indeed the converse is also possible. But saying that such implications are “causes” is to mean something completely different than when we say “swinging my elbow caused the glass of wine to fall to the floor.”

So, I like to think I’m in my right mind, and I’m happy to admit that solidity acts causally when a hammer strikes a nail. But I don’t describe that nail as a collection of particles obeying the Core Theory *and* additionally as a solid object that a hammer can hit; we should use one language or the other. At the level of elementary particles, there’s no such concept as “solidity,” and it doesn’t act causally.

To be perfectly careful — all this is how we currently see things according to modern physics. An electron responds to the other fields precisely at its location, in quantitatively well-understood ways that make no reference to whether it’s in a nail, in a brain, or in interstellar space. We can of course imagine that this understanding is wrong, and that future investigations will reveal the electron really does care about those things. That would be the greatest discovery in physics since quantum mechanics itself, perhaps of all time; but I’m not holding my breath.

I really do think that enormous confusion is caused in many areas — not just consciousness, but free will and even more purely physical phenomena — by the simple mistake of starting sentences in one language or layer of description (“I thought about summoning up the will power to resist that extra slice of pizza…”) but then ending them in a completely different vocabulary (“… but my atoms obeyed the laws of the Standard Model, so what could I do?”) The dynamical rules of the Core Theory aren’t just vague suggestions; they are absolutely precise statements about how the quantum fields making up you and me behave under any circumstances (within the “everyday life” domain of validity). And those rules say that the behavior of, say, an electron is determined by the local values of other quantum fields at the position of the electron — and by nothing else. (That’s “locality” or “microcausality” in quantum field theory.) In particular, as long as the quantum fields at the precise position of the electron are the same, the larger context in which it is embedded is utterly irrelevant.

It’s possible that the real world is different, and there is such inter-level feedback. That’s an experimentally testable question! As I mentioned to Henrik, it would be the greatest scientific discovery of our lifetimes. And there’s basically no evidence that it’s true. But it’s possible.

So I don’t think downward causation is of any help to attempts to free the phenomenon of consciousness from arising in a completely conventional way from the collective behavior of microscopic physical constituents of matter. We’re allowed to talk about consciousness as a real, causally efficacious phenomenon — as long as we stick to the appropriate human-scale level of description. But electrons get along just fine without it.

by Sean Carroll at September 08, 2016 05:01 PM

September 06, 2016

Symmetrybreaking - Fermilab/SLAC

Turning on the cosmic microphone

A new tool lets astronomers listen to the universe for the first time.

When Galileo first introduced the telescope in the 1600s, astronomers gained the ability to view parts of the universe that were invisible to the naked eye. This led to centuries of discovery—as telescopes advanced, they exposed new planets, galaxies and even a glimpse of the very early universe. 

Last September, scientists gained yet another invaluable tool: the ability to hear the cosmos through gravitational waves.

Illustration by Sandbox Studio, Chicago with Lexi Fodor

Ripples in space-time

Newton described gravity as a force. Thinking about gravity this way can explain most of the phenomena that happen here on Earth. For example, the force of gravity acting on an apple makes it fall from a tree onto an unsuspecting person sitting below it. However, to understand gravity on a cosmic scale, we need to turn to Einstein, who described gravity as the bending of space-time itself. 

Some physicists describe this process using a bowling ball and a blanket. Imagine space-time as a blanket. A bowling ball placed at the center of the blanket bends the fabric around it. The heavier an object is, the further it sinks. As you move the ball along the fabric, it produces ripples, much like a boat travelling through water.

“The curvature is what makes the Earth orbit the sun—the sun is a bowling ball in a fabric and it's that bending in the fabric that makes the Earth go around,” explains Gabriela González, the spokesperson for the Laser Interferometer Gravitational-Wave Observatory (LIGO) collaboration. 

Everything with mass—planets, stars and people—pulls on the fabric of space-time and produces gravitational waves as it moves through space. These are passing through us all the time, but they are much too weak to detect. 

To find these elusive signals, physicists built LIGO, twin observatories in Louisiana and Washington. At each L-shaped detector, a laser beam is split and sent down two four-kilometer arms. The beams reflect off the mirrors at each end and travel back to reunite. A passing gravitational wave slightly alters the relative lengths of the arms, shifting the path of the laser beam, creating a change that physicists can detect.  
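To get a sense of just how small that change is, here is a back-of-the-envelope estimate in Python. The strain value of 10^-21 is the often-quoted order of magnitude for the first detected signal, used here only for illustration:

    # Order-of-magnitude arm-length change for a LIGO-like interferometer.
    arm_length_m = 4000.0    # each LIGO arm is 4 km long
    strain = 1e-21           # rough peak strain of a detectable signal

    delta_L_m = strain * arm_length_m   # change in arm length: dL = h * L
    proton_diameter_m = 1.7e-15         # approximate proton diameter, for comparison

    print(f"arm-length change: {delta_L_m:.1e} m")
    print(f"that is roughly {delta_L_m / proton_diameter_m:.0e} proton diameters")

That works out to a few thousandths of a proton diameter, which is why the mirrors and lasers have to be isolated so carefully from every other source of vibration.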

Unlike telescopes, which are pointed toward very specific parts of the sky, detectors like LIGO scan a much larger area of the universe and hear sources from all directions. “Gravitational waves detectors are like microphones,” says Laura Nuttall, a postdoctoral researcher at Syracuse University. 

Illustration by Sandbox Studio, Chicago with Lexi Fodor

First detections

On the morning of September 14, 2015, a gravitational wave from two black holes that collided 1.3 billion years ago passed through the two LIGO detectors, and an automatic alert system pinged LIGO scientists around the world. “It took us a good part of the day to convince ourselves that this was not a drill,” González says. 

Because LIGO was still preparing for an observing run—researchers were still running tests and diagnostics during the day—they needed to conduct a large number of checks and analyses to make sure the signal was real. 

Months later, once researchers had meticulously checked the data for errors or noise (such as lightning or earthquakes) the LIGO collaboration announced to the world that they had finally reached a long-anticipated goal: Almost 100 years after Einstein first predicted their existence, scientists had detected gravitational waves. 

A few months after the first signal arrived, LIGO detected yet another black hole collision. “Finding a second one proves that there's a population of sources that will produce detectable gravitational waves,” Nuttall says. “We are actually an observatory now.”

Illustration by Sandbox Studio, Chicago with Lexi Fodor

Cosmic microphones

Many have dubbed the detection of gravitational waves the dawn of the age of gravitational wave astronomy. Scientists expect to see hundreds, maybe even thousands, of these binary black holes in the years to come. Gravitational-wave detectors will also allow astronomers to look much more closely at other astronomical phenomena, such as neutron stars, supernovae and even the Big Bang.

One important next step is to detect the optical counterparts—such as light from the surrounding matter or gamma ray bursts—of the sources of gravitational waves. To do this, astronomers need to point their telescopes to the area of the sky where the gravitational waves came from to find any detectable light. 

Currently, this feat is like finding a needle in a haystack. Because the field of view of gravitational wave detectors is much, much larger than that of telescopes, it is extremely difficult to connect the two. “Connecting gravitational waves with light for the first time will be such an important discovery that it's definitely worth the effort,” says Edo Berger, an astronomy professor at Harvard University.

LIGO is also one of several gravitational wave observatories. Other ground-based observatories, such as Virgo in Italy, KAGRA in Japan and the future LIGO India have similar sensitivities to LIGO. There are also other approaches that scientists are using—and plan to use in the future—to detect gravitational waves at completely different frequencies. 

The evolved Laser Interferometer Space Antenna (eLISA), for example, is a gravitational wave detector that physicists plan to build in space. Once complete, eLISA will be composed of three spacecraft that are over a million kilometers apart, making it sensitive to much lower gravitational wave frequencies, where scientists expect to detect supermassive black holes.

Pulsar array timing is a completely different method of detection. Pulsars are natural timekeepers, regularly emitting beams of electromagnetic radiation. Astronomers carefully measure the arrival time of the pulses to find discrepancies, because when a gravitational wave passes by, space-time warps, changing the distance between us and the pulsar, causing the pulses to arrive slightly earlier or later. This method is sensitive to even lower frequencies than eLISA. 
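As a toy illustration of that bookkeeping (not a real pulsar-timing pipeline, and with made-up numbers), the sketch below compares a short list of hypothetical pulse arrival times to a constant-period model and prints the timing residuals; in an actual array, the signature of a gravitational wave would be a pattern of correlated residuals across many pulsars:

    # Toy timing residuals: observed pulse arrival times vs. a constant-period model.
    period_s = 0.005714   # hypothetical millisecond-pulsar period (illustrative)
    t0_s = 0.0            # reference epoch

    # Hypothetical arrival times in seconds; real datasets span years of observations.
    arrivals_s = [0.0057141, 0.0114279, 0.0171423, 0.0228558]

    for i, t_obs in enumerate(arrivals_s, start=1):
        t_model = t0_s + i * period_s           # predicted arrival time of pulse i
        residual_ns = (t_obs - t_model) * 1e9   # difference, in nanoseconds
        print(f"pulse {i}: residual = {residual_ns:+.1f} ns")

Real timing campaigns fit far more elaborate models (pulsar spin-down, orbital motion, dispersion in the interstellar medium) before any leftover residual can be attributed to gravitational waves.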

These and many other observatories will reveal a new view of the universe, helping scientists to study phenomena such as merging black holes, to test theories of gravity and possibly even to discover something completely unexpected, says Daniel Holz, a professor of physics and astronomy at the University of Chicago. “Usually in science you're just pushing the boundaries a little bit, but in this case, we're opening up a whole new frontier.”

by Diana Kwon at September 06, 2016 02:04 PM

September 05, 2016

Tommaso Dorigo - Scientificblogging

Farewell, Gino
Gino Bolla was an Italian scientist and the head of the Silicon Detector Facility at Fermilab. And he was a friend and a colleague. He died yesterday in a home accident. Below I remember him by recalling some good times together. Read at your own risk. 

Dear Gino,

   news of your accident reaches me as I am about to board a flight in Athens, headed back home after a conference in Greece. Like all unfiltered, free media, Facebook can be quite cruel as a means of delivering this kind of information, goddamnit.

read more

by Tommaso Dorigo at September 05, 2016 07:32 PM

September 04, 2016

Lubos Motl - string vacua and pheno

Serious neutrinoless double beta-decay experiment cools down
Data collection to begin in early 2017

The main topic of my term paper in a 1998 Rutgers Glennys Farrar course was the question "Are neutrinos Majorana or Dirac?". I found the neutrino oscillations more important, which is why I internalized that topic more deeply – although it was supposed to be reserved for a classmate of mine (and for some Canadian and Japanese guys who got a recent Nobel prize for the experiments). At any rate, the question I was assigned may be experimentally answered soon. Or not. (You may also want to see a similarly old term paper on the Milky Way at the galactic center.)



Neutrinos are spin-1/2 fermions. Their masses may arise just like the masses of electrons or positrons. In that case, we need a full Dirac spinor, two 2-component spinors, distinct particles and antiparticles (neutrinos and antineutrinos), and everything about the mass works just like in the case of the electrons and positrons. The Dirac mass terms are schematically\[

{\mathcal L}_{\rm Dirac} = m\bar\Psi \Psi = m \epsilon^{AB} \eta_A \chi_B + {\rm h.c.}

\] If neutrinos were Dirac particles in this sense, it would mean that right-handed neutrinos and left-handed antineutrinos do exist, after all – just like the observed left-handed neutrinos and right-handed antineutrinos. They would just be decoupled, i.e. hard to create.




However, the mass of the observed neutrinos may also arise from the Majorana mass terms that don't need to introduce any new 2-component spinors, as I will discuss in a minute. Surprisingly, these two different possibilities look indistinguishable – even if we study the neutrino oscillations.

Note that the conservation of the angular momentum, and therefore the helicity, guarantees that neutrinos or antineutrinos in the real world – which move almost by the speed of light – can't change from left-handed ones to right-handed ones or vice versa. So even in the Majorana case, the "neutrino or antineutrino" identity of the particle is "effectively" preserved because of the angular momentum conservation law, even when oscillations are taken into account.




CUORE is one of the experiments (see the picture at the top) in Gran Sasso, central Italy. This particular experiment tries to find the neutrinoless double beta-decay. For some technical reasons I don't really understand (but I mostly believe that there are good reasons), tellurium – namely the isotope \({}^{130}{\rm Te}\) – is being used as the original nucleus that is supposed to decay.

Oxygen only plays the role of turning the tellurium-130 into an oxide, \({\rm TeO}_2\), which is a crystal. When the tellurium nucleus decays, it heats up some matter around it whose resistance changes proportionally to the absorbed heat, and this change may be measured – a detector based on this principle is known as a "bolometer".

If you look at the isotopes of tellurium, you will learn that 38 isotopes (plus 17 nuclear isomers) of the element are known. Two of them, tellurium-128 and tellurium-130, decay by the double beta-decay. These two isotopes also happen to be the most widespread ones, accounting for 32% and 34% of "tellurium in Nature", respectively. Tellurium-130 decays some 3,000 times more quickly – the half-life is a bit below \(10^{21}\) years – which is why this isotope's decays are easier to see.
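For a feel for the numbers, here is a back-of-the-envelope sketch of the ordinary two-neutrino decay rate in one kilogram of the \({\rm TeO}_2\) crystal, taking the 34% abundance quoted above and rounding the half-life to \(8\times 10^{20}\) years (both inputs are rough, so the answer is only indicative):

import math

avogadro = 6.022e23
molar_mass_teo2 = 127.6 + 2 * 16.0     # g/mol; one tellurium atom per formula unit
abundance_te130 = 0.34                 # natural abundance quoted above
half_life_years = 8e20                 # "a bit below 10^21 years", rounded

# Number of tellurium-130 nuclei in 1 kg of TeO2, then the decay rate N*ln(2)/T_half.
n_te130 = 1000.0 / molar_mass_teo2 * abundance_te130 * avogadro
decays_per_year = math.log(2) * n_te130 / half_life_years

print(f"Te-130 nuclei per kg of TeO2: {n_te130:.2e}")
print(f"ordinary double beta-decays: roughly {decays_per_year:.0f} per kg per year")

That works out to only a few decays per kilogram per day for the ordinary channel; the neutrinoless one, if it exists at all, is far rarer still, which is part of why the experiment needs a lot of material and an extremely quiet, cold environment.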

Well, what's easily seen is the decay through the "normal" double beta-decay: two electrons and two antineutrinos are being emitted, exactly twice the products in a normal beta-decay. The remaining nucleus is xenon-130.\[

{}^{130}{\rm Te} \to {}^{130}{\rm Xe} + e^- + e^- + \bar\nu_e + \bar\nu_e

\] Because the energy of the xenon is known, the two electrons and two antineutrinos divide the energy difference between the tellurium-130 and the xenon-130 nuclei. It's clear that the neutrinos devour a significant fraction of that energy.

However, when neutrinos are Majorana particles, the other 2-spinor doesn't really exist – or it doesn't exist in the low-energy effective theory. The mass terms for the neutrinos, including those that are responsible for the oscillations, are created from a single 2-component spinor, schematically\[

{\mathcal L}_{\rm mass} \sim m_{\rm Majorana} \epsilon^{AB} \eta_A \eta_B + {\rm h.c.}

\] I used some 2-component antisymmetric epsilon symbol for the 2-spinors. Note that none of the etas in the product has a bar or a star. So this mass term actually "creates a pair of neutrinos" or "creates a pair of antineutrinos" or "destroys a neutrino and creates an antineutrino instead" or vice versa. As a result, it violates the conservation of the lepton number by \(\Delta L=\pm 2\).

See a question and an answer about the two types of the mass terms.

Because the lepton number is no longer conserved, the neutrinos may in principle be created and destroyed in pairs – without any "anti". This process may occur in the beta-decay as well. So if the neutrino masses arise from the Majorana term, a simplified reaction – the neutrinoless double beta-decay – must also be allowed:\[

{}^{130}{\rm Te} \to {}^{130}{\rm Xe} + e^- + e^-

\] You may obtain the Feynman diagrams simply by connecting the two lines of the antineutrinos, which used to be external lines, and turning these lines into an internal propagator of the Feynman diagram. This linking should be possible; you may always use the Majorana mass term as a "vertex" with two external legs. I think that if you know the mass of the neutrino and assume it's a Majorana mass, it should also be possible to calculate the branching ratio (or partial decay rate) of this neutrinoless double beta-decay.

And it's this process that CUORE will try to find. If the neutrinos are Majorana particles, the experiment should ultimately see this rare process. When they draw the graph of the total energies of 2 electrons in "all" kinds of the double beta-decay, there should be a small peak near the maximum value of the two electrons' energy – which is some \(2.5275\,{\rm MeV}\), if you want to know.
See the Symmetry Magazine for a rather fresh story. The cooling has begun. In 2 months, the temperature will be dragged below 10 millikelvins. The experiment will become the coldest cubic meter in the known Universe. In early 2017, CUORE will begin to take data.
As a high-energy formal top-down theorist, I think it's more likely that the known neutrino masses are indeed Majorana masses because nothing prohibits these masses. Neutrinos don't carry any conserved charges under known forces (analogous to the electric charge) – and probably any (including so far unknown) long-range forces. In grand unified theories etc., the seesaw mechanism (or other mechanisms producing interactions in effective theories) naturally produce Majorana masses for the known 2-component neutrino spinors only. The other two-spinors, the right-handed neutrinos, may exist but they may have huge, GUT-scale masses, so these particles "effectively" don't exist in doable experiments.

There can also be Dirac masses on top of these Majorana masses. But I think that the Majorana masses of the magnitude comparable to the known neutrino masses (well, we only know the differences between squared masses of neutrinos, from the oscillations) should exist, too. And that's why I expect the neutrinoless double beta-decay to exist, too.

However, I am not particularly certain about it. The experiment may very well show that the neutrino masses aren't Majorana – or most of them are Dirac masses. It doesn't really contradict any "universal law of physics" I am aware of. I can imagine that some unknown symmetries may basically ban the neutrino masses and make the Dirac masses involving the unknown components of the neutrinos mandatory.



Because a planned upgrade of CUORE is called CUPID (another one is ABSuRD), just like the Roman god of love, I embedded a Czech cover version "Amor Magor" ("Moronic Amor") of Stupid Cupid, the 1958 Connie Francis and 1959 Neil Sedaka song (see also an English cover from the 1990s). Ewa Farna sings about the stupid cupid who isn't able to hit her sweetheart's (or any other male) heart because he's as blind as a bullet. This Czech version was first recorded by top singer Lucie Bílá in 1998.

Her experience and emotions are superior but I chose teenage Ewa Farna because she was cute and relatively innocent at that time which seems more appropriate to me for the lyrics. ;-) If you want to take this argument to the limit, you may prefer this excellent 10-year-old Kiki Petráková's cover. (It seems to be a favorite song of Czech girls. Decreasingly perfect covers by Nelly Řehořová, Kamila Nývltová, Vendula Příhodová, Tereza Drábková, Michaela Kulhavá, Karolína Repetná, Tereza Kollerová.)

P.S.: While searching for my 1998 term paper on the Internet, I could only find a 2013 paper by Alvarez-Gaume and Vazquez-Mozo that thanks me in a footnote for some insight I can no longer remember, at least not clearly. Maybe I remember things from 1998 and older more than I remember those from 2013. ;-)



Completely off-topic: TV Prima is now broadcasting a Czech edition of Fort Boyard, with Czech participants and Czech actors. It feels like a cute sign of prosperity if a bunch of Czechs rents a real French fortress. But I just learned that already 31 nations have filmed their Fort Boyard over there... At least, the Czech version is the only one called "Something Boyard" where "something" isn't the French word "fort". We call it a pevnost. This word for a fortress is derived from the adjective "pevný", i.e. firm/solid/robust/hard/tough/resilient/steady.

by Luboš Motl (noreply@blogger.com) at September 04, 2016 06:52 AM

September 03, 2016

Jester - Resonaances

Plot for Weekend: new limits on neutrino masses
This weekend's plot shows the new limits on neutrino masses from the KamLAND-Zen experiment:

KamLAND-Zen is a group of Buddhist monks studying a balloon filled with the xenon isotope Xe136. That isotope has a very long lifetime, of order 10^21 years, and undergoes the lepton-number-conserving double beta decay Xe136 → Ba136 + 2e- + 2νbar. What the monks hope to observe is the lepton-number-violating neutrinoless double beta decay Xe136 → Ba136 + 2e-, which would show up as a peak in the summed energy of the electron pairs near 2.5 MeV. No such signal has been observed, which sets the limit on the half-life for this decay at T > 1.1*10^26 years.

The neutrinoless decay is predicted to occur if neutrino masses are of Majorana type, and the rate can be characterized by the effective Majorana mass mββ (y-axis in the plot). That parameter is a function of the masses and mixing angles of the neutrinos. In particular it depends on the mass of the lightest neutrino (x-axis in the plot) which is currently unknown. Neutrino oscillation experiments have precisely measured the mass^2 differences between neutrinos, which are roughly (0.05 eV)^2 and (0.01 eV)^2. But oscillations are not sensitive to the absolute mass scale; in particular, the lightest neutrino may well be massless for all we know. If the heaviest neutrino has a small electron flavor component, then we expect that the mββ parameter is below 0.01 eV. This so-called normal hierarchy case is shown as the red region in the plot, and is clearly out of experimental reach at the moment. On the other hand, in the inverted hierarchy scenario (green region in the plot), it is the two heaviest neutrinos that have a significant electron component. In this case, the effective Majorana mass mββ is around 0.05 eV. Finally, there is also the degenerate scenario (funnel region in the plot) where all 3 neutrinos have very similar masses with small splittings; however, this scenario is now strongly disfavored by cosmological limits on the sum of the neutrino masses (e.g. the Planck limit Σmν < 0.16 eV).
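For reference, under the standard assumption that the decay is mediated by the exchange of the light Majorana neutrinos, the parameter on the y-axis is

\[
m_{\beta\beta} \;=\; \Big|\sum_{i=1}^{3} U_{ei}^{2}\, m_i\Big| ,
\]

where the m_i are the neutrino mass eigenvalues and the U_ei are elements of the mixing matrix, including the unknown Majorana phases; those phases can partly cancel the terms in the sum, which is why each mass ordering corresponds to a band of allowed mββ values rather than a single line.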

As can be seen in the plot, the results from KamLAND-Zen, when translated into limits on the effective Majorana mass, almost touch the inverted hierarchy region. The strength of this limit depends on some poorly known nuclear matrix elements (hence the width of the blue band). But even in the least favorable scenario, future, more sensitive experiments should be able to probe that region. Thus, there is hope that within the next few years we may prove the Majorana nature of neutrinos, or at least disfavor the inverted hierarchy scenario.

by Jester (noreply@blogger.com) at September 03, 2016 12:20 PM

Jester - Resonaances

After the hangover
The loss of the 750 GeV diphoton resonance is a big blow to the particle physics community. We are currently going through the 5 stages of grief, everyone at their own pace, as can be seen e.g. in this comments section. Nevertheless, it may already be a good moment to revisit the story one last time, so as  to understand what went wrong.

In the recent years, physics beyond the Standard Model has seen 2 other flops of comparable impact: the faster-than-light neutrinos in OPERA, and the CMB tensor fluctuations in BICEP.  Much as the diphoton signal, both of the above triggered a binge of theoretical explanations, followed by a massive hangover. There was one big difference, however: the OPERA and BICEP signals were due to embarrassing errors on the experiments' side. This doesn't seem to be the case for the diphoton bump at the LHC. Some may wonder whether the Standard Model background may have been slightly underestimated,  or whether one experiment may have been biased by the result of the other... But, most likely, the 750 GeV bump was just due to a random fluctuation of the background at this particular energy. Regrettably, the resulting mess cannot be blamed on experimentalists, who were in fact downplaying the anomaly in their official communications. This time it's the theorists who  have some explaining to do.

Why did theorists write 500 papers about a statistical fluctuation? One reason is that it didn't look like one at first sight. Back in December 2015, the local significance of the diphoton bump in ATLAS run-2 data was 3.9 sigma, which means the probability of such a fluctuation was 1 in 10000. Combining available run-1 and run-2 diphoton data in ATLAS and CMS, the local significance was increased to 4.4 sigma. All in all, it was a very unusual excess, a 1-in-100000 occurrence! Of course, this number should be interpreted with care. The point is that the LHC experiments perform a gazillion different measurements, thus they are bound to observe seemingly unlikely outcomes in a small fraction of them. This can be partly taken into account by calculating the global significance, which is the probability of finding a background fluctuation of the observed size anywhere in the diphoton spectrum. The global significance of the 750 GeV bump quoted by ATLAS was only about two sigma, a fact strongly emphasized by the collaboration. However, that number can be misleading too. One problem with the global significance is that, unlike the local one, it cannot be easily combined in the presence of separate measurements of the same observable. For the diphoton final state we have ATLAS and CMS measurements in run-1 and run-2, thus 4 independent datasets, and their robust concordance was crucial in creating the excitement. Note also that what is really relevant here is the probability of a fluctuation of a given size in any of the LHC measurements, and that is not captured by the global significance. For these reasons, I find it more transparent to work with the local significance, remembering that it should not be interpreted as the probability that the Standard Model is incorrect. By these standards, a 4.4 sigma fluctuation in a combined ATLAS and CMS dataset is still a very significant effect which deserves special attention. What we learned the hard way is that such large fluctuations do happen at the LHC... This lesson will certainly be taken into account next time we encounter a significant anomaly.

Another reason why the 750 GeV bump was exciting is that the measurement is rather straightforward.  Indeed, at the LHC we often see anomalies in complicated final states or poorly controlled differential distributions, and we treat those with much skepticism.  But a resonance in the diphoton spectrum is almost the simplest and cleanest observable that one can imagine (only a dilepton or 4-lepton resonance would be cleaner). We already successfully discovered one particle this way - that's how the Higgs boson first showed up in 2011. Thus, we have good reasons to believe that the collaborations control this measurement very well.

Finally, the diphoton bump was so attractive because theoretical explanations were plausible. It was trivial to write down a model fitting the data, there was no need to stretch or fine-tune the parameters, and it was quite natural that the particle first showed up as a diphoton resonance and not in other final states. This is in stark contrast to other recent anomalies which typically require a great deal of gymnastics to fit into a consistent picture. The only thing to give you pause was the tension with the LHC run-1 diphoton data, but even that became mild after the Moriond update this year.

So we got a huge signal of a new particle in a clean channel with plausible theoretical models to explain it... that was really bad luck. My conclusion may not be shared by everyone but I don't think that the theory community committed major missteps in this case. Given that for 30 years we have been looking for a clue about the fundamental theory beyond the Standard Model, our reaction was not disproportionate once a seemingly reliable one had arrived. Excitement is an inherent part of physics research. And so is disappointment, apparently.

There remains a question whether we really needed 500 papers...   Well, of course not: many of  them fill an important gap.  Yet many are an interesting read, and I personally learned a lot of exciting physics from them.  Actually, I suspect that the fraction of useless papers among the 500 is lower than for regular daily topics.  On a more sociological side, these papers exacerbate the problem with our citation culture (mass-grave references), which undermines the citation count as a means to evaluate the research impact.  But that is a wider issue which I don't know how to address at the moment.

Time to move on. The ICHEP conference is coming next week, with loads of brand new results based on up to 16 inverse femtobarns of 13 TeV LHC data.  Although the rumor is that there is no new exciting  anomaly at this point, it will be interesting to see how much room is left for new physics. The hope lingers on, at least until the end of this year.

In the comments section you're welcome to lash out at the entire BSM community - we made a wrong call so we deserve it. Please, however, avoid personal attacks (unless on me). Alternatively, you can also give us a hug :)

by Jester (noreply@blogger.com) at September 03, 2016 10:44 AM

Jester - Resonaances

Black hole dark matter
The idea that dark matter is made of primordial black holes is very old but has always been in the backwater of particle physics. The WIMP or asymmetric dark matter paradigms are preferred for several reasons such as calculability, observational opportunities, and a more direct connection to cherished theories beyond the Standard Model. But in the recent months there has been more interest, triggered in part by the LIGO observations of black hole binary mergers. In the first observed event, the mass of each of the black holes was estimated at around 30 solar masses. While such a system may well be of boring astrophysical origin, it is somewhat unexpected because typical black holes we come across in everyday life are either a bit smaller (around one solar mass) or much larger (supermassive black hole in the galactic center). On the other hand, if the dark matter halo were made of black holes, scattering processes would sometimes create short-lived binary systems. Assuming a significant fraction of dark matter in the universe is made of primordial black holes, this paper estimated that the rate of merger processes is in the right ballpark to explain the LIGO events.

Primordial black holes can form from large density fluctuations in the early universe. On the largest observable scales the universe is incredibly homogeneous, as witnessed by the uniform temperature of the Cosmic Microwave Background over the entire sky. However, on smaller scales the primordial inhomogeneities could be much larger without contradicting observations. From the fundamental point of view, large density fluctuations may be generated by several distinct mechanisms, for example during the final stages of inflation in the waterfall phase of the hybrid inflation scenario. While it is rather generic that this or a similar process may seed black hole formation in the radiation-dominated era, severe fine-tuning is required to produce the right amount of black holes and ensure that the resulting universe resembles the one we know.

All in all, it's fair to say that the scenario where all or a significant fraction of dark matter is made of primordial black holes is not completely absurd. Moreover, one typically expects the masses to span a fairly narrow range. Could it be that the LIGO events are the first indirect detection of dark matter made of O(10)-solar-mass black holes? One problem with this scenario is that it is excluded, as can be seen in the plot. Black holes sloshing through the early dense universe accrete the surrounding matter and produce X-rays which could ionize atoms and disrupt the Cosmic Microwave Background. In the 10-100 solar mass range relevant for LIGO this effect currently gives the strongest constraint on primordial black holes: according to this paper they are allowed to constitute not more than 0.01% of the total dark matter abundance. In astrophysics, however, not only signals but also constraints should be taken with a grain of salt. In this particular case, the word in town is that the derivation contains a numerical error and that the corrected limit is 2 orders of magnitude less severe than what's shown in the plot. Moreover, this limit strongly depends on the model of accretion, and more favorable assumptions may buy another order of magnitude or two. All in all, the possibility of dark matter made of primordial black holes in the 10-100 solar mass range should not be completely discarded yet. Another possibility is that black holes make up only a small fraction of dark matter, but the merger rate is faster, closer to the estimate of this paper.

Assuming this is the true scenario, how will we know? Direct detection of black holes is discouraged, while the usual cosmic ray signals are absent. Instead, in most of the mass range, the best probes of primordial black holes are various lensing observations. For LIGO black holes, progress may be made via observations of fast radio bursts. These are strong radio signals of (probably) extragalactic origin and millisecond duration. The radio signal passing near a O(10)-solar-mass black hole could be strongly lensed, leading to repeated signals detected on Earth with an observable time delay. In the near future we should observe hundreds of such repeated bursts, or obtain new strong constraints on primordial black holes in the interesting mass ballpark. Gravitational wave astronomy may offer another way.  When more statistics is accumulated, we will be able to say something about the spatial distributions of the merger events. Primordial black holes should be distributed like dark matter halos, whereas astrophysical black holes should be correlated with luminous galaxies. Also, the typical eccentricity of the astrophysical black hole binaries should be different.  With some luck, the primordial black hole dark matter scenario may be vindicated or robustly excluded  in the near future.
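To see why O(10)-solar-mass lenses pair naturally with millisecond radio bursts, note that the characteristic gravitational time delay of a point-mass lens is set by the light-crossing time of its Schwarzschild radius, roughly 4GM/c^3; this is only the scale, since the exact delay also depends on the impact parameter and the source redshift. A rough sketch:

G = 6.674e-11            # m^3 kg^-1 s^-2
c = 2.998e8              # m/s
m_sun = 1.989e30         # kg

# Characteristic lensing delay ~ 4GM/c^3 (order of magnitude only).
for mass_in_suns in (10, 30):
    delay_ms = 4 * G * mass_in_suns * m_sun / c**3 * 1e3
    print(f"{mass_in_suns} solar masses -> delay scale of about {delay_ms:.2f} milliseconds")

Delays in this ballpark are comparable to the millisecond burst durations, which is what makes a repeated, slightly offset copy of a burst a potentially resolvable signature for lenses in the LIGO-relevant mass range.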

See also these slides for more details. 

by Jester (noreply@blogger.com) at September 03, 2016 10:44 AM

September 02, 2016

Symmetrybreaking - Fermilab/SLAC

CUORE almost ready for first cool-down

The refrigerator that will become the coldest cubic meter in the universe is fully loaded and ready to go.

Deep within a mountain in Italy, scientists have finished the assembly of an experiment more than one decade in the making. The detector of CUORE, short for Cryogenic Underground Observatory for Rare Events, is ready to be cooled down to its operating temperature for the first time.

Ettore Fiorini, the founder of the collaboration, proposed the use of low temperature detectors to search for rare events in 1984 and started creating the first prototypes with his group in Milano. What began as a personal project involving a tiny crystal and a small commercial cooler has grown to a collaboration of 165 scientists loading almost one ton of crystals and several tons of refrigerator and shields.

The CUORE experiment is looking for a rare process that would be evidence that almost massless particles called neutrinos are their own antiparticles, something that would give scientists a clue as to how our universe came to be.

Oliviero Cremonesi, current spokesperson of the CUORE collaboration, joined the quest in 1988 and helped write the first proposal for the experiment. At first, funding agencies in Italy and the United States approved a smaller version: Cuoricino.

“We had five exciting years of measurements from 2003 to 2008 on this machine, but we knew that we wanted to go bigger. So we kept working on CUORE,” Cremonesi says.

In 2005 the collaboration got approval for the big detector, which they called CUORE. That started them on a whole new journey involving growing crystals in China, bringing them to Italy by boat, and negotiating with archeologists for the right to use 2000-year-old Roman lead as shielding material. 

“I imagine climbing Mount Everest is a little bit like this,” says Lindley Winslow, a professor at the Massachusetts Institute of Technology and group leader of the MIT activities on CUORE. “We can already see the top, but this last part is the hardest. The excitement is high, but also the fear that something goes wrong.”

The CUORE detector, assembled between 2012 and 2014, consists of 19 fragile copper towers that each host 52 tellurium oxide crystals connected by wires and sensors to measure their temperature.

For this final stage, scientists built a custom refrigerator from extremely pure materials. They shielded and housed it inside of a mountain at Gran Sasso, Italy. At the end of July, scientists began moving the detector to its new home. After a brief pause to ensure the site had not been affected by the 6.2-magnitude earthquake that hit central Italy on August 24, they finished the job on August 26.

The towers now reside in the largest refrigerator used for a scientific purpose. By the end of October, they will be cooled below 10 millikelvin (negative 460 Fahrenheit), colder than outer space.

Everything has to be this cold because the scientists are searching for minuscule temperature changes caused by an ultra-rare process called neutrinoless double beta decay.

During a normal beta decay, one atom changes from one chemical element into its daughter element and sends out one electron and one antineutrino. For the neutrinoless double beta decay, this would be different: The element would change into its granddaughter. Instead of two electrons and two antineutrinos sharing the energy of the decay, only two electrons would leave, and an observer would see no neutrinos at all.

This would only happen if neutrinos were their own antiparticles. In that case, the two neutrinos would cancel each other out, and it would seem like they never existed in the first place.

If scientists measure this decay, it would change the current scientific thinking about the neutrino and give scientists clues about why there is so much more matter than anti-matter in the universe.  

“We are excited to start the cool-down, and if everything works according to plan, we can start measuring at the beginning of next year,” Winslow says.

by Ricarda Laasch at September 02, 2016 07:41 PM

September 01, 2016

Symmetrybreaking - Fermilab/SLAC

Universe steps on the gas

A puzzling mismatch is forcing astronomers to re-think how well they understand the expansion of the universe.

Astronomers think the universe might be expanding faster than expected.

If true, it could reveal an extra wrinkle in our understanding of the universe, says Nobel Laureate Adam Riess of the Space Telescope Science Institute and Johns Hopkins University. That wrinkle might point toward new particles or suggest that the strength of dark energy, the mysterious force accelerating the expansion of the universe, actually changes over time.

The result appears in a study published in The Astrophysical Journal this July, in which Riess’s team measured the current expansion rate of the universe, also known as the Hubble constant, better than ever before.

In theory, determining this expansion is relatively simple, as long as you know the distance to a galaxy and the rate at which it is moving away from us. But distance measurements are tricky in practice and require using objects of known brightness, so-called standard candles, to gauge their distances.

The use of Type Ia supernovae—exploding stars that shine with the same intrinsic luminosity—as standard candles led to the discovery that the universe was accelerating in the first place and earned Riess, as well as Saul Perlmutter and Brian Schmidt, a Nobel Prize in 2011.

The latest measurement builds on that work and indicates that the universe is expanding by 73.2 kilometers per second per megaparsec (a unit that equals 3.3 million light-years). Think about dividing the universe into grids that are each a megaparsec long. Every time you reach a new grid, the universe is expanding 73.2 kilometers per second faster than the grid before.
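To put that rate in more familiar terms, here is a quick numerical sketch using only the 73.2 km/s/Mpc figure from the paper (it ignores the fact that the expansion rate changes over cosmic history, so the timescale at the end is only rough):

H0 = 73.2                       # km/s per megaparsec, the value measured by Riess's team
km_per_megaparsec = 3.086e19
seconds_per_year = 3.156e7

# Recession speed grows linearly with distance: v = H0 * d.
for distance_mpc in (1, 10, 100):
    print(f"a galaxy {distance_mpc:>3} Mpc away recedes at about {H0 * distance_mpc:.0f} km/s")

# 1/H0 sets the rough timescale of the expansion (the "Hubble time").
hubble_time_years = km_per_megaparsec / H0 / seconds_per_year
print(f"1/H0 is roughly {hubble_time_years / 1e9:.1f} billion years")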

Although the analysis pegs the Hubble constant to within experimental errors of just 2.4 percent, the latest result doesn’t match the expansion rate predicted from the universe’s trajectory. Here, astronomers measure the expansion rate from the radiation released 380,000 years after the Big Bang and then run that expansion forward in order to calculate what today’s expansion rate should be.

It’s similar to throwing a ball in the air, Riess says. If you understand the state of the ball (how fast it's traveling and where it is) and the physics (gravity and drag), then you should be able to precisely predict how fast that ball is traveling later on.

“So in this case, instead of a ball, it's the whole universe, and we think we should be able to predict how fast it's expanding today,” Riess says. “But the caveat, I would say, is that most of the universe is in a dark form that we don't understand.”

The rates predicted from measurements made on the early universe with the Planck satellite are 9 percent smaller than the rates measured by Riess’ team—a puzzling mismatch that suggests the universe could be expanding faster than physicists think it should.

David Kaplan, a theorist at Johns Hopkins University who was not involved with the study, is intrigued by the discrepancy because it could be easily explained with the addition of a new theory, or even a slight tweak to a current theory.

“Sometimes there's a weird discrepancy or signal and you think 'holy cow, how am I ever going to explain that?'” Kaplan says. “You try to come up with some cockamamie theory. This, on the other hand, is something that lives in a regime where it's really easy to explain it with new degrees of freedom.”

Kaplan’s favorite explanation is that there’s an undiscovered particle, which would affect the expansion rate in the early universe. “If there are super light particles that haven't been taken into account yet and they make up some smallish fraction of the universe, it seems that can explain the discrepancy relatively comfortably,” he says.

But others disagree. “We understand so little about dark energy that it's tempting to point to something there,” says David Spergel, an astronomer from Princeton University who was also not involved in the study. One explanation is that dark energy, the cause of the universe’s accelerating expansion, is growing stronger with time.

“The idea is that if dark energy is constant, clusters of galaxies are moving apart from each other but the clusters of galaxies themselves will remain forever bound,” says Alex Filippenko, an astronomer at the University of California, Berkeley and a co-author on Riess’ paper. But if dark energy is growing in strength over time, then one day—far in the future—even clusters of galaxies will get ripped apart. And the trend doesn’t stop there, he says. Galaxies, clusters of stars, stars, planetary systems, planets, and then even atoms will be torn to shreds one by one.

The implications could—literally—be Earth-shattering. But it’s also possible that one of the two measurements is wrong, so both teams are currently working toward even more precise measurements. The latest discrepancy is also relatively minor compared to past disagreements.

“I'm old enough to remember when I was first a student and went to conferences and people argued over whether the Hubble constant was 50 or 100,” says Spergel. “We're now in a situation where the low camp is arguing for 67 and the high camp is arguing for 73. So we've made progress! And that's not to belittle this discrepancy. I think it's really interesting. It could be the signature of new physics.”

by Shannon Hall at September 01, 2016 03:48 PM

August 30, 2016

Symmetrybreaking - Fermilab/SLAC

Our galactic neighborhood

What can our cosmic neighbors tell us about dark matter and the early universe?

Imagine a mansion.

Now picture that mansion at the heart of a neighborhood that stretches irregularly around it, featuring other houses of different sizes—but all considerably smaller. Cloak the neighborhood in darkness, and the houses appear as clusters of lights. Many of the clusters are bright and easy to see from the mansion, but some can just barely be distinguished from the darkness. 

This is our galactic neighborhood. The mansion is the Milky Way, our 100,000-light-years-across home in the universe. Stretching roughly a million light years from the center of the Milky Way, our galactic neighborhood is composed of galaxies, star clusters and large roving gas clouds that are gravitationally bound to us.

The largest satellite galaxy, the Large Magellanic Cloud, is also one of the closest. It is visible to the naked eye from areas clear of light pollution in the Southern Hemisphere. If the Large Magellanic Cloud were around the size of the average American home—about 2,500 square feet—then by a conservative estimate the Milky Way mansion would occupy more than a full city block. On that scale, our most diminutive neighbors would occupy the same amount of space as a toaster.

Our cosmic neighbors promise answers to questions about hidden matter and the ancient universe. Scientists are setting out to find them.

What makes a neighbor

If we are the mansion, the neighboring houses are dwarf galaxies. Scientists have identified about 50 possible galaxies orbiting the Milky Way and have confirmed the identities of roughly 30 of them. These galaxies range in size from several billion stars to only a few hundred. For perspective, the Milky Way contains somewhere between 100 billion to a trillion stars. 

Dwarf galaxies are the most dark-matter-dense objects known in the universe. In fact, they have far more dark matter than regular matter. Segue 1, our smallest confirmed neighbor, is made of 99.97 percent dark matter.

Dark matter is key to galaxy formation. A galaxy forms when enough regular matter is attracted to a single area by the gravitational pull of a clump of dark matter.

Projects such as the Dark Energy Survey, or DES, find these galaxies by snapping images of a segment of the sky with a powerful telescope camera. Scientists analyze the resulting images, looking for the pattern of color and brightness characteristic of galaxies. 

Scientists can find dark matter clumps by measuring the motion and chemical composition of stars. If a smaller galaxy seems to be behaving like a more massive galaxy, observers can conclude a considerable amount of dark matter must anchor the galaxy.
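In rough terms that inference is a dynamical mass estimate: stars moving with a velocity spread sigma inside a region of size r need an enclosed mass of order sigma^2 r / G to stay bound. A toy version is below; the input numbers are purely illustrative placeholders, not measurements from any survey.

G = 6.674e-11                 # m^3 kg^-1 s^-2
m_sun = 1.989e30              # kg
parsec = 3.086e16             # m

sigma = 4e3                   # stellar velocity dispersion in m/s (illustrative value)
radius = 30 * parsec          # size of the stellar system (illustrative value)

# Order-of-magnitude mass needed to hold the stars together.
dynamical_mass = sigma**2 * radius / G
print(f"dynamical mass ~ {dynamical_mass / m_sun:.1e} solar masses")

If the stars themselves add up to only a few hundred or a few thousand solar masses, almost all of that dynamical mass has to be attributed to something unseen.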

“Essentially, they are nearby clouds of dark matter with just enough stars to detect them,” says Keith Bechtol, a postdoctoral researcher at the University of Wisconsin-Madison and a member of the Dark Energy Survey.

Through these methods of identification (and thanks to the new capabilities of digital cameras), the Sloan Digital Sky Survey kicked off the modern hunt for dwarf galaxies in the early 2000s. The survey, which looked at the northern part of the sky, more than doubled the number of known satellite dwarf galaxies from 11 to 26 galaxies between 2005 and 2010. Now DES, along with some other surveys, is leading the search. In the last few years DES and its Dark Energy Camera, which maps the southern part of the sky, brought the total to 50 probable galaxies. 

Dark matter mysteries

Dwarf galaxies serve as ideal tools for studying dark matter. While scientists haven’t yet directly discovered dark matter, in studying dwarf galaxies they’ve been able to draw more and more conclusions about how it behaves and, therefore, what it could be. 

“Dwarf galaxies tell us about the small-scale structure of how dark matter clumps,” says Alex Drlica-Wagner of Fermi National Accelerator Laboratory, one of the leaders of the DES analysis. “They are excellent probes for cosmology at the smallest scales.”

Dwarf galaxies also present useful targets for gamma-ray telescopes, which could tell us more about how dark matter particles behave. Some models posit that dark matter is its own antiparticle. If that were so, it could annihilate when it meets other dark matter particles, releasing gamma rays. Scientists are looking for those gamma rays. 

But while studying these neighbors provides clues about the nature of dark matter, they also raise more and more questions. The prevailing cosmological theory of dark matter has accurately described much of what scientists observe in the universe. But when scientists looked to our neighbors, some of the predictions didn’t hold up.

The number of galaxies appears to be lower than expected from calculations, for example, and those that are around seem to be too small. While some of the solutions to these problems may lie in the capabilities of the telescopes or the simulations themselves, we may also need to reconsider the way we think dark matter interacts. 

The elements of the neighborhood

Dwarf galaxies don’t just tell us about dark matter: They also present a window into the ancient past. Most dwarf galaxies’ stars formed more than 10 billion years ago, not long after the Big Bang. Our current understanding of galaxy formation, according to Bechtol, is that after small galaxies formed, some of them merged over billions of years into larger galaxies. 

If we didn’t have these ancient neighbors, we’d have to peer all the way across the universe to see far enough back in time to glimpse galaxies that formed soon after the big bang. While the Milky Way and other large galaxies bustle with activity and new star formation, the satellite galaxies remain mostly static—snapshots of galaxies soon after their birth. 

“They’ve mostly been sitting there, waiting for us to study them,” says Josh Simon, an astronomer at the Carnegie Institution for Science.

The abundance of certain elements in stars in dwarf galaxies can tell scientists about the conditions and mechanisms that produce them. Scientists can also look to the elements to learn about even older stars. 

The first generation of stars are thought to have looked very different than those formed afterward. When they exploded as supernovae, they released new elements that would later appear in stars of the next generation, some of which are found in our neighboring galaxies.

“They do give us the most direct fingerprint we can get as to what those first stars might have been like,” Simon says.

Scientists have learned a lot about our satellites in just the past few years, but there’s always more to learn. DES will begin its fourth year of data collection in August. Several other surveys are also underway. And the Large Synoptic Survey Telescope, an ambitious international project currently under construction in Chile, will begin operating fully in 2022. LSST will create a more detailed map than any of the previous surveys’ combined. 

 


Use this interactive graphic to explore our neighboring galaxies. Click on the abbreviated name of the galaxy to find out more about it. 

The size of each galaxy is listed in parsecs, a unit equal to about 3.26 light-years or 19 trillion miles. The distance from the Milky Way is described in kiloparsecs, or 1000 parsecs. The luminosity of each galaxy, L⊙, is explained in terms of how much energy it emits compared to our sun. Right ascension and declination are astronomical coordinates that specify the galaxy's location as viewed from Earth. 

Read extra descriptive text about some of our most notable neighboring galaxies (the abbreviations for which appear in darker red).

 

 
 


by Molly Olmstead at August 30, 2016 01:00 PM

August 26, 2016

Symmetrybreaking - Fermilab/SLAC

Winners declared in SUSY bet

Physicists exchanged cognac in Copenhagen at the conclusion of a bet about supersymmetry and the LHC.

As a general rule, theorist Nima Arkani-Hamed does not get involved in physics bets.

“Theoretical physicists like to take bets on all kinds of things,” he says. “I’ve always taken the moral high ground… Nature decides. We’re all in pursuit of the truth. We’re all on the same side.”

But sometimes you’re in Copenhagen for a conference, and you’re sitting in a delightfully unusual restaurant—one that sort of reminds you of a cave—and a fellow physicist gives you the opportunity to get in on a decade-old wager about supersymmetry and the Large Hadron Collider. Sometimes then, you decide to bend your rule. “It was just such a jovial atmosphere, I figured, why not?”

That’s how Arkani-Hamed found himself back in Copenhagen this week, passing a 1000-Krone bottle of cognac to one of the winners of the bet, Director of the Niels Bohr International Academy Poul Damgaard.

Arkani-Hamed had wagered that experiments at the LHC would find evidence of supersymmetry by the arbitrary date of June 16, 2016. Supersymmetry, SUSY for short, is a theory that predicts the existence of partner particles for the members of the Standard Model of particle physics.

The deadline was not met. But in a talk at the Niels Bohr Institute, Arkani-Hamed pointed out that the end of the gamble does not equal the end of the theory.

“I was not a good student in school,” Arkani-Hamed explained. “One of my big problems was not getting homework done on time. It was a constant battle with my teachers… Just give me another week! It’s kind of like the bet.”

He pointed out that so far the LHC has gathered just 1 percent of the total amount of data it aims to collect.

With that data, scientists can indeed rule out the most vanilla form of supersymmetry. But that’s not the version of supersymmetry Arkani-Hamed would expect the LHC to find anyway, he said.

It is still possible LHC experiments will find evidence of other SUSY models—including the one Arkani-Hamed prefers, called split SUSY, which adds superpartners to just half of the Standard Model’s particles. And if LHC scientists don’t find evidence of SUSY, Arkani-Hamed pointed out, the theoretical problems it aimed to solve will remain an exciting challenge for the next generation of theorists to figure out.

“I think Winston Churchill said that in victory you should be magnanimous,” Damgaard said after Arkani-Hamed’s talk. “I know also he said that in defeat you should be defiant. And that’s certainly Nima.”

Arkani-Hamed shrugged. But it turned out he was not the only optimist in the room. Panelist Yonit Hochberg of the University of California, Berkeley conducted an informal poll of attendees. She found that the majority still think that in the next 20 years, as data continues to accumulate, experiments at the LHC will discover something new.


by Kathryn Jepsen at August 26, 2016 09:48 PM

Quantum Diaries

The Delirium over Beryllium

This post is cross-posted from ParticleBites.

Article: Particle Physics Models for the 17 MeV Anomaly in Beryllium Nuclear Decays
Authors: J.L. Feng, B. Fornal, I. Galon, S. Gardner, J. Smolinsky, T. M. P. Tait, F. Tanedo
Reference: arXiv:1608.03591 (Submitted to Phys. Rev. D)
Also featuring the results from:
— Gulyás et al., “A pair spectrometer for measuring multipolarities of energetic nuclear transitions” (description of detector; 1504.00489; NIM)
— Krasznahorkay et al., “Observation of Anomalous Internal Pair Creation in 8Be: A Possible Indication of a Light, Neutral Boson” (experimental result; 1504.01527; PRL version; note PRL version differs from arXiv)
— Feng et al., “Protophobic Fifth-Force Interpretation of the Observed Anomaly in 8Be Nuclear Transitions” (phenomenology; 1604.07411; PRL)

Editor’s note: the author is a co-author of the paper being highlighted. 

Recently there’s some press (see links below) regarding early hints of a new particle observed in a nuclear physics experiment. In this bite, we’ll summarize the result that has raised the eyebrows of some physicists, and the hackles of others.

A crash course on nuclear physics

Nuclei are bound states of protons and neutrons. They can have excited states analogous to the excited states of atoms, which are bound states of nuclei and electrons. The particular nucleus of interest is beryllium-8, which has four neutrons and four protons, which you may know from the triple alpha process. There are three nuclear states to be aware of: the ground state, the 18.15 MeV excited state, and the 17.64 MeV excited state.

Beryllium-8 excited nuclear states. The 18.15 MeV state (red) exhibits an anomaly. Both the 18.15 MeV and 17.64 states decay to the ground through a magnetic, p-wave transition. Image adapted from Savage et al. (1987).

Most of the time the excited states fall apart into a lithium-7 nucleus and a proton. But sometimes, these excited states decay into the beryllium-8 ground state by emitting a photon (γ-ray). Even more rarely, these states can decay to the ground state by emitting an electron–positron pair from a virtual photon: this is called internal pair creation and it is these events that exhibit an anomaly.

The beryllium-8 anomaly

Physicists at the Atomki nuclear physics institute in Hungary were studying the nuclear decays of excited beryllium-8 nuclei. The team, led by Attila J. Krasznahorkay, produced beryllium excited states by bombarding a lithium-7 nucleus with protons.

Preparation of beryllium-8 excited state

Beryllium-8 excited states are prepared by bombarding lithium-7 with protons.

The proton beam is tuned to very specific energies so that one can ‘tickle’ specific beryllium excited states. When the protons have around 1.03 MeV of kinetic energy, they excite lithium into the 18.15 MeV beryllium state. This has two important features:

  1. Picking the proton energy allows one to only produce a specific excited state so one doesn’t have to worry about contamination from decays of other excited states.
  2. Because the 18.15 MeV beryllium nucleus is produced at resonance, one has a very high yield of these excited states. This is very good when looking for very rare decay processes like internal pair creation.
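The quoted beam energy is just resonance kinematics. As a rough sanity check (using a Q-value of about 17.25 MeV for proton + lithium-7 → beryllium-8 taken from standard mass tables, plus the well-known 0.441 MeV resonance for comparison; neither of those two numbers comes from the papers discussed here):

# Excitation energy of the beryllium-8 state reached when a proton of lab
# kinetic energy E_p hits a lithium-7 nucleus at rest: the Q-value plus the
# center-of-mass share of the beam energy (non-relativistic approximation).
q_value_mev = 17.25           # p + Li-7 -> Be-8, assumed from standard mass tables
cm_fraction = 7.0 / 8.0       # fraction of the lab kinetic energy available in the CM

for proton_ke_mev in (0.441, 1.03):
    excitation = q_value_mev + cm_fraction * proton_ke_mev
    print(f"E_p = {proton_ke_mev:.3f} MeV  ->  Be-8 excitation of about {excitation:.2f} MeV")

The 1.03 MeV beam lands on the 18.15 MeV state, while the lower, 0.441 MeV resonance populates the 17.64 MeV state mentioned above.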

What one expects is that most of the electron–positron pairs have a small opening angle, with the number of events decreasing smoothly at larger opening angles.


Expected distribution of opening angles for ordinary internal pair creation events. Each line corresponds to a nuclear transition that is electric (E) or magnetic (M) with a given orbital quantum number, l. The beryllium transitions that we’re interested in are mostly M1. Adapted from Gulyás et al. (1504.00489).

Instead, the Atomki team found an excess of events with large electron–positron opening angle. In fact, even more intriguing: the excess occurs around a particular opening angle (140 degrees) and forms a bump.


Number of events (dN/dθ) for different electron–positron opening angles and plotted for different excitation energies (Ep). For Ep=1.10 MeV, there is a pronounced bump at 140 degrees which does not appear to be explainable from the ordinary internal pair conversion. This may be suggestive of a new particle. Adapted from Krasznahorkay et al., PRL 116, 042501.

Here’s why a bump is particularly interesting:

  1. The distribution of ordinary internal pair creation events is smoothly decreasing and so this is very unlikely to produce a bump.
  2. Bumps can be signs of new particles: if there is a new, light particle that can facilitate the decay, one would expect a bump at an opening angle that depends on the new particle mass.

Schematically, the new particle interpretation looks like this:


Schematic of the Atomki experiment and new particle (X) interpretation of the anomalous events. In summary: protons of a specific energy bombard stationary lithium-7 nuclei and excite them to the 18.15 MeV beryllium-8 state. These decay into the beryllium-8 ground state. Some of these decays are mediated by the new X particle, which then decays in to electron–positron pairs of a certain opening angle that are detected in the Atomki pair spectrometer detector. Image from 1608.03591.

As an exercise for those with a background in special relativity, one can use the relation \((p_{e^+} + p_{e^-})^2 = m_X^2\) to prove the result:

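Schematically, working in natural units and dropping the electron mass in the last step (the electrons and positrons here are highly relativistic),

\[
m_X^2 \;=\; (p_{e^+} + p_{e^-})^2 \;=\; 2m_e^2 + 2\left(E_{+}E_{-} - |\vec p_{+}||\vec p_{-}|\cos\theta\right) \;\approx\; 2\,E_{+}E_{-}\left(1-\cos\theta\right).
\]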

This relates the mass of the proposed new particle, X, to the opening angle θ and the energies E of the electron and positron. The opening angle bump would then be interpreted as a new particle with mass of roughly 17 MeV. To match the observed number of anomalous events, the rate at which the excited beryllium decays via the X boson must be 6×10^-6 times the rate at which it goes into a γ-ray.

The anomaly has a significance of 6.8σ. This means that it’s highly unlikely to be a statistical fluctuation, as the 750 GeV diphoton bump appears to have been. Indeed, the conservative bet would be some not-understood systematic effect, akin to the 130 GeV Fermi γ-ray line.

The beryllium that cried wolf?

Some physicists are concerned that beryllium may be the ‘boy that cried wolf,’ and point to papers by the late Fokke de Boer as early as 1996 and all the way to 2001. de Boer made strong claims about evidence for a new 10 MeV particle in the internal pair creation decays of the 17.64 MeV beryllium-8 excited state. These claims didn’t pan out, and in fact the instrumentation paper by the Atomki experiment rules out that original anomaly.

The proposed evidence for “de Boeron” is shown below:

Beryllium

The de Boer claim for a 10 MeV new particle. Left: distribution of opening angles for internal pair creation events in an E1 transition of carbon-12. This transition has similar energy splitting to the beryllium-8 17.64 MeV transition and shows good agreement with the expectations; as shown by the flat “signal – background” on the bottom panel. Right: the same analysis for the M1 internal pair creation events from the 17.64 MeV beryllium-8 states. The “signal – background” now shows a broad excess across all opening angles. Adapted from de Boer et al. PLB 368, 235 (1996).

When the Atomki group studied the same 17.64 MeV transition, they found that a key background component—subdominant E1 decays from nearby excited states—dramatically improved the fit and were not included in the original de Boer analysis. This is the last nail in the coffin for the proposed 10 MeV “de Boeron.”

However, the Atomki group also highlight how their new anomaly in the 18.15 MeV state behaves differently. Unlike the broad excess in the de Boer result, the new excess is concentrated in a bump. There is no known way in which additional internal pair creation backgrounds can contribute to add a bump in the opening angle distribution; as noted above: all of these distributions are smoothly falling.

The Atomki group goes on to suggest that the new particle appears to fit the bill for a dark photon, a reasonably well-motivated copy of the ordinary photon that differs in its overall interaction strength and in having a non-zero (17 MeV?) mass.

Theory part 1: Not a dark photon

With the Atomki result published and peer reviewed in Physical Review Letters, the game was afoot for theorists to understand how it would fit into a theoretical framework like the dark photon. A group from UC Irvine, University of Kentucky, and UC Riverside found that actually, dark photons have a hard time fitting the anomaly simultaneously with other experimental constraints. In the visual language of this recent ParticleBite, the situation was this:

Beryllium-8

It turns out that the minimal model of a dark photon cannot simultaneously explain the Atomki beryllium-8 anomaly without running afoul of other experimental constraints. Image adapted from this ParticleBite.

The main reason for this is that a dark photon with mass and interaction strength to fit the beryllium anomaly would necessarily have been seen by the NA48/2 experiment. This experiment looks for dark photons in the decay of neutral pions (π0). These pions typically decay into two photons, but if there’s a 17 MeV dark photon around, some fraction of those decays would go into dark-photon — ordinary-photon pairs. The non-observation of these unique decays rules out the dark photon interpretation.

The theorists then decided to “break” the dark photon theory in order to try to make it fit. They generalized the types of interactions that a new photon-like particle, X, could have, allowing protons, for example, to have completely different charges than electrons rather than having exactly opposite charges. Doing this does gross violence to the theoretical consistency of a theory—but the goal was just to see what a new particle interpretation would have to look like. They found that if a new photon-like particle talked to neutrons but not protons—that is, if the new force were protophobic—then a theory might hold together.

Schematic description of how model-builders “hacked” the dark photon theory to fit the beryllium anomaly while remaining consistent with other experiments. This hack isn’t pretty—and indeed, comes at the cost of potentially invalidating the mathematical consistency of the theory—but the exercise demonstrates the target for how a complete theory might have to behave. Image adapted from this ParticleBite.

Theory appendix: pion-phobia is protophobia

Editor’s note: what follows is for readers with some physics background interested in a technical detail; others may skip this section.

How does a new particle that is allergic to protons avoid the neutral pion decay bounds from NA48/2? Pions decay into pairs of photons through the well-known triangle-diagrams of the axial anomaly. The decay into photon–dark-photon pairs proceeds through similar diagrams. The goal is then to make sure that these diagrams cancel.

A cute way to look at this is to assume that at low energies, the relevant particles running in the loop aren’t quarks, but rather nucleons (protons  and neutrons). In fact, since only the proton can talk to the photon, one only needs to consider proton loops. Thus if the new photon-like particle, X, doesn’t talk to protons, then there’s no diagram for the pion to decay into γX. This would be great if the story weren’t completely wrong.

Avoiding NA48

Avoiding NA48/2 bounds requires that the new particle, X, is pion-phobic. It turns out that this is equivalent to X being protophobic. The correct way to see this is on the left, making sure that the contribution of up-quark loops cancels the contribution from down-quark loops. A slick (but naively completely wrong) calculation is on the right, arguing that effectively only protons run in the loop.

The correct way of seeing this is to treat the pion as a quantum superposition of an up–anti-up and down–anti-down bound state, and then make sure that the X charges are such that the contributions of the two states cancel. The resulting charges turn out to be protophobic.
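To make the cancellation explicit, here is a schematic version of the condition, writing ε_q for the X charge of quark q in units of e (a sketch of the argument rather than a quotation from the papers):

```latex
% Schematic anomaly amplitude for \pi^0 \to \gamma X, using \pi^0 \sim (u\bar{u} - d\bar{d})/\sqrt{2}:
\[
\mathcal{A}(\pi^0 \to \gamma X)
  \;\propto\; N_c \left( Q_u \varepsilon_u - Q_d \varepsilon_d \right)
  \;=\; N_c \left( \tfrac{2}{3}\,\varepsilon_u + \tfrac{1}{3}\,\varepsilon_d \right)
  \;=\; \tfrac{N_c}{3} \left( 2\varepsilon_u + \varepsilon_d \right)
  \;=\; \tfrac{N_c}{3}\,\varepsilon_p .
\]
```

Demanding that this amplitude vanish (pion-phobia) is therefore the same as demanding ε_p = 2ε_u + ε_d = 0, i.e. that the proton (uud) carries no X charge: protophobia.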

The fact that the “proton-in-the-loop” picture gives the correct charges, however, is no coincidence. Indeed, this was precisely how Jack Steinberger calculated the correct pion decay rate. The key here is whether one treats the quarks/nucleons linearly or non-linearly in chiral perturbation theory. The relation to the Wess-Zumino-Witten term—which is what really encodes the low-energy interaction—is carefully explained in chapter 6a.2 of Georgi’s revised Weak Interactions.

Theory part 2: Not a spin-0 particle

The above considerations focus on a new particle with the same spin and parity as a photon (spin-1, parity odd). Another result of the UCI study was a systematic exploration of other possibilities. They found that the beryllium anomaly could not be explained by spin-0 particles. For a parity-even, spin-0 particle (a scalar, or “dark Higgs”), one cannot simultaneously conserve angular momentum and parity in the decay of the excited beryllium-8 state. (Parity-violating effects are negligible at these energies.)

Parity

Parity and angular momentum conservation prohibit a “dark Higgs” (parity even scalar) from mediating the anomaly.
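For concreteness, the selection-rule argument is just a couple of lines, using the quantum numbers quoted above (the 18.15 MeV state has J^P = 1^+ and the ground state 0^+); this is a sketch rather than the paper's own notation:

```latex
% For {}^{8}\mathrm{Be}^{*}(1^{+}) \to {}^{8}\mathrm{Be}(0^{+}) + X with a spin-0 X,
% angular momentum conservation forces the orbital angular momentum to be L = 1, so
\[
P_{\mathrm{final}}
  = P\!\left({}^{8}\mathrm{Be}\right) \times P(X) \times (-1)^{L}
  = (+1) \times P(X) \times (-1)
  = -\,P(X) ,
\]
% which matches the initial parity (+1) only if P(X) = -1: a 0^{+} "dark Higgs" is forbidden,
% while a 0^{-} pseudoscalar passes this particular test.
```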

For a parity-odd pseudoscalar, the bounds on axion-like particles at 20 MeV suffocate any reasonable coupling. Measured in terms of the pseudoscalar–photon–photon coupling (which has dimensions of inverse GeV), this interaction is ruled out down to the inverse Planck scale.

Axion-like particle bounds

Bounds on axion-like particles exclude a 20 MeV pseudoscalar with couplings to photons stronger than the inverse Planck scale. Adapted from 1205.2671 and 1512.03069.

Additional possibilities include:

  • Dark Z bosons, cousins of the dark photon with spin-1 but indeterminate parity. These are strongly constrained by atomic parity violation.
  • Axial vectors, spin-1 bosons with positive parity. These remain a theoretical possibility, though their unknown nuclear matrix elements make it difficult to write a predictive model. (See section II.D of 1608.03591.)

Theory part 3: Nuclear input

The plot thickens when one also includes results from nuclear theory. Recent results from Saori Pastore, Bob Wiringa, and collaborators point out a very important fact: the 18.15 MeV beryllium-8 state that exhibits the anomaly and the 17.64 MeV state which does not are actually closely related.

Recall (e.g. from the first figure at the top) that the 18.15 MeV and 17.64 MeV states are both spin-1 and parity-even. They differ in mass and in one other key aspect: the 17.64 MeV state carries isospin charge, while the 18.15 MeV state and ground state do not.

Isospin is the nuclear symmetry that relates protons to neutrons and is tied to electroweak symmetry in the full Standard Model. At nuclear energies, isospin charge is approximately conserved. This brings us to the following puzzle:

If the new particle has mass around 17 MeV, why do we see its effects in the 18.15 MeV state but not the 17.64 MeV state?

Naively, if the emitted new particle, X, carries no isospin charge, then isospin conservation prohibits the decay of the 17.64 MeV state through emission of an X boson. However, the Pastore et al. result tells us that the isospin-neutral and isospin-charged states actually mix quantum mechanically, so that the observed 18.15 and 17.64 MeV states are mixtures of iso-neutral and iso-charged states. In fact, this mixing is rather large, with a mixing angle of around 10 degrees!
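Schematically, with a single mixing angle θ ≈ 10° (a sketch in the spirit of the Pastore et al. result, not their full formalism):

```latex
\[
|18.15~\mathrm{MeV}\rangle \simeq \cos\theta\,|T{=}0\rangle + \sin\theta\,|T{=}1\rangle ,
\qquad
|17.64~\mathrm{MeV}\rangle \simeq -\sin\theta\,|T{=}0\rangle + \cos\theta\,|T{=}1\rangle .
\]
% Both physical states contain an isoscalar piece, so emission of an isoscalar X is not
% strictly forbidden from either of them.
```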

The result of this is that one cannot invoke isospin conservation to explain the non-observation of an anomaly in the 17.64 MeV state. In fact, the only way to avoid this is to assume that the mass of the X particle is on the heavier side of the experimentally allowed range. The rate for emission goes like the 3-momentum cubed (see section II.E of 1608.03591), so a small increase in the mass can suppress the rate of emission from the lower-energy 17.64 MeV state by a lot.
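Here is a rough numerical sketch of that kinematic suppression, under the simplifying assumptions that nuclear recoil is negligible and the full transition energy is available to X; the masses scanned are representative values only:

```python
import math

def p_emitted(delta_e_mev, m_x_mev):
    """Momentum of an X of mass m_x emitted in a transition releasing delta_e (recoil neglected)."""
    return math.sqrt(max(delta_e_mev ** 2 - m_x_mev ** 2, 0.0))

for m_x in (16.7, 17.0, 17.3):
    p_1815 = p_emitted(18.15, m_x)   # emission from the 18.15 MeV state
    p_1764 = p_emitted(17.64, m_x)   # emission from the 17.64 MeV state
    print(m_x, round((p_1764 / p_1815) ** 3, 2))
# 16.7 -> ~0.51, 17.0 -> ~0.41, 17.3 -> ~0.25: pushing m_X upward rapidly chokes off
# X emission from the 17.64 MeV state relative to the 18.15 MeV state.
```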

The UCI collaboration of theorists went further and extended the Pastore et al. analysis to include a phenomenological parameterization of explicit isospin violation. Independent of the Atomki anomaly, they found that including isospin violation improved the fit for the 18.15 MeV and 17.64 MeV electromagnetic decay widths within the Pastore et al. formalism. The results of including all of the isospin effects end up changing the particle physics story of the Atomki anomaly significantly:

Parameter fits

The rate of X emission (colored contours) as a function of the X particle’s couplings to protons (horizontal axis) versus neutrons (vertical axis). The best fit for a 16.7 MeV new particle is the dashed line in the teal region. The vertical band is the region allowed by the NA48/2 experiment. Solid lines show the dark photon and protophobic limits. Left: the case for perfect (unrealistic) isospin. Right: the case when isospin mixing and explicit violation are included. Observe that incorporating realistic isospin happens to have only a modest effect in the protophobic region. Figure from 1608.03591.

The results of the nuclear analysis are thus that:

  1. An interpretation of the Atomki anomaly in terms of a new particle tends to push for a slightly heavier X mass than the reported best fit. (Remark: the Atomki paper does not do a combined fit for the mass and coupling nor does it report the difficult-to-quantify systematic errors associated with the fit. This information is important for understanding the extent to which the X mass can be pushed to be heavier.)
  2. The effects of isospin mixing and violation are important to include; especially as one drifts away from the purely protophobic limit.

Theory part 4: towards a complete theory

The theoretical structure presented above gives a framework to do phenomenology: fitting the observed anomaly to a particle physics model and then comparing that model to other experiments. This, however, doesn’t guarantee that a nice—or even self-consistent—theory exists that can stretch over the scaffolding.

Indeed, a few challenges appear:

  • The isospin mixing discussed above means the X mass must be pushed to the heavier values allowed by the Atomki observation.
  • The “protophobic” limit is not obviously anomaly-free: simply asserting that known particles have arbitrary charges does not generically produce a mathematically self-consistent theory.
  • Atomic parity violation constraints require that the X couple in the same way to left-handed and right-handed matter. The left-handed coupling implies that X must also talk to neutrinos, which opens up new experimental constraints.

The Irvine/Kentucky/Riverside collaboration first note the need for a careful experimental analysis of the actual mass ranges allowed by the Atomki observation, treating the new particle mass and coupling as simultaneously free parameters in the fit.

Next, they observe that protophobic couplings can be relatively natural. Indeed: the Standard Model Z boson is approximately protophobic at low energies—a fact well known to those hunting for dark matter with direct detection experiments. For exotic new physics, one can engineer protophobia through a phenomenon called kinetic mixing where two force particles mix into one another. A tuned admixture of electric charge and baryon number, (Q-B), is protophobic.
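As a quick sanity check of why that combination is protophobic (purely illustrative; the charge assignments are just the familiar Standard Model quantum numbers):

```python
# Electric charge Q (units of e) and baryon number B for the relevant low-energy states.
particles = {"proton": (+1, +1), "neutron": (0, +1), "electron": (-1, 0)}

for name, (Q, B) in particles.items():
    print(f"{name:8s}  Q - B = {Q - B:+d}")
# proton    Q - B = +0  -> an X coupled to Q - B ignores protons
# neutron   Q - B = -1  -> ...but talks to neutrons
# electron  Q - B = -1  -> ...and to electrons, as needed for the e+e- signal
```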

Baryon number, however, is an anomalous global symmetry; this means that one has to work hard to make a baryon-number boson that mixes with the photon (see 1304.0576 and 1409.8165 for examples). Another alternative is for the photon to kinetically mix not with a baryon-number boson, but with one coupled to the anomaly-free combination of “baryon minus lepton number,” giving the combination Q-(B-L). This then forces one to apply additional model-building modules to deal with the neutrino interactions that come along with this scenario.

In the language of the ‘model building blocks’ above, the result of this process looks schematically like this:

Model building block

A complete theory is mathematically self-consistent and satisfies existing constraints. The additional bells and whistles required for consistency make additional predictions for experimental searches. Pieces of the theory can sometimes be used to address other anomalies.

The theory collaboration presented examples of the two cases, and pointed out how the additional ‘bells and whistles’ required may provide additional experimental handles to test these hypotheses. These are simple existence proofs for how complete models may be constructed.

What’s next?

We have delved rather deeply into the theoretical considerations of the Atomki anomaly. The analysis revealed some unexpected features of the types of new particles that could explain the anomaly (dark photon-like, but not exactly a dark photon), the role of nuclear effects (isospin mixing and breaking), and the kinds of features a complete theory needs to have to fit everything (be careful with anomalies and neutrinos). The single most important next step, however, is and has always been experimental verification of the result.

While the Atomki experiment continues to run with an upgraded detector, what’s really exciting is that a swath of experiments that are either ongoing or under construction will be able to probe the exact interactions required by the new particle interpretation of the anomaly. This means that the result can be independently verified or excluded within a few years. A selection of upcoming experiments is highlighted in section IX of 1608.03591:

Experimental searches

Other experiments that can probe the new particle interpretation of the Atomki anomaly. The horizontal axis is the new particle mass, the vertical axis is its coupling to electrons (normalized to the electric charge). The dark blue band is the target region for the Atomki anomaly. Figure from 1608.03591; assuming 100% branching ratio to electrons.

We highlight one particularly interesting search: recently a joint team of theorists and experimentalists at MIT proposed a way for the LHCb experiment to search for dark photon-like particles with masses and interaction strengths that were previously unexplored. The proposal makes use of LHCb’s ability to pinpoint the production position of charged-particle pairs and the copious amounts of D mesons that will be produced in Run 3 of the LHC. As seen in the figure above, the LHCb reach with this search thoroughly covers the Atomki anomaly region.
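To see why pinpointing the production position helps, here is a rough estimate of how far such a particle would travel before decaying. It uses the textbook kinetic-mixing width for X → e+e− and an assumed, ballpark coupling; neither is taken from the LHCb proposal itself:

```python
import math

ALPHA, M_E, HBAR_C = 1 / 137.036, 0.511, 197.327e-15  # fine-structure constant, MeV, MeV*m

def ctau_metres(m_x, eps):
    """Proper decay length of X -> e+ e- for mass m_x (MeV) and kinetic mixing eps (assumed values)."""
    beta = math.sqrt(1.0 - 4.0 * M_E ** 2 / m_x ** 2)
    width = (ALPHA * eps ** 2 / 3.0) * m_x * beta * (1.0 + 2.0 * M_E ** 2 / m_x ** 2)
    return HBAR_C / width

print(ctau_metres(17.0, 1e-3))   # ~5e-6 m; boosts of order a few hundred stretch this toward the mm scale
```

Flight distances in this range are exactly where precise vertexing of the e+e− pair becomes a powerful handle.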

Implications

So where we stand is this:

  • There is an unexpected result in a nuclear experiment that may be interpreted as a sign for new physics.
  • The next steps in this story are independent experimental cross-checks; the threshold for a ‘discovery’ is whether another experiment can verify these results.
  • Meanwhile, a theoretical framework for understanding the results in terms of a new particle has been built and is ready-and-waiting. Some of the results of this analysis are important for faithful interpretation of the experimental results.

What if it’s nothing?

This is the conservative take—and indeed, we may well find that in a few years, the possibility that Atomki was observing a new particle will be completely dead. Or perhaps a source of systematic error will be identified and the bump will go away. That’s part of doing science.

Meanwhile, there are some important takeaways in this scenario. First is the reminder that the search for light, weakly coupled particles is an important frontier in particle physics. Second, this particular anomaly offers some neat lessons, such as a demonstration of how effective field theory can be applied to nuclear physics (see e.g. chapter 3.1.2 of the new book by Petrov and Blechman) and of how tweaking our models of new particles can avoid troublesome experimental bounds. Finally, it’s a nice example of how particle physics and nuclear physics are not-too-distant cousins and of how progress can be made in particle–nuclear collaborations: one of the authors of the Irvine-led study (Susan Gardner) is a bona fide nuclear theorist who was on sabbatical from the University of Kentucky.

What if it’s real?

This is a big “what if.” On the other hand, a 6.8σ effect is extremely unlikely to be a statistical fluctuation, and there is no known nuclear physics that produces a new-particle-like bump given the analysis presented by the Atomki experimentalists.
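For a sense of scale, the one-sided Gaussian tail probability of a 6.8σ excess is tiny; this is a statement about statistics only and says nothing about possible systematic effects:

```python
from math import erfc, sqrt

p_value = 0.5 * erfc(6.8 / sqrt(2))  # one-sided Gaussian tail probability for 6.8 sigma
print(p_value)                       # ~5e-12: a pure statistical fluke is essentially excluded
```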

The threshold for “real” is independent verification. If other experiments can confirm the anomaly, then this could be a huge step in our quest to go beyond the Standard Model. While this type of particle is unlikely to help with the Hierarchy problem of the Higgs mass, it could be a sign for other kinds of new physics. One example is the grand unification of the electroweak and strong forces; some of the ways in which these forces unify imply the existence of an additional force particle that may be light and may even have the types of couplings suggested by the anomaly.

Could it be related to other anomalies?

The Atomki anomaly isn’t the first particle physics curiosity to show up at the MeV scale. While none of these other anomalies are necessarily related to the type of particle required for the Atomki result (they may not even be compatible!), it is helpful to remember that the MeV scale may still have surprises in store for us.

  • The KTeV anomaly: The rate at which neutral pions decay into electron–positron pairs appears to be off from the expectations based on chiral perturbation theory. In 0712.0007, a group of theorists found that this discrepancy could be fit to a new particle with axial couplings. If one fixes the mass of the proposed particle to be 20 MeV, the resulting couplings happen to be in the same ballpark as those required for the Atomki anomaly. The important caveat here is that parameters for an axial vector to fit the Atomki anomaly are unknown, and mixed vector–axial states are severely constrained by atomic parity violation.
KTeV anomaly

The KTeV anomaly interpreted as a new particle, U. From 0712.0007.

  • The anomalous magnetic moment of the muon and the cosmic lithium problem: much of the progress in the field of light, weakly coupled forces comes from Maxim Pospelov. The anomalous magnetic moment of the muon, (g-2)μ, has a long-standing discrepancy from the Standard Model (see e.g. this blog post). While this may come from an error in the very, very intricate calculation and the subtle ways in which experimental data feed into it, Pospelov (and also Fayet) noted that the shift may come from a light (in the 10s of MeV range!), weakly coupled new particle like a dark photon. Similarly, Pospelov and collaborators showed that a new light particle in the 1-20 MeV range may help explain another longstanding mystery: the surprising lack of lithium in the universe (APS Physics synopsis).

Could it be related to dark matter?

A lot of recent progress in dark matter has revolved around the possibility that in addition to dark matter, there may be additional light particles that mediate interactions between dark matter and the Standard Model. If these particles are light enough, they can change how we expect to find dark matter in sometimes surprising ways. One interesting avenue is called self-interacting dark matter and is based on the observation that these light force carriers can deform the dark matter distribution in galaxies in ways that seem to fit astronomical observations. A 20 MeV dark photon-like particle even fits the profile of what’s required by the self-interacting dark matter paradigm, though it is very difficult to make such a particle consistent with both the Atomki anomaly and the constraints from direct detection.
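One simple (and purely illustrative) way to see why the mediator's lightness matters is the range of the force it carries, set by its reduced Compton wavelength λ = ħc/(mc²):

```python
HBAR_C_MEV_FM = 197.327   # hbar*c in MeV*fm

for name, mass_mev in [("17 MeV mediator", 17.0), ("Z boson", 91.19e3)]:
    print(name, round(HBAR_C_MEV_FM / mass_mev, 4), "fm")
# A ~17 MeV mediator has a range of roughly 10 fm, thousands of times longer than a
# weak-scale mediator (~0.002 fm); qualitatively, that longer range is what allows
# light force carriers to enhance dark matter self-interactions.
```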

Should I be excited?

Given all of the caveats listed above, some feel that it is too early to be in “drop everything, this is new physics” mode. Others may take this as a hint that’s worth exploring further, as has been done for many anomalies in the recent past. For researchers, it is prudent to be cautious, and it is paramount to be careful; but so long as one does both, then being excited about a new possibility is part of what makes our job fun.

For the general public, the tentative hints of new physics that pop up (whether it’s the Atomki anomaly, the 750 GeV diphoton bump, a GeV bump from the galactic center, γ-ray lines at 3.5 keV and 130 GeV, or penguins at LHCb) are signs that we’re making use of all of the data available to search for new physics. Sometimes these hopes fizzle away; often they leave behind useful lessons about physics and directions forward. Maybe one of these days an anomaly will stick and show us the way forward.

Further Reading

Here is some of the popular-level press coverage of the Atomki result. See the top of this ParticleBite for references to the primary literature.

UC Riverside Press Release
UC Irvine Press Release
Nature News
Quanta Magazine
Quanta Magazine: Abstractions
Symmetry Magazine
Los Angeles Times

by Flip Tanedo at August 26, 2016 01:52 AM
