Particle Physics Planet

September 30, 2016

Christian P. Robert - xi'an's og

Pu Erh

Two massive pancakes of Pu Erh (Pǔ’ěr) [fermented] tea my student Changye brought me back from Yunnan and that I am looking forward to tasting!

Filed under: pictures, Travel Tagged: black tea, China, fermented tea, Pu Erh tea, Yunnan

by xi'an at September 30, 2016 10:16 PM

Clifford V. Johnson - Asymptotia

Super-Strong…?

Are you going to watch the Luke Cage series that debuts today on Netflix? I probably will at some point (I've got several decades-old reasons, and also it was set up well in the excellent Jessica Jones last year).... but not soon as I've got far too many deadlines. Here's a related item: Using the Luke Cage character as a jumping-off point, physicist Martin Archer has put together a very nice short video about the business of strong and tough (not the same thing) materials in the real world.


Have a look if you want to appreciate the nuances, and learn a bit about what's maybe just over the horizon for new amazing materials that might become part of our everyday lives. Video embed below: [...] Click to continue reading this post

The post Super-Strong…? appeared first on Asymptotia.

by Clifford at September 30, 2016 07:40 PM

Peter Coles - In the Dark

Friday Music Quiz: The Yardbird Suite

Not much time to write today so I thought I’d put up a bit of music to end the week. This is a classic from 1946, featuring Charlie Parker leading a band that included a very young Miles Davis. The Yardbird Suite is an original composition by Parker, and has become a jazz standard, but he never copyrighted the tune so never earned any royalties from it.

Now, here’s a little question to tease you with. Can anyone spot the connection between this tune and a notable event that occurred today, 30th September 2016?

Answers through the comments box please!


by telescoper at September 30, 2016 02:25 PM

Peter Coles - In the Dark

Book Review: HMS Ulysses

Following on from yesterday’s post about the Arctic Convoys, here is a review of HMS Ulysses by Alistair Maclean, which I found on another WordPress site.

loony radio

Most war or action novels have a few things in common: a handsome hero who can shoot you between the eyes with his left hand while he lights a cigar with his right, a funny sidekick who never ever tries to steal the limelight, a pretty girl who is in serious and frequent need of rescuing, and plenty of ugly, stupid bad guys. My favorite one of all time (and I assure you, I’ve read a lot), however, involves a single warship at sea. The handsome hero is missing, so are sidekicks and pretty girls. The bad guys are not ugly or stupid at all. They are menacing, ruthless and brilliant; and they manage to outfox the good guys at almost every turn.

Welcome to HMS Ulysses (1955), the first novel by the Scottish author Alistair Maclean.  Maclean, incidentally, also happens to be one of my favorite authors…

View original post 808 more words

by telescoper at September 30, 2016 01:59 PM

Emily Lakdawalla - The Planetary Society Blog

Rosetta is gone
Today there is one less spacecraft returning science data from beyond Earth. The European Space Operations Centre received the final transmission from Rosetta at 11:19 UT on September 30.

September 30, 2016 01:13 PM

ZapperZ - Physics and Physicists

Dark Matter's Biggest Challenge
A very nice article on Forbes' website on the latest challenge in understanding Dark Matter.

It boils down to why, in some cases, Dark Matter dominates, while in others everything can be satisfactorily explained without it. This is why we continue to study it and why we look for possible Dark Matter candidates. There is still a lot of physics to be done here.


by ZapperZ at September 30, 2016 12:56 PM

September 29, 2016

astrobites - astro-ph reader's digest

‘One Direction’ – Isotropic Universe or not?

Title: How Isotropic is the Universe?

Authors: D. Saadeh, S.M. Feeney, A. Pontzen, H.V. Peiris and J.D. McEwen

First Author’s Institution: University College London, London, UK

Status: Accepted in Physical Review Letters, Sept 2016

By the end of this article, you will have learnt two things:
  1. ‘A Bianchi model’ is a cool name for a hypothetical model of the universe.
  2. One Direction is a terrible band, except this one song with a video that has been shot entirely at NASA.

General Relativity and Cosmology gave rise to the current model of the universe – the famous Lambda-CDM. This is a universe made up of regular matter, cold dark matter and dark energy. One of the pillars of this model is something called the Cosmological Principle, which says that the universe is homogeneous and isotropic on large scales. Isotropy implies that there are no preferred directions, and homogeneity means that there are no preferred locations. With respect to our observable universe, this means that an observer sees the same distribution of matter no matter (!) where in the universe she is (homogeneous), and sees the same distribution of matter in whichever direction in the sky she looks (isotropic). This is obviously not true when you look in the direction of, or sit in the middle of, let’s say, a cluster of galaxies, which has a much denser concentration of matter than most parts of the universe, but on large scales, of the order of millions of light years, observations indicate that homogeneity and isotropy are principles that we can live by when it comes to the universe. Figure 1 illustrates these closely related yet different properties.


Fig 1. Cases of homogeneity and isotropy being preserved independently, in a) 3-D and b) 2-D examples.

It is crucial that we convince ourselves that it is possible to construct a model of the universe that is homogeneous but not isotropic, i.e. a universe that has roughly the same density everywhere but a preferred direction along which the distribution of galaxies or even gaseous matter is more clumped or sparse. We could possibly be in a universe that is uniformly dense, like M&Ms lined up in neat straight lines on a chocolate cake – in the direction of the lines, the universe looks different than it does in a diagonal direction.

Is the reverse possible? If the Cosmological Principle is anything to go by – we have no real reason to believe that we are in a special place in the universe – then isotropy from different locations in the universe automatically implies that the universe is homogeneous. This means that if I see the same number of galaxies on average from the Earth in all directions, and somehow I know that I would see the same number on average even if I sat on a planet millions of light years away, that has to imply that the number density of galaxies in the universe is the same on average. Pretty neat, huh!

One awesome way that astrophysicists have figured out that the universe could be isotropic is by looking at the Cosmic Microwave Background – the relic radiation from the first epochs of the universe that is a signature of the early stages when matter and light headed their separate ways. The CMB is radiation that is uniform to roughly 1 part in 100,000 – in whichever direction you look! Since the CMB indicates the rough path that matter perturbations took since the early universe, it is suggestive of an isotropic (and hence a homogeneous) universe. Here’s the catch though.
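As a quick aside before the catch: converting that "1 part in 100,000" into temperature units gives a feel for how small these fluctuations are. The mean CMB temperature of about 2.725 K used below is a standard measured value, not a number from this article.

```python
T_cmb = 2.725      # mean CMB temperature in kelvin (standard value)
fractional = 1e-5  # the "1 part in 100,000" anisotropy level

# fluctuation amplitude, converted from kelvin to microkelvin
delta_T_uK = T_cmb * fractional * 1e6
print(f"typical CMB fluctuations: ~{delta_T_uK:.0f} microkelvin")
```

So the hot and cold spots in the maps below differ from the mean temperature by only a few tens of millionths of a kelvin.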


Fig 2. Planck map of the CMB over the entire sky. While mostly at the same temperature, at small scales one can see anisotropies (denoted by blue cold and red hot spots).

The CMB actually does have anisotropies – tiny fluctuations/inhomogeneities in temperature, the very 1 part in 100,000 that I mentioned above. This means that it’s worth investigating theoretical models of the universe that demand homogeneity but allow a subtle level of anisotropy, and mapping them to data from CMB telescopes like Planck, a telescope in space. These theoretical models are called the Bianchi models.

Today’s paper characterizes some Bianchi models that are within the realm of the anisotropy of the CMB. Because of the tiny anisotropy, CMB photons from the early universe travel different paths and take different times, which changes their orientation or polarization. This is called ‘shearing’ of the CMB photons, an effect that is minute yet detectable in CMB maps. In this paper, several new kinds of shear were studied in order to investigate whether the universe is definitely isotropic, or mostly isotropic with tiny fluctuations and some slightly ‘preferred’ directions. For this, the team uses not only CMB temperature data, but polarization data too. Polarization of the CMB tells us the orientation of the photons (specifically, a characterization of their electric and magnetic fields), which means more information on the ‘shearing’ due to these anisotropies.


Fig 3. Temperature, and Electric and Magnetic field polarization data were used to reconstruct the CMB map of the universe, with Bianchi models that conformed to current Lambda-CDM predictions.

In this study, the universe was modeled as a perfect fluid with dark matter, dark energy and normal baryonic matter, and allowed to expand with the formalisms of General Relativity and CMB anisotropy, i.e. the universe was allowed to depart from isotropy and yet kept in check so that cosmological parameters resemble the ones observed. Keeping the theoretical details aside, the team concluded that the Bianchi models considered here must resemble the Lambda-CDM model to a great degree. In other words, they were successful in improving the constraints that we have on the isotropy of the universe. Hence, for now, the universe is nearly flat, homogeneous and isotropic – there’s no ‘one direction’ that’s preferred. Long live Lambda-CDM!

by Gourav Khullar at September 29, 2016 10:24 PM

Christian P. Robert - xi'an's og

Le Monde puzzle [#967]

A Sudoku-like Le Monde mathematical puzzle for a come-back (now that it competes with The Riddler!):

Does there exist a 3×3 grid with distinct positive integer entries such that the sums of all rows, columns, and both diagonals are prime numbers? If such grids exist, find one with the minimal total sum.

I first downloaded the R package primes. Then I checked if by any chance a small bound on the entries was sufficient:


Running the blind experiment

library(primes)
# cale() returns TRUE when the three row sums, three column sums,
# and both diagonal sums of the 3x3 matrix A are all prime
cale <- function(A) all(is_prime(c(rowSums(A), colSums(A),
                                   sum(diag(A)), sum(diag(A[, 3:1])))))
# n is the candidate upper bound on the entries, set beforehand
for (t in 1:1e6){
  if (cale(matrix(sample(n, 9), 3))) print(n)}

I got 10 as the minimal value of n. Trying with n=9 did not give any positive case. Running another blind experiment checking for the minimal sum led to the result

> A
     [,1] [,2] [,3]
[1,]    8    3    6
[2,]    1    5    7
[3,]    2   11    4

with sum 47.
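This grid is easy to verify directly. Here is a quick check (a Python translation for illustration, not xi'an's R code) that every row, column, and diagonal sum is prime and that the total is indeed 47:

```python
def is_prime(k):
    """Trial-division primality test; fine for numbers this small."""
    if k < 2:
        return False
    d = 2
    while d * d <= k:
        if k % d == 0:
            return False
        d += 1
    return True

A = [[8, 3, 6],
     [1, 5, 7],
     [2, 11, 4]]

sums = ([sum(row) for row in A] +                             # row sums
        [sum(A[i][j] for i in range(3)) for j in range(3)] +  # column sums
        [sum(A[i][i] for i in range(3)),                      # main diagonal
         sum(A[i][2 - i] for i in range(3))])                 # anti-diagonal
assert all(is_prime(s) for s in sums)
print(sum(map(sum, A)))  # → 47
```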

Filed under: Books, Kids, pictures, Statistics, Travel, University life Tagged: arithmetics, Le Monde, mathematical puzzle, prime number, primes, R, R package, sudoku

by xi'an at September 29, 2016 10:16 PM

ZapperZ - Physics and Physicists

Could You Pass A-Level Physics Now?
This won't tell you whether you would pass it, since A-Level Physics consists of several papers, including essay questions. But it is still an interesting test, and you might make a careless mistake if you don't read the questions carefully.

And yes, I did go through the test, and I got 13 out of 13 correct even though I guessed at one of them (I wasn't sure what "specific charge" meant and was too lazy to look it up). The quiz at the end asked if I was an actual physicist! :)

You're probably an actual physicist, aren't you?

Check it out. This is what those A-level kids had to contend with.
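For anyone else too lazy to look it up: "specific charge" just means the charge-to-mass ratio of a particle. A quick sanity check for the electron, using standard values of the constants (my numbers, not the quiz's):

```python
e = 1.602176634e-19     # electron charge in coulombs (standard value)
m_e = 9.1093837015e-31  # electron mass in kilograms (standard value)

# specific charge = charge-to-mass ratio
specific_charge = e / m_e
print(f"{specific_charge:.3e} C/kg")  # about 1.759e11 C/kg
```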


by ZapperZ at September 29, 2016 07:54 PM

Symmetrybreaking - Fermilab/SLAC

LHC smashes old collision records

The Large Hadron Collider is now producing about a billion proton-proton collisions per second.

The LHC is colliding protons at a faster rate than ever before, approximately 1 billion times per second. Those collisions are adding up: This year alone the LHC has produced roughly the same number of collisions as it did during all of the previous years of operation together.

This faster collision rate enables scientists to learn more about rare processes and particles such as Higgs bosons, which the LHC produces about once every billion collisions.

“Every time the protons collide, it’s like the spin of a roulette wheel with several billion possible outcomes,” says Jim Olsen, a professor of physics at Princeton University working on the CMS experiment. “From all these possible outcomes, only a few will teach us something new about the subatomic world. A high rate of collisions per second gives us a much better chance of seeing something rare or unexpected.”

Since April, the LHC has produced roughly 2.4 quadrillion particle collisions in both the ATLAS and CMS experiments. The unprecedented performance this year is the result of both the incremental increases in collision rate and the sheer amount of time the LHC is up and running.
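The figures quoted above invite a quick back-of-envelope estimate (a rough sketch using only the article's own numbers):

```python
collisions_per_second = 1e9  # "about a billion proton-proton collisions per second"
higgs_fraction = 1 / 1e9     # Higgs produced "about once every billion collisions"

# roughly one Higgs boson produced per second of stable running
higgs_per_second = collisions_per_second * higgs_fraction
print(higgs_per_second)

# the "2.4 quadrillion" collisions since April then correspond to
# roughly 2.4 million Higgs-scale events across ATLAS and CMS
total_collisions = 2.4e15
print(total_collisions * higgs_fraction)
```

Of course only a tiny fraction of those decays can actually be reconstructed, which is why the collision rate matters so much.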

“This year the LHC is stable and reliable,” says Jorg Wenninger, the head of LHC operations. “It is working like clockwork. We don’t have much downtime.”

Scientists predicted that the LHC would produce collisions around 30 percent of the time during its operation period. They expected to use the rest of the time for maintenance, rebooting, refilling and ramping the proton beams up to their collision energy. However, these numbers have flipped; the LHC is actually colliding protons 70 percent of the time.

“The LHC is like a juggernaut,” says Paul Laycock, a physicist from the University of Liverpool working on the ATLAS experiment. “We took around a factor of 10 more data compared to last year, and in total we already have more data in Run 2 than we took in the whole of Run 1. Of course the biggest difference between Run 1 and Run 2 is that the data is at twice the energy now, and that’s really important for our physics program.”

This unexpected performance comes after a slow start-up in 2015, when scientists and engineers still needed to learn how to operate the machine at that higher energy.

“With more energy, the machine is much more sensitive,” says Wenninger. “We decided not to push it too much in 2015 so that we could learn about the machine and how to operate at 13 [trillion electronvolts]. Last year we had good performance and no real show-stoppers, so now we are focusing on pushing up the luminosity.”

The increase in collision rate doesn’t come without its difficulties for the experiments.

“The number of hard drives that we buy and store the data on is determined years before we take the data, and it’s based on the projected LHC uptime and luminosity,” Olsen says. “Because the LHC is outperforming all estimates and even the best rosy scenarios, we started to run out of disk space. We had to quickly consolidate the old simulations and data to make room for the new collisions.”

The increased collision rate also increased the importance of vigilant detector monitoring and adjustments of experimental parameters in real time. All the LHC experiments are planning to update and upgrade their experimental infrastructure in winter 2017.

“Even though we were kept very busy by the deluge of data, we still managed to improve on the quality of that data,” says Laycock. “I think the challenges that arose thanks to the fantastic performance of the LHC really brought the best out of ATLAS, and we’re already looking forward to next year.”

Astonishingly, 2.4 quadrillion collisions represent just 1 percent of the total amount planned during the lifetime of the LHC research program. The LHC is scheduled to run through 2037 and will undergo several rounds of upgrades to further increase the collision rate.

“Do we know what we will find? Absolutely not,” Olsen says. “What we do know is that we have a scientific instrument that is unprecedented in human history, and if new particles are produced at the LHC, we will find them.”

by Sarah Charley at September 29, 2016 05:59 PM

Emily Lakdawalla - The Planetary Society Blog

Rosetta spacecraft may be dying, but Rosetta science will go on
The Rosetta mission will end tomorrow when the spacecraft impacts the comet. ESA took advantage of the presence of hundreds of members of the media to put on a showcase of Rosetta science. If there’s one thing I learned today from all the science presentations, it’s this: Rosetta data will be informing scientific work for decades to come.

September 29, 2016 05:47 PM

Emily Lakdawalla - The Planetary Society Blog

Dawn Journal: 9th Anniversary
Nine years ago today, Dawn set sail on an epic journey of discovery and adventure. The intrepid explorer has sailed the cosmic seas and collected treasures that far exceeded anything anticipated or even hoped for.

September 29, 2016 05:05 PM

Peter Coles - In the Dark

The Arctic Convoys

Today also marks a far less happy anniversary. On this day 75 years ago, on 29th September 1941, the Allied Convoy PQ 1 set sail from Hvalfjörður in Iceland; it arrived in Arkhangelsk in Northern Russia on October 11. This wasn’t the first of the Arctic convoys – that was Operation Dervish,  which set out in August 1941 , but it was the first of the most famous sequence, numbered from PQ 1 to PQ 18. The PQ sequence was terminated in September 1942, but convoys resumed in 1943 with a different numbering system (JW) for the duration of the Second World War. For every PQ convoy there was also a QP convoy making the return journey; the counterpart of the JW sequence was RA.

The Arctic convoys carried military supplies (including tanks and aircraft) to the Soviet Union after Germany invaded in the summer of 1941. Their purpose was largely political – to demonstrate the willingness of the Allies to support the Soviet Union, especially before a second front could be opened.

Arctic Convoy

The reality of the Arctic convoys was unimaginably grim. Slow-moving merchant ships had to run the gauntlet of German U-Boats and aircraft. During the summer, when the Arctic ice retreated, the convoys took a longer route but the long daylight hours of an Arctic summer made for an exhausting journey with the constant threat of air attack. In the winter the route was shorter, but made in terrible weather conditions of biting cold and ferocious storms.


The map is taken from this site, which also gives detailed information about each convoy.

As it happens, one of my teachers at school (Mr Luke, who taught Latin), who was also an officer in the Royal Navy Reserve, served on a Royal Navy escort vessel in some of the Arctic convoys in 1941.  I was interested in naval history when I was a teenager and when he told me he had first-hand experience of the Arctic convoys I asked him to tell me more. He talked about the bitter cold but about everything else he refused to speak, his eyes filling with tears. I didn’t understand such things then, I was too young, but later I saw that it was less that he wouldn’t talk about it, more that he couldn’t. Terrible experiences leave very deep scars on the survivors.

The most infamous convoy in the PQ series was PQ 17 which sailed on June 27 1942 from Reykjavik. Rumours that the German battleship Tirpitz had left its berth in Northern Norway to intercept the convoy led to the Admiralty issuing an order for the escort to withdraw and for the convoy to disperse, each vessel to make its way on its own to its destination. The unprotected merchant ships were set upon by planes and submarines, and of the 35 that had left Reykjavik, 24 were sunk. It was a catastrophe. Just a year earlier, Convoy PQ 1 had arrived at its destination unscathed.

There is a project under way to set up a museum as a lasting memorial to the brave men who served during the Arctic convoys. I think it’s well worth supporting. Although the 75th anniversary of the arrival of Dervish was commemorated earlier this year, the courage and sacrifice of those who served in this theatre is not sufficiently recognised.


by telescoper at September 29, 2016 04:28 PM

Peter Coles - In the Dark

Happy 70th Birthday to the “Third Programme”!

I’ve just got time for a quick post-prandial post to mark the fact that 70 years ago today, on September 29th 1946, the British Broadcasting Corporation (BBC) made its first radio broadcast on what was then called The BBC Third Programme. The channel changed its name in 1970 to BBC Radio 3, but I’m just about old enough to remember a time when it was called the Third Programme; I was only 6 when it changed.


It was a bold idea to launch a channel devoted to the arts in the depths of post-War austerity and it was perceived by some at the time as being “elitist”. I think some people probably think that of the current Radio 3 too. I don’t see it that way at all. Culture enriches us all, regardless of our background or education, if only we are given access to it. You don’t have to like classical music or opera or jazz, but you can only make your mind up if you have the chance to listen to it and decide for yourself.

My own relationship with Radio 3 started by accident at some point during the 1990s while I was living in London. I was used to listening to the Today programme on Radio 4 when I woke up, but one morning when my alarm switched on it was playing classical music. It turned out that there was a strike of BBC news staff so they couldn’t broadcast Today and had instead put Radio 3 on the Radio 4 frequency. I very much enjoyed it to the extent that when the strike was over and Radio 4 reappeared, I re-tuned my receiver to Radio 3. I’ve stayed with it ever since. I can’t bear the Today programme at all, in fact; almost everyone on it makes me angry, which is no way to start the day.

Over the years there have been some changes to Radio 3 that I don’t care for very much – I think there’s a bit too much chatter and too many gimmicks these days (and they should leave that to Classic FM) – but I listen most days, not only in the morning but also in the evening,  especially to the live concert performances every night during the week. Many of these concerts feature standard classical repertoire, but I particularly appreciate the number of performances of new music or otherwise unfamiliar pieces.

I also enjoy Words and Music, which is on Sunday afternoons and Opera on 3, which includes some fantastic performances Live from the Metropolitan Opera in New York, and which is usually on Saturday evenings. And of course the various Jazz on 3 programmes: Jazz Record Requests, Jazz Line-up, Geoffrey Smith’s Jazz, etc.

It’s not the just the music, though. I think BBC Radio 3 has a very special group of presenters who are not only friendly and pleasant to listen to, but also very knowledgeable about the music. They also have some wonderful names: Petroc Trelawny, Clemency Burton-Hill, and Sara Mohr-Pietsch, to name but a few. There’s also a newsreader whose name I thought, when I first heard it, was Porn Savage.

I feel I’ve found out about so many things through listening to Radio 3, but there’s much more to my love-affair with this channel than that. Some years ago I was quite ill, and among other things suffering very badly from insomnia. Through the Night brought me relief in the form of a continuous stream of wonderful music during many long sleepless nights.

I wish everyone at BBC Radio 3 a very happy 70th birthday. Long may you broadcast!


by telescoper at September 29, 2016 01:19 PM

Emily Lakdawalla - The Planetary Society Blog

OSIRIS-REx’s cameras see first light
As OSIRIS-REx speeds away from Earth, it’s been turning on and testing out its various engineering functions and science instruments. Proof of happy instrument status has come from several cameras, including the star tracker, MapCam, and StowCam.

September 29, 2016 08:34 AM

astrobites - astro-ph reader's digest

Insights into Planet Formation via Harry Potter Analogies

Title: The Imprint of Exoplanet Formation History on Observable Present-Day Spectra of Hot Jupiters

Authors: C. Mordasini, R. van Boekel, P. Molliere, T. Henning, B. Benneke

First Author’s Institution: MPIA & Physikalisches Institut

Paper Status: Accepted for publication in ApJ

Imagine you sit down to watch TV and the fourth Harry Potter movie is just about to start. You aren’t a fan (gasp) of the Harry Potter series so you haven’t read any of the books (double gasp). You watch the movie anyway and are wildly confused. Nevertheless, you end up knowing pretty well what happens in the fourth book, and barely enough to make some guesses about the other three books.

Now, why the long-winded analogy? Planets are complex. They form through dynamical interactions of billions of planetesimals in a disk, evolve to form bona fide planets, and develop atmospheres with complicated molecular abundances. Ideally, we could thoroughly analyze each stage of a planet’s formation history (i.e. read all the books), but the reality is that all we can do is indirectly measure the planet’s current atmospheric composition via transmission spectroscopy and use that to infer something about the planet’s formation history. In other words, we are the ninnies who only watched the fourth Harry Potter movie.

The authors of today’s paper try to assess how much knowledge you can actually gain about the first four Harry Potter books by only watching the fourth movie. In other words, by looking at observations of exoplanet transmission spectra, what can we learn about the various stages of planet formation?

Writing the Books: Modeling Formation History 

In order to tackle this problem the authors create separate computer models to describe each process associated with planet formation. The output of one model becomes the input of the next, just like a series of novels weaving together to form a greater story.

Book 1: Formation

The first model depicts the very first seeds of planet formation, starting from when there is just dust in the protoplanetary disk itself. It computes how the disk evolves and how the first seeds of a planet are born through accretion. Although this is a canonical model (see this and this), the authors provide a novel addition. It is well-known that once particles in a disk form small planetesimals, these begin to collide with each other and coalesce into a larger planet (like a snowball rolling down the side of a hill). When this newly formed planet gathers enough mass it starts to gravitationally attract a gaseous envelope, creating an atmosphere. Once the planet develops an atmosphere, any new planetesimals colliding with the planet evaporate in the atmosphere, instead of adding more mass to the planet’s core.

This process entirely changes the composition of the young planet’s atmosphere as these evaporating planetesimals deliver new, unknown molecular species. This is the climax of Book 1’s story line. With our newly formed planet and atmosphere combo, we’ve set the stage for Book 2: Evolution.

Book 2: Evolution

The second model propagates our newly formed planet through a thermodynamic evolution. Our planet begins to cool and contract as its planetary structure solidifies. As the temperature and structure of the planet change, the atmosphere itself begins to evaporate away. Our planet is now about 5 Gyr old and it has a clearly defined mass, radius and luminosity: the three pillars of Book 3: Composition.

Book 3: Composition

Book 3 is where the planet’s atmosphere officially gets its chemical makeup. Think of this as a flashback in our story during which the authors work on character development without progressing the story any further. Remember that it was the planetesimals bombarding our newly formed planet that were providing most of the atmospheric enrichment. In this model the authors assign various viable chemical compositions to those planetesimals in order to determine the full range of chemical compositions the planet’s atmosphere could have. This leads us to our final book, Book 4: Observations.

Book 4 & Movie 4: Spectroscopy with the James Webb Space Telescope

The authors now take all the information from Books 1-3 and use it to set the storyline for our final book. Using the chemical composition and temperature structure of our planetary atmosphere, the authors create model transmission and emission spectra (see this and links therein). Figure 1 depicts what our model spectra look like. Each scenario presented has a different structure, since each spectrum was produced under different conditions (explained below).

Figure 1. Exoplanetary emission spectra (on the left) and transmission spectra (on the right). For emission spectra the y-axis is simply the flux of the planet; for transmission spectra it is the apparent radius of the planet divided by the radius of the star. Each model spectrum is the result of an extremely different initial condition propagated through the authors’ four models. Main point: in some wavelength ranges, the spectra look similar but are still distinguishable.
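To get a sense of scale for that transmission-spectrum axis (planet radius over stellar radius): a Jupiter-sized planet crossing a Sun-like star blocks roughly 1% of the starlight. The radius ratio below is a standard approximate value for illustration, not a number from the paper.

```python
# Jupiter's radius is roughly a tenth of the Sun's (approximate standard value)
r_planet_over_star = 0.1

# the transit depth scales as the square of the radius ratio
transit_depth = r_planet_over_star ** 2
print(f"{transit_depth:.1%}")  # → 1.0%
```

The molecular features in a transmission spectrum are tiny wiggles on top of this ~1% baseline, which is why instrument noise matters so much below.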

But then, the movie comes along and butchers the highly intricate and detailed book the authors have written. Observing with even the most state-of-the-art telescope introduces a series of instrumental noise sources, which make what we see through the eyes of our telescopes slightly different from what we model. Figure 2 shows what the series of spectra shown in Figure 1 will look like through the eyes of the James Webb Space Telescope, scheduled for launch in October of 2018.

Figure 2. The same exact scenarios as in Figure 1, but now as those planets would appear through the eyes of the James Webb Space Telescope. Main point: because instrument noise degrades the resolution and quality of our spectra, the models now look very similar in some regions.

We’ve completed the story line but we haven’t yet answered the question. By looking at observations of exoplanetary spectra (the movie), can we infer what’s happened in Books 1-4? The authors answer this question by creating two wildly different initial conditions and propagating them both through the four models (think of this as two different prefaces to the same book series). In the first scenario our planet begins to form in the warm inner part of the disk (red and black curves in Figures 1 & 2) and in the second, the planet begins to form in the outer, colder part (blue curves in Figures 1 & 2). Both scenarios, after being propagated through the models, form Hot Jupiter-like planets. But is their chemistry different enough that we would be able to tell?

If this were the case, the final observations in Figure 2 would need to be so different that we could easily discern between the authors’ two cases. However, the authors find that even these extremely different initial conditions lead to pretty similar JWST observations. Hot Jupiters, regardless of where they formed in their disk, end up being oxygen rich. See Figure 3 below, where the authors demonstrate the various outcomes of Book 3: Composition. In all the different chemical scenarios, most planets form with a carbon-to-oxygen ratio of less than 1.

Figure 3. Each panel shows a different chemical composition scenario. Deciphering exactly which chemical composition is which matters less than realizing that all scenarios result in very similar distributions. Main point: most cases result in carbon-to-oxygen ratios of less than 1.

This is where my Harry Potter analogy ends, because I can’t think of two wildly different prefaces to Harry Potter and the Sorcerer’s (Philosopher’s…) Stone that would’ve resulted in nearly identical movies. Consider this a formal challenge to all the people still reading this. I conclude by saying that planet formation is definitely more complex than the Harry Potter series. And in the coming years, as we prep for the launch of the James Webb Space Telescope, you can expect to gain more insights into this complicated tale.

by Natasha Batalha at September 29, 2016 12:35 AM

September 28, 2016

Christian P. Robert - xi'an's og

advanced computational methods for complex models in Biology [talk]

St Pancras. London, Jan. 26, 2012

Here are the slides of the presentation I gave at the EPSRC workshop on Advanced Computational Methods for Complex Models in Biology at University College London, last week. Introducing random forests as proper summaries for both model choice and parameter estimation (with considerable overlap with earlier slides, obviously!). The other talks of that highly interesting day on computational Biology were mostly about ancestral graphs, using Wright-Fisher diffusions for coalescents, plus a comparison of expectation-propagation and ABC on a genealogy model by Mark Beaumont and the decision-theoretic approach to HMM order estimation by Chris Holmes. In addition, it gave me the opportunity to come back to the Department of Statistics at UCL more than twenty years after my previous visit, at a time when my friend Costas Goutis was still there. And to realise it had moved from its historical premises years ago. (I wonder what happened to the two staircases built, if I remember correctly, to reduce frictions between Fisher and Pearson…)

Filed under: Books, pictures, Statistics, Travel, University life Tagged: ABC, Bayesian computing, Biology, coalescent, computational biology, England, EPSRC, expectation-propagation, London, random forests, UCL, University College London, Wright-Fisher model

by xi'an at September 28, 2016 10:16 PM

Peter Coles - In the Dark

Relocation, Relocation, Relocation

It seems my relocation to Cardiff is now more-or-less complete. The boxes of stuff from my old office at the University of Sussex arrived on Monday and I’ve been gradually stacking the books on the shelves in the rather large office to which I’ve been assigned:


In fact the removals people caught me on the hop, as they said they would phone me about an hour before they were due to arrive but didn’t do so. I was quite surprised to see all the boxes already there when I came in on Monday!

I was planning to have all this delivered a while ago to my house, because I didn’t think I was going to be given an office big enough to accommodate much of it. But then I had to delay the removal because my visit to hospital was put back so I wouldn’t have been able to receive it. Then I found out I had plenty of space at the University so I decided to have it all moved here.



I’ll be sharing this space with other members of the Data Innovation Research Institute, but for the time being I’m here on my own. The books make it look a bit more “lived-in” than it did when I arrived, though the mini-bar still hasn’t arrived yet.

It’s actually about four years since I was appointed to my previous job at Sussex; I moved there from Cardiff in early 2013. It’s a bit strange being back. I didn’t imagine when I started at Sussex that I would be returning relatively soon, but then I didn’t imagine a lot of the things that would lead to my resignation. From what I’ve heard, many of those things have been getting even worse since I left. I think I’ll keep a discussion of all that to myself, though, at least until I write my memoirs!



by telescoper at September 28, 2016 04:00 PM

Emily Lakdawalla - The Planetary Society Blog

SpaceX and the Blank Slate
SpaceX's plans to colonize Mars differ considerably from NASA's Journey to Mars ambitions. But direct comparison is difficult. SpaceX is able to wipe the slate clean and start fresh with a bold new approach to humans in space. NASA has no such luxury, and must use existing pieces and people to make their goals a reality.

September 28, 2016 03:20 PM

Axel Maas - Looking Inside the Standard Model

Searching for structure
This time I want to report on a new bachelor thesis, which I supervise. In this project we try to understand a little better the foundations of so-called gauge symmetries. In particular we address some of the ground work we have to lay for understanding our theories.

Let me briefly outline the problem: most of the theories in particle physics include some kind of redundancy, i.e. there are more things in them than we actually see in experiments. The surplus stuff is not actually real. It is just a kind of mathematical device to make calculations simpler. It is like a ladder, which we bring to climb a wall. We come, use the ladder, and are on top. The ladder we take with us again, and the wall remains as it was. The ladder made life simpler. Of course, we could have climbed the wall without it. But it would have been more painful.

Unfortunately, theories are more complicated than wall climbing.

One of the problems is that we usually cannot solve problems exactly. And as noted before, this can mess up the removal of the surplus stuff.

The project the bachelor student and I are working on has the following basic idea: if we can account for all of the surplus stuff, we should be able to know whether our approximations did something wrong. It is like repairing an engine: if some part is left over afterwards, it is usually not a good sign. Unfortunately, things are again more complicated. For the engine, we just have to look through our workspace to see whether anything is left. But how to do so for our theories? And this is precisely the project.

So, the project is essentially about listing stuff. We start out with something we know is real and important. For this, we take the simplest thing imaginable: nothing. Nothing means in this case just an empty universe, no particles, no reactions, no nothing. That is certainly a real thing, and one we want to include in our calculations.

Of this nothing, there are also versions where some of the surplus stuff appears. Like some ghost image of particles. We actually know how to add small amounts of ghost stuff. Like a single particle in a whole universe. But these situations are not so very interesting, as we know how to deal with them. No, the really interesting stuff happens if we fill the whole universe with ghost images. With surplus stuff which we add just to make life simpler. At least originally. And the question is now: how can we add this stuff systematically? As the ghost stuff is not real, we know it must fulfill special mathematical equations.

Now we do something which is very often done in theoretical physics: we use an analogy. The equations in question are not unique to the problem at hand, but appear also in quite different circumstances, although with a completely different meaning. In fact, the same equations describe how in quantum physics one particle is bound to another. In quantum physics, depending on the system at hand, there may be one or more different ways in which this binding occurs. You can count the number of ways, and there is a set which one can label by whole numbers. Incidentally, this feature is where the name quantum originates from.

Returning to our original problem, we make the following analogy: enumerating the ghost stuff can be cast into the same form as enumerating the possibilities of binding two particles together in quantum mechanics. The actual problem is only to find the correct quantum system which is the precise analogue of our original problem. Finding this is still a complicated mathematical problem. Finding even one solution for one example is the aim of this bachelor thesis. But already finding one would be a huge step forward, as so far we do not have any at all. Having it will probably be like having a first stepping stone for crossing a river. From understanding it, we should be able to understand how to generate more. Hopefully, we will eventually understand how to create arbitrary such examples. And thus solve our enumeration problem. But this is still in the future. For the moment, we do the first step.

by Axel Maas ( at September 28, 2016 12:11 PM

September 27, 2016

Christian P. Robert - xi'an's og

stability of noisy Metropolis-Hastings

Felipe Medina-Aguayo, Antony Lee and Gareth Roberts (all at Warwick University) have recently published (even though the paper was accepted a year ago) a paper in Statistics and Computing about a variant of the pseudo-marginal Metropolis-Hastings algorithm. The modification is to simulate a fresh estimate of the likelihood or posterior at the current value of the Markov chain at every iteration, rather than reusing the current estimate. The reason for this refreshment of the weight estimate is to prevent stickiness in the chain, when a random weight leads to a very high value of the posterior. Unfortunately, this change leads to a Markov chain with the wrong stationary distribution. When this stationary distribution exists! The paper actually produces examples of transient noisy chains, even in simple cases such as a geometric target distribution. And even when taking the average of a large number of weights. But the paper also contains sufficient conditions, like negative weight moments or uniform ergodicity of the proposal, for the noisy chain to be geometrically ergodic. Even though the applicability of those conditions to complex targets is not always obvious.
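As a minimal sketch of the refreshment idea (a toy standard-normal target with multiplicative unbiased noise on the density, not the paper's setting), here is a noisy Metropolis-Hastings loop in Python where the estimate at the current state is re-simulated at every iteration:

```python
import math
import random

def noisy_density(x, n_weights=10):
    """Unbiased noisy estimate of the unnormalised N(0,1) density exp(-x^2/2):
    the true value times an average of lognormal weights with mean one
    (an illustrative noise model, not the paper's)."""
    w = sum(random.lognormvariate(-0.125, 0.5) for _ in range(n_weights)) / n_weights
    return math.exp(-0.5 * x * x) * w

def noisy_mh(n_iter=20000, step=1.0, seed=1):
    """Noisy MH: unlike plain pseudo-marginal MH, the density estimate at the
    current state is refreshed at every iteration instead of being recycled."""
    random.seed(seed)
    x, chain = 0.0, []
    for _ in range(n_iter):
        p_curr = noisy_density(x)          # refreshed estimate at current x
        y = x + random.gauss(0.0, step)    # random-walk proposal
        p_prop = noisy_density(y)
        if random.random() < min(1.0, p_prop / p_curr):
            x = y
        chain.append(x)
    return chain

chain = noisy_mh()
mean = sum(chain) / len(chain)
var = sum((c - mean) ** 2 for c in chain) / len(chain)
# With this mild, averaged noise the chain stays close to N(0,1); in general
# the noisy chain's stationary distribution (when it exists) is perturbed.
```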

Filed under: Statistics Tagged: Markov chain, Markov chain Monte Carlo algorithm, MCMC convergence, particle filter, pseudo-marginal MCMC, sequential Monte Carlo, University of Warwick

by xi'an at September 27, 2016 10:16 PM

Symmetrybreaking - Fermilab/SLAC

You keep using that physics word

I do not think it means what you think it means.

Physics can often seem inconceivable. It’s a field of strange concepts and special terms. Language often fails to capture what’s really going on within the math and theories. And to make things even more complicated, physics has repurposed a number of familiar English words. 

Much like Americans in England, folks from beyond the realm of physics may enter to find themselves in a dream within a dream, surrounded by a sea of words that sound familiar but are still somehow completely foreign. 

Not to worry! Symmetry is here to help guide you with this list of words that acquire a new meaning when spoken by physicists.

Illustration by Sandbox Studio, Chicago with Corinne Mucha

Quench

The physics version of quench has nothing to do with Gatorade products or slaking thirst. Instead, a quench is what happens when superconducting materials lose their ability to superconduct (or carry electricity with no resistance). During a quench, the electric current heats up the superconducting wire and the liquid coolant meant to keep the wire at its cool, superconducting temperature warms and turns into a gas that escapes through vents. Quenches are fairly common and an important part of training magnets that will focus and guide beams through particle accelerators. They also take place in superconducting accelerating cavities.

Illustration by Sandbox Studio, Chicago with Corinne Mucha

Cannibalism, strangulation and suffocation

These gruesome words take on a new, slightly kinder meaning in astrophysics lingo. They are different ways that a galaxy's shape or star formation rate can be changed when it is in a crowded environment such as a galaxy cluster. Galactic cannibalism, for example, is what happens when a large galaxy merges with a companion galaxy through gravity, resulting in a larger galaxy.

Illustration by Sandbox Studio, Chicago with Corinne Mucha

Chicane

Depending on how much you know about racecars and driving terms, you may or may not have heard of a chicane. In the driving world, a chicane is an extra turn or two in the road, designed to force vehicles to slow down. This isn’t so different from chicanes in accelerator physics, where collections of four dipole magnets compress a particle beam to cluster the particles together. It squeezes the bunch of particles together so that those in the head (the high-momentum particles at the front of the group) are closer to the tail (the particles in the rear).

Illustration by Sandbox Studio, Chicago with Corinne Mucha

Cool

A beam cooler won’t be of much use at your next picnic. Beam cooling makes particle accelerators more efficient by keeping the particles in a beam all headed the same direction. Most beams have a tendency to spread out as they travel (something related to the random motion, or “heat,” of the particles), so beam cooling helps kick rogue particles back onto the right path—staying on the ideal trajectory as they race through the accelerator.

Illustration by Sandbox Studio, Chicago with Corinne Mucha

House

In particle physics, a house is a place for magnets to reside in a particle accelerator. House is also used as a collective noun for a group of magnets. Fermilab’s Tevatron particle accelerator, for example, had six sectors, each of which had four houses of magnets.

Illustration by Sandbox Studio, Chicago with Corinne Mucha

Barn

A barn is a unit of measurement used in nuclear and particle physics that indicates the target area (“cross section”) a particle represents. The meaning of the science term was originally classified, owing to the secretive nature of efforts to better understand the atomic nucleus in the 1940s. Now you can know: One barn is equal to 10⁻²⁴ cm². In the subatomic world, a particle with that size is quite large—and hitting it with another particle is practically like hitting the broad side of a barn.
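As a quick worked example of the unit in use (all numbers below are illustrative assumptions, not from the article): for a thin target, the fraction of beam particles that interact is roughly 1 − exp(−nσL), with n the target number density, σ the cross section, and L the thickness.

```python
import math

BARN_IN_CM2 = 1.0e-24   # one barn, in cm^2

def interaction_fraction(sigma_barn, n_per_cm3, thickness_cm):
    """Fraction of beam particles interacting in a target of number density
    n_per_cm3 and thickness thickness_cm, for a cross section given in barns."""
    sigma_cm2 = sigma_barn * BARN_IN_CM2
    return 1.0 - math.exp(-n_per_cm3 * sigma_cm2 * thickness_cm)

# e.g. a 1-barn cross section on a solid target with 6e22 nuclei per cm^3,
# 1 cm thick (made-up but plausible numbers):
p = interaction_fraction(1.0, 6.0e22, 1.0)   # about 0.058
```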

Illustration by Sandbox Studio, Chicago with Corinne Mucha

Cavity

Most people dread cavities, but not in particle physics. A cavity is the name for a common accelerator part. These metal chambers shape the accelerator’s electric field and propel particles, pushing them closer to the speed of light. The electromagnetic field within a radio-frequency cavity changes back and forth rapidly, kicking the particles along. The cavities also keep the particles bunched together in tight groups, increasing the beam’s intensity.

Illustration by Sandbox Studio, Chicago with Corinne Mucha

Doping

Most people associate doping with drug use and sports. But doping can be so much more! It’s a process to introduce additional materials (often considered impurities) into a metal to change its conducting properties. Doped superconductors can be far more efficient than their pure counterparts. Some accelerator cavities made of niobium are doped with atoms of argon or nitrogen. This is being investigated for use in designing superconducting magnets as well.

Illustration by Sandbox Studio, Chicago with Corinne Mucha

Injection

In particle physics, injections don’t deliver a vaccine through a needle into your arm. Instead, injections are a way to transfer particle beams from one accelerator into another. Particle beams can be injected from a linear accelerator into a circular accelerator, or from a smaller circular accelerator (a booster) into a larger one.

Illustration by Sandbox Studio, Chicago with Corinne Mucha

Decay

Most people associate decay with things that are rotting. But a particle decay is the process through which one particle changes into other particles. Most particles in the Standard Model are unstable, which means that they decay almost immediately after coming into being. When a particle decays, its energy is divided into less massive particles, which may then decay as well.

by Lauren Biron at September 27, 2016 04:21 PM

astrobites - astro-ph reader's digest

Astronomical celebrity, or just another pretender? The curious case of CR7

Title: No evidence for Population III stars or a Direct Collapse Black Hole in the z = 6.6 Lyman-α emitter ‘CR7’

Authors: R. A. A. Bowler, R. J. McLure, J. S. Dunlop, D. J. McLeod, E. R. Stanway , J. J. Eldridge, M. J. Jarvis

First Author’s Institution: University of Oxford, UK

Last year Astrobites covered the discovery of CR7, a luminous galaxy in the early universe (at a redshift of 6.6, approximately 700 million years after the big bang; equivalent to when the Universe was 4 years old, if we scale its age to an average human lifespan of 70 years). It’s made up of three spatial components (see Figure 2), one of which appeared to be a peculiarity: its light contained very few emission lines, which suggested that it contained very few metals (astronomy parlance for elements heavier than helium). It’s this lack of metals that got astronomers puzzled, and gave theorists’ imaginations a licence to run wild.
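For the curious, the quoted age is easy to reproduce approximately with a back-of-the-envelope Python sketch, using the matter-dominated approximation t(z) ≈ (2/3) H0⁻¹ Ωm^(−1/2) (1+z)^(−3/2), which is reasonable at z ≈ 6.6. The Planck-like cosmological parameters below are assumptions, not values from the paper.

```python
import math

H0_KM_S_MPC = 67.7      # Hubble constant (assumed, Planck-like)
OMEGA_M = 0.31          # matter density parameter (assumed)

KM_PER_MPC = 3.0857e19  # kilometres in a megaparsec
S_PER_GYR = 3.156e16    # seconds in a gigayear

H0_PER_GYR = H0_KM_S_MPC / KM_PER_MPC * S_PER_GYR  # H0 in units of 1/Gyr

def age_at_z(z):
    """Age of the Universe at redshift z in Gyr, in the matter-dominated
    approximation (only valid well before dark energy matters)."""
    return (2.0 / 3.0) / (H0_PER_GYR * math.sqrt(OMEGA_M)) * (1.0 + z) ** -1.5

t = age_at_z(6.6)   # ≈ 0.8 Gyr, the same ballpark as the quoted ~700 Myr
```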


Figure 1. CR7: Population III stars, direct collapse black hole, footballing legend, or just another hotshot galaxy? Background image credit: ESO/M. Kornmesser

Metals are formed in the cores of stars. When a massive star dies it goes supernova, flinging its metals out into nearby gas during the explosion. This gas is then said to be ‘enriched’ with metals. Subsequent generations of stars forming from this enriched gas will contain more metals than the previous generation, visible through emission lines in the light from the stars. The very first generation of stars (known as population III stars) would necessarily be almost completely metal free. Gas from the early universe that hasn’t been host to any stars would also contain very few metals.

So when CR7 didn’t appear to contain many metals, the original paper authors speculated that it could be a collection of population III stars. This would represent the very first discovery of such stars in the universe – pretty big news! Other authors speculated that CR7 could be something altogether different, a direct collapse black hole.

‘Normal’ black holes are formed when massive stars die, and collapse in on themselves due to the immense gravity caused by their mass. A direct collapse black hole is a different beast, forming from a giant, primordial gas cloud that, under the right conditions, collapses down into a single, supermassive star. This supermassive star would be unlike any star in the universe today, and would quickly collapse directly into a black hole. The immense black hole that is formed would then start to grow by gathering nearby gas, and this accretion would emit light.

One of the conditions for such a collapse to occur is low metallicity – unlike gas clouds with lots of metals, metal-free clouds tend not to break up into smaller clouds that lead to multiple lower-mass stars, but collapse globally as a whole. This low metallicity would show up in the spectrum of light we observe from the accreting black hole (for more information on direct collapse black holes check out some of these previous astrobites). Direct collapse black holes have been theorised but never seen, so this would again make CR7 a world first. But is CR7 really as groundbreaking as it first seems?

CR7 in WFC3

Figure 2. Hubble Space Telescope Wide Field Camera 3 image of CR7 in two ultraviolet / optical bands. The three components are labelled; object A is responsible for the peculiar spectrum of light observed previously.

The authors of today’s paper carried out new observations of CR7 with both ground and space based telescopes. In particular, they find strong evidence for an optical emission line, namely doubly ionised Oxygen (you can see this in Figure 3 as the very high green data point above the green line for object A near the grey [OIII] line). Such a line suggests that whatever is powering the light from CR7 contains metals, which could sound the death knell for both the Population III or Direct Collapse Black Hole explanations.

They also find a weaker emission line from singly ionised helium than seen previously. Such an emission line is typically only produced in the presence of very energetic photons of light, which population III stars are theorised to produce. This provides additional evidence against the population III claim.

So if it’s not Population III stars or a direct collapse black hole, what is powering CR7? The authors suggest a couple of more ‘standard’ explanations. The ionised helium line could be explained by a more traditional accreting black hole at the center of the galaxy, something almost all galaxies appear to have. Alternatively, it could be a low-metallicity galaxy undergoing an intense period of star formation. New models of stellar light that include binary stars, or Wolf-Rayet stars, could also help explain the spectrum.

So has CR7 lost its claim to fame? The latest evidence suggests so, but the most damning evidence will come soon with the launch of the James Webb Space Telescope, which will be able to probe the optical region of the spectrum at much higher resolution. Until then, the true identity of CR7 will remain just out of reach.

CR7 photometry

Figure 3: The new photometry measurements for the three components of CR7. Ground based and Space based observations are shown as diamonds and circles, respectively. Each line shows the best fit to a Stellar Population Synthesis model, which models the light from a collection of stars based on the group properties such as their ages and metallicities.

by Christopher Lovell at September 27, 2016 07:30 AM

astrobites - astro-ph reader's digest

Gravity-Darkened Seasons on Planets

Title: Gravity-Darkened Seasons: Insolation Around Rapid Rotators
Authors: John P. Ahlers
First Author’s Institution: Physics Department, University of Idaho

On Earth, our seasons come about due to the Earth’s tilted rotational axis relative to its orbital plane (and not due to changes in distance from the Sun, as is commonly believed!). Essentially, the tilt varies the amount of radiation that the Earth receives from the Sun in each hemisphere. But what would happen if the Sun were to radiate at different temperatures across its surface?

It’s hard to imagine such a scenario, but a phenomenon known as gravity darkening causes rapidly spinning stars to have non-uniform surface temperatures due to their non-spherical shape. As a star spins, its equator bulges outwards as a result of centrifugal forces (specifically, into an oblate spheroid). Since a star is made of gas, this has interesting implications for its temperature. If its equator is bulging outwards, the gas at the equator experiences a lower surface gravity (being slightly further away from the star’s center), and hence a lower density and temperature. The equator of a spinning star is thus considered to be “gravitationally darkened”. The gas at the star’s poles, on the other hand, has a slightly higher density and temperature (“gravitational brightening”), since it is closer to the center of the star relative to the gas at the equatorial bulge. Thus, there is a temperature gradient between the poles and equator of a rapidly rotating star.
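This gradient is commonly modelled with the von Zeipel law, T_eff ∝ g_eff^β (with β = 0.25 for radiative envelopes). A rough Python sketch, where the stellar mass, radii, and spin rate are illustrative assumptions rather than values from the paper:

```python
import math

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
M = 2.0e30       # stellar mass in kg (roughly one solar mass; assumed)

def g_eff(r, colat, omega):
    """Effective gravity at colatitude colat (0 = pole) and radius r:
    Newtonian gravity minus the centrifugal term at cylindrical
    radius r*sin(colat), for angular velocity omega."""
    return G * M / r**2 - omega**2 * r * math.sin(colat)**2

def temperature_ratio(r_pole, r_eq, omega, beta=0.25):
    """Equator-to-pole effective temperature ratio via von Zeipel,
    T_eff proportional to g_eff**beta."""
    return (g_eff(r_eq, math.pi / 2, omega) / g_eff(r_pole, 0.0, omega)) ** beta

# A star bulging ~10% at the equator and spinning at half the breakup rate:
r_pole = 7.0e8                              # polar radius in metres (assumed)
r_eq = 1.1 * r_pole                         # equatorial radius
omega = 0.5 * math.sqrt(G * M / r_eq**3)    # half the Keplerian breakup rate
ratio = temperature_ratio(r_pole, r_eq, omega)
# ratio ≈ 0.89: the equator comes out roughly 10% cooler than the poles
```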

While this is an interesting phenomenon in itself, the author of today’s paper introduces a new twist: what if there’s a planet orbiting such a star, and what implications does this gravity darkening have for the planet’s seasonal temperature variations? Compared to Earth, exoplanets have potentially more complex factors governing their surface temperature variations. For example, if a planet’s orbit is inclined relative to the star’s equator (see Figure 1), it can preferentially receive radiation from different parts of its star during the course of its orbit.

Fig 1: All the parameters describing a planet's orbit. In this paper, the author mainly focuses on the inclination i, which is the angle of a planet's orbital plane relative to the star's equator. (Image courtesy of Wikipedia)

The author claims that this effect can cause a planet’s surface temperature to vary by as much as 15% (Figure 2). This essentially doubles the number of seasonal temperature variations a planet can experience over the course of an orbit. However, the author does not attempt to model the complex heat transfer that occurs on the planet’s surface due to the atmosphere and winds.

Fig. 2: Some examples of seasonal temperature changes of a planet for various orbital parameters. The top left figure shows the orientation of the planet’s tilt (precession angle, color-coded to match the plots), and the times corresponding to one orbit around the host star. In each subplot, the author shows the flux a planet would receive for different orbital inclinations (i.e. the angle i in Fig. 1). 

Not only that, but there is also some variation in the type of radiation that a planet receives during the course of its orbit. Since the poles of a rotating star are at a higher temperature, it will radiate relatively more UV radiation compared to the equatorial regions. The author claims that a planet on a highly inclined orbit will alternate between receiving radiation preferentially from the star’s poles and from its equator, causing the amount of UV radiation to vary by as much as 80%. High levels of UV radiation can cause a planet’s atmosphere to evaporate, as well as drive other complex photochemical reactions (such as those responsible for the hazy atmosphere on Saturn’s moon Titan).

As we discover new exoplanets over the course of the coming years, we will likely find examples of planets potentially experiencing these gravitationally darkened seasons. This will have interesting implications on how we view the habitability of these other worlds.

[Update (9/30/2016): Fig. 2 has been updated to correctly match Fig. 4 given in the paper]

by Anson Lam at September 27, 2016 07:03 AM

September 26, 2016

Clifford V. Johnson - Asymptotia

Where I’d Rather Be…?

Right now, I'd much rather be on the sofa reading a novel (or whatever it is she's reading)…instead of drawing all those floorboards near her. (Going to add "rooms with lots of floorboards" to [...] Click to continue reading this post

The post Where I’d Rather Be…? appeared first on Asymptotia.

by Clifford at September 26, 2016 08:51 PM

astrobites - astro-ph reader's digest

Write for Astrobites in Spanish

We are looking for enthusiastic students to join the “Astrobites en Español” team.

Requirements: Preferably master or PhD students in physics or astronomy, fluent in Spanish and English. We ask you to submit:

  • One “astrobito” with original content in Spanish (for example, something like this). You should choose a paper that appeared on astro-ph in the last three months and summarise it at a level appropriate for undergraduate students. We ask that it not be in your specific area of expertise, and we allow a maximum of 1000 words.
  • A brief (200 word maximum) note, also in Spanish, where you explain your motivation to write for Astrobitos.

Commitment: We will ask you to write a post about once per month, and to edit with a similar frequency. You will also have the opportunity to represent Astrobitos at conferences. Our authors dedicate a couple of hours a month to developing material for Astrobitos.

(There is no monetary compensation for writing for Astrobitos. Our work is ad honorem.)

If you are interested, please send the material to us with the subject “Material para Astrobitos”. The deadline is November 1st, 2016. Thanks!

by Astrobites at September 26, 2016 03:37 PM

The n-Category Cafe

Euclidean, Hyperbolic and Elliptic Geometry

There are two famous kinds of non-Euclidean geometry: hyperbolic geometry and elliptic geometry (which almost deserves to be called ‘spherical’ geometry, but not quite because we identify antipodal points on the sphere).

In fact, these two kinds of geometry, together with Euclidean geometry, fit into a unified framework with a parameter $s \in \mathbb{R}$ that tells you the curvature of space:

  • when $s > 0$ you’re doing elliptic geometry

  • when $s = 0$ you’re doing Euclidean geometry

  • when $s < 0$ you’re doing hyperbolic geometry.

This is all well-known, but I’m trying to explain it in a course I’m teaching, and there’s something that’s bugging me.

It concerns the precise way in which elliptic and hyperbolic geometry reduce to Euclidean geometry as $s \to 0$. I know this is a problem of deformation theory involving a group contraction, indeed I know all sorts of fancy junk, but my problem is fairly basic and this junk isn’t helping.

Here’s the nice part:

Give $\mathbb{R}^3$ a bilinear form that depends on the parameter $s \in \mathbb{R}$:

$$v \cdot_s w = v_1 w_1 + v_2 w_2 + s v_3 w_3$$

Let $SO_s(3)$ be the group of linear transformations of $\mathbb{R}^3$ having determinant 1 that preserve $\cdot_s$. Then:

  • when $s > 0$, $SO_s(3)$ is isomorphic to the symmetry group of elliptic geometry,

  • when $s = 0$, $SO_s(3)$ is isomorphic to the symmetry group of Euclidean geometry,

  • when $s < 0$, $SO_s(3)$ is isomorphic to the symmetry group of hyperbolic geometry.

This is sort of obvious except for $s = 0$. The cool part is that it’s still true in the case $s = 0$! The linear transformations having determinant 1 that preserve the bilinear form

$$v \cdot_0 w = v_1 w_1 + v_2 w_2$$

look like this:

$$\left( \begin{array}{ccc} \cos \theta & -\sin \theta & 0 \\ \sin \theta & \cos \theta & 0 \\ a & b & 1 \end{array} \right)$$

And these form a group isomorphic to the Euclidean group — the group of transformations of the plane generated by rotations and translations!
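A quick numerical sanity check of this claim, sketched in Python (the particular angle and translation entries are arbitrary choices): matrices of the displayed shape preserve the degenerate form $v \cdot_0 w = v_1 w_1 + v_2 w_2$ and are closed under matrix multiplication.

```python
import math
import random

def form0(v, w):
    """The degenerate bilinear form v ._0 w = v1*w1 + v2*w2."""
    return v[0] * w[0] + v[1] * w[1]

def matvec(M, v):
    return tuple(sum(M[i][j] * v[j] for j in range(3)) for i in range(3))

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def euclid(theta, a, b):
    """A matrix of the displayed shape: 2x2 rotation block, last row (a, b, 1)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0],
            [s,  c, 0.0],
            [a,  b, 1.0]]

random.seed(0)
M = euclid(0.7, 1.3, -2.0)
v = tuple(random.uniform(-1, 1) for _ in range(3))
w = tuple(random.uniform(-1, 1) for _ in range(3))

# the form is preserved, since only the rotated first two components enter it:
assert abs(form0(matvec(M, v), matvec(M, w)) - form0(v, w)) < 1e-12

# the product of two such matrices has the same shape (zeros in the last
# column above a 1), so they really do form a group:
P = matmul(M, euclid(-0.2, 0.5, 4.0))
assert abs(P[0][2]) < 1e-12 and abs(P[1][2]) < 1e-12 and abs(P[2][2] - 1.0) < 1e-12
```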

So far, everything sounds pleasantly systematic. But then things get a bit quirky:

  • Elliptic case. When $s > 0$, the space $X = \{v \cdot_s v = 1\}$ is an ellipsoid. The 1d linear subspaces of $\mathbb{R}^3$ having nonempty intersection with $X$ are the points of elliptic geometry. The 2d linear subspaces of $\mathbb{R}^3$ having nonempty intersection with $X$ are the lines. The group $SO_s(3)$ acts on the space of points and the space of lines, preserving the obvious incidence relation.

Why not just use X as our space of points? This would give a sphere, and we could use great circles as our lines—but then distinct lines would always intersect in two points, and two points would not determine a unique line. So we want to identify antipodal points on the sphere, and one way is to do what I’ve done.

  • Hyperbolic case. When s < 0, the space X = \{v : \; v \cdot_s v = -1\} is a hyperboloid with two sheets. The 1d linear subspaces of \mathbb{R}^3 having nonempty intersection with X are the points of hyperbolic geometry. The 2d linear subspaces of \mathbb{R}^3 having nonempty intersection with X are the lines. The group SO_s(3) acts on the space of points and the space of lines, preserving the obvious incidence relation.

This time X is a hyperboloid with two sheets, but my procedure identifies antipodal points, leaving us with a single sheet. That’s nice.

But the obnoxious thing is that in the hyperbolic case I took X to be the set of points with v \cdot_s v = -1, instead of v \cdot_s v = 1. If I hadn’t switched the sign like that, X would be the hyperboloid with one sheet. Maybe there’s a version of hyperbolic geometry based on the one-sheeted hyperboloid (with antipodal points identified), but nobody seems to talk about it! Have you heard about it? If not, why not?


  • Euclidean case. When s = 0, the space X = \{v : \; v \cdot_s v = 1\} is a cylinder. The 1d linear subspaces of \mathbb{R}^3 having nonempty intersection with X are the lines of Euclidean geometry. The 2d linear subspaces of \mathbb{R}^3 having nonempty intersection with X are the points. The group SO_s(3) acts on the space of points and the space of lines, preserving their incidence relation.

Yes, any point (a,b,c) on the cylinder

X_0 = \{(a,b,c) : \; a^2 + b^2 = 1 \}

determines a line in the Euclidean plane, namely the line

a x + b y + c = 0

and antipodal points on the cylinder determine the same line. I’ll let you figure out the rest, or tell you if you’re curious.
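Here is the correspondence in a few lines of code (my own illustration; `on_line` is a hypothetical helper, not anything from the post):

```python
import numpy as np

def on_line(cyl_point, xy, tol=1e-9):
    """True if the plane point xy lies on the Euclidean line a*x + b*y + c = 0
    determined by the point (a, b, c) on the cylinder a^2 + b^2 = 1."""
    a, b, c = cyl_point
    x, y = xy
    return abs(a * x + b * y + c) < tol

p = (np.cos(1.2), np.sin(1.2), 3.0)   # a point on the cylinder
q = (-p[0], -p[1], -p[2])             # its antipode

# The point (x, y) = -c*(a, b) lies on the line, since
# a*(-c*a) + b*(-c*b) + c = -c*(a^2 + b^2) + c = 0.
xy = (-p[2] * p[0], -p[2] * p[1])

assert on_line(p, xy)
assert on_line(q, xy)                 # antipodal points give the same line
```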

The problem with the Euclidean case is that points and lines are getting switched! Points are corresponding to certain 2d subspaces of \mathbb{R}^3, and lines are corresponding to certain 1d subspaces.

You may tell me I just got the analogy backwards. Indeed, in elliptic geometry every point has a line orthogonal to it, and vice versa. So we can switch what counts as points and what counts as lines in that case, without causing trouble. Unfortunately, it seems that for hyperbolic geometry this is not true.

There’s got to be some way to smooth things down and make them nice. I could explain my favorite option, and why it doesn’t quite work, but I shouldn’t pollute your brain with my failed ideas. At least not until you try the exact same ideas.

I’m sure someone has figured this out already, somewhere.

by john at September 26, 2016 03:23 PM

ZapperZ - Physics and Physicists

10 Years Of Not Even Wrong
Physics World has a provocative article and podcast to commemorate the 10-year anniversary of Peter Woit's devastating criticism of String Theory in his book "Not Even Wrong".

Not Even Wrong coincided with the publication of another book – The Trouble with Physics – that had a similar theme and tone, penned by Woit’s friend and renowned physicist Lee Smolin. Together, the two books put the theory and its practitioners under a critical spotlight and took string theory’s supposed inadequacies to task. The books sparked a sensation both in the string-theory community and in the wider media, which until then had heard only glowing reports of the theory’s successes. 

Interestingly enough, the few students I've encountered who told me that they want to go into string theory were not even aware of Woit's book. I can understand NOT WANTING to read it, but not even being aware of it and what it is about sounds rather ... naive. This is a prominent physicist who produced a series of undeniable criticisms of a particular field of study that you want to go into. Not only should you be aware of it, but you need to read it and figure it out.

It is still a great book to read even if it is 10 years old now.


by ZapperZ at September 26, 2016 02:05 PM

September 25, 2016

Sean Carroll - Preposterous Universe

Live Q&As, Past and Future

On Friday I had a few minutes free, and did an experiment: put my iPhone on a tripod, pointed it at myself, and did a live video on my public Facebook page, taking questions from anyone who happened by. There were some technical glitches, as one might expect from a short-notice happening. The sound wasn’t working when I first started, and in the recording below the video fails (replacing the actual recording with a still image of me sideways, for inexplicable reasons) just when the sound starts working. (I don’t think this happened during the actual event, but maybe it did and everyone was too polite to mention it.) And for some reason the video keeps going long after the 20-some minutes for which I was actually recording.

But overall I think it was fun and potentially worth repeating. If I were to make this an occasional thing, how best to do it? This time around I literally just read off a selection of questions that people were typing into the Facebook comment box. Alternatively, I could just talk on some particular topic, or I could solicit questions ahead of time and pick out some good ones to answer in detail.

What do you folks think? Also — is Facebook Live the right tool for this? I know the kids these days use all sorts of different technologies. No guarantees that I’ll have time to do this regularly, but it’s worth contemplating.

What makes the most sense to talk about in live chats?

by Sean Carroll at September 25, 2016 08:32 PM

Lubos Motl - string vacua and pheno

Chen-Ning Yang against Chinese colliders
The plans to build the world's new greatest collider in China have many prominent supporters – including Shing-Tung Yau, Nima Arkani-Hamed, David Gross, Edward Witten – but SixthTone and South China Morning Post just informed us about a very prominent foe: Chen-Ning Yang, the more famous part of Lee-Yang and Yang-Mills.

He is about 94 years old now but his brain is very active and his influence may even be enough to kill the project(s).

The criticism is mainly framed as a criticism of CEPC (Circular Electron-Positron Collider), a 50-70-kilometer-long [by circumference] lepton accelerator. But I guess that if the relevant people decided to build another hadron machine in China, and recall that SPPC (Super Proton-Proton Collider) is supposed to be located in the same tunnel, his criticism would be about the same. In other words, Yang is against all big Chinese colliders. If you have time, read these 403 pages on the CEPC-SPPC project. Yang may arguably make all this work futile by spitting a few milliliters of saliva.

He wrote his essay for a Chinese newspaper 3 days ago,
China shouldn't build big colliders today (autom. EN; orig. CN)
The journalists frame this opinion as an exchange with Shing-Tung Yau who famously co-wrote a pro-Chinese-collider book.

My Chinese isn't flawless. Also, his opinions and arguments aren't exactly innovative. But let me sketch what he's saying.

He says that Yau has misinterpreted his views when he said that Yang was incomprehensibly against the further progress in high-energy physics. Yang claims to be against the Chinese colliders only. Well, I wouldn't summarize his views in this way after I have read the whole op-ed.

His reasons to oppose the accelerator are:
  1. In Texas, the SSC turned out to be painful and a "bottomless pit" or a "black hole". Yang suggests it must always be the case – well, it wasn't really the case with the LHC. And he suggests that $10-$20 billion is too much.
  2. China is just a developing country. Its GDP per capita is below that of Brazil, Mexico, or Malaysia. There are poor farmers, need to pay for the environment(alism), health, medicine etc. and those should be problems of a higher priority.
  3. The collider would also steal the money from other fields of science.
  4. Supporters of the collider argue that the fundamental theory isn't complete – because gravity is missing and unification hasn't been understood; and they want to find evidence of SUSY. However, Yang is eager to say lots of the usual anti-SUSY and anti-HEP clichés. SUSY has no experimental evidence – funny, that's exactly why people keep on dreaming about more powerful experiments.
  5. High-energy physics hasn't improved human lives in the last 70 years and won't do so. This item is the main one – but not only one – suggesting that the Chinese project isn't the only problem for Yang.
  6. China, and IHEP in particular, hasn't achieved anything in high-energy physics. Its contributions remain below 1% of the world's. Also, if someone gets the Nobel prize for a discovery, he will probably be a non-Chinese.
  7. He recommends cheaper investments – to new ways to accelerate particles; and to work on theory, e.g. string theory.
You can see that it's a mixed bag with some (but not all) of the anti-HEP slogans combined with some left-wing propaganda. I am sorry but especially the social arguments are just bogus.

What decides a country's ability to undertake a big project is primarily the total GDP, not the GDP per capita. Ancient China built the Great Wall despite the fact that its GDP per capita was much lower than today's. Those people couldn't buy a single Xiaomi Redmi 3 Android smartphone for their salary (I am considering this octa-core $150 smartphone – which seems to be the #1 bestselling Android phone in Czechia now – as a gift now). But they still built the wall. And today, Chinese companies are among the most important producers of many high-tech products; I just mentioned one example. As you may see with your naked eyes, this capability in no way contradicts China's low GDP per capita.

The idea that a country makes much social progress by redistributing the money it has among all the people is just a communist delusion. That's how China worked before it started to grow some 30 years ago. You just shouldn't spend or devour all this money – for healthcare of the poor citizens etc. – if you want China to qualitatively improve. You need to invest into sufficiently well-defined things. You may take those $10-$20 billion for the Chinese collider projects and spread them among the Chinese citizens. But that will bring some $10-$20 to each person – perhaps one dinner in a fancier restaurant or one package of good cigarettes. It's normal for the poor people to spend the money in such a way that the wealth quickly evaporates. The concentration of the capital is even more needed in poor countries that want to grow.

Also, China's contribution to high-energy physics – and other fields – is limited now. But that's mostly because the moves and investments that would integrate China into the world's scientific community weren't made in the past, or at least were not numerous.

Yang's remarks about the hypothetical Nobel prizes are specious, too. I don't know who will get Nobel prizes for discoveries at Chinese colliders, if anyone, so it's a pure speculation. But the Nobel prize money is clearly not why colliders are being built. Higgs and Englert got some $1 million from the Nobel committee while the LHC cost $10 billion or so. The prizes can in no way be considered the "repayment of the investments". What the experiments like that bring to science and the mankind is much more than some personal wealth for several people.

You may see that regardless of the recipients of the prize money (and regardless of the disappointing pro-SM results coming from the LHC), everyone understands that because of the LHC and its status, Europe has become essential in state-of-the-art particle physics. Many people may like to say unfriendly things about particle physics but in the end, I think that they also understand that at least among well-defined and concentrated disciplines, particle physics is the royal discipline of science. A "center of mass" of this discipline is located on the Swiss-French border. In ten years, China could take this leadership from Europe. This would be a benefit for China that is far more valuable than $10-$20 billion. China – whose annual GDP was some $11 trillion in 2015 – is paying much more money for various other things.

Off-topic: Some news reports talk about a new "Madala boson". It seems to be all about this 2-weeks-old 5-page-long hep-ph preprint presenting a two-Higgs-doublet model that also claims to say something about the composition of dark matter (which is said to be composed of a new scalar \(\chi\)). I've seen many two-Higgs-doublet papers and papers about dark matter and I don't see a sense in which this paper is more important or more persuasive.

The boson should already have been seen in the LHC data, but it isn't.

Update on the Chinese collider:

On September 25th or so, Maria Spiropulu linked to this new Chinese article where 2+2 scholars support/dismiss the Chinese collider plans. David Gross' pro-collider story is the most detailed argumentation.

by Luboš Motl at September 25, 2016 05:08 AM

John Baez - Azimuth

Struggles with the Continuum (Part 8)

We’ve been looking at how the continuum nature of spacetime poses problems for our favorite theories of physics—problems with infinities. Last time we saw a great example: general relativity predicts the existence of singularities, like black holes and the Big Bang. I explained exactly what these singularities really are. They’re not points or regions of spacetime! They’re more like ways for a particle to ‘fall off the edge of spacetime’. Technically, they are incomplete timelike or null geodesics.

The next step is to ask whether these singularities rob general relativity of its predictive power. The ‘cosmic censorship hypothesis’, proposed by Penrose in 1969, claims they do not.

In this final post I’ll talk about cosmic censorship, and conclude with some big questions… and a place where you can get all these posts in a single file.

Cosmic censorship

To say what we want to rule out, we must first think about what behaviors we consider acceptable. Consider first a black hole formed by the collapse of a star. According to general relativity, matter can fall into this black hole and ‘hit the singularity’ in a finite amount of proper time, but nothing can come out of the singularity.

The time-reversed version of a black hole, called a ‘white hole’, is often considered more disturbing. White holes have never been seen, but they are mathematically valid solutions of Einstein’s equation. In a white hole, matter can come out of the singularity, but nothing can fall in. Naively, this seems to imply that the future is unpredictable given knowledge of the past. Of course, the same logic applied to black holes would say the past is unpredictable given knowledge of the future.

If white holes are disturbing, perhaps the Big Bang should be more so. In the usual solutions of general relativity describing the Big Bang, all matter in the universe comes out of a singularity! More precisely, if one follows any timelike geodesic back into the past, it becomes undefined after a finite amount of proper time. Naively, this may seem a massive violation of predictability: in this scenario, the whole universe ‘sprang out of nothing’ about 14 billion years ago.

However, in all three examples so far—astrophysical black holes, their time-reversed versions and the Big Bang—spacetime is globally hyperbolic. I explained what this means last time. In simple terms, it means we can specify initial data at one moment in time and use the laws of physics to predict the future (and past) throughout all of spacetime. How is this compatible with the naive intuition that a singularity causes a failure of predictability?

For any globally hyperbolic spacetime M, one can find a smoothly varying family of Cauchy surfaces S_t (t \in \mathbb{R}) such that each point of M lies on exactly one of these surfaces. This amounts to a way of chopping spacetime into ‘slices of space’ for various choices of the ‘time’ parameter t. For an astrophysical black hole, the singularity is in the future of all these surfaces. That is, an incomplete timelike or null geodesic must go through all these surfaces S_t before it becomes undefined. Similarly, for a white hole or the Big Bang, the singularity is in the past of all these surfaces. In either case, the singularity cannot interfere with our predictions of what occurs in spacetime.

A more challenging example is posed by the Kerr–Newman solution of Einstein’s equation coupled to the vacuum Maxwell equations. When

e^2 + (J/m)^2 < m^2

this solution describes a rotating charged black hole with mass m, charge e and angular momentum J in units where c = G = 1. However, an electron violates this inequality. In 1968, Brandon Carter pointed out that if the electron were described by the Kerr–Newman solution, it would have a gyromagnetic ratio of g = 2, much closer to the true answer than a classical spinning sphere of charge, which gives g = 1. But since

e^2 + (J/m)^2 > m^2

this solution gives a spacetime that is not globally hyperbolic: it has closed timelike curves! It also contains a ‘naked singularity’. Roughly speaking, this is a singularity that can be seen by arbitrarily faraway observers in a spacetime whose geometry asymptotically approaches that of Minkowski spacetime. The existence of a naked singularity implies a failure of global hyperbolicity.
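Plugging in numbers makes the point vividly. Here is a back-of-envelope check (my own, using standard SI constants converted to geometrized length units; not from the original post):

```python
# All quantities converted to lengths, in units with G = c = 1.
G_SI = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
c_SI = 2.998e8      # speed of light, m/s
k_SI = 8.988e9      # Coulomb constant, N m^2 C^-2
hbar = 1.055e-34    # reduced Planck constant, J s
m_e  = 9.109e-31    # electron mass, kg
q_e  = 1.602e-19    # elementary charge, C

m = G_SI * m_e / c_SI**2                # mass as a length   (~7e-58 m)
e = (k_SI * G_SI)**0.5 * q_e / c_SI**2  # charge as a length (~1e-36 m)
a = (hbar / 2) / (m_e * c_SI)           # J/m as a length    (~2e-13 m)

# The electron violates the black-hole condition by a huge margin,
# dominated by the spin term (J/m)^2:
assert e**2 + a**2 > m**2
print(f"(e^2 + (J/m)^2) / m^2 ≈ {(e**2 + a**2) / m**2:.0e}")
```

So the would-be Kerr–Newman electron is nowhere near the black-hole regime, which is why the corresponding spacetime has the pathologies described above.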

The cosmic censorship hypothesis comes in a number of forms. The original version due to Penrose is now called ‘weak cosmic censorship’. It asserts that in a spacetime whose geometry asymptotically approaches that of Minkowski spacetime, gravitational collapse cannot produce a naked singularity.

In 1991, Preskill and Thorne made a bet against Hawking in which they claimed that weak cosmic censorship was false. Hawking conceded this bet in 1997 when a counterexample was found. This features finely-tuned infalling matter poised right on the brink of forming a black hole. It almost creates a region from which light cannot escape—but not quite. Instead, it creates a naked singularity!

Given the delicate nature of this construction, Hawking did not give up. Instead he made a second bet, which says that weak cosmic censorship holds ‘generically’ — that is, for an open dense set of initial conditions.

In 1999, Christodoulou proved that for spherically symmetric solutions of Einstein’s equation coupled to a massless scalar field, weak cosmic censorship holds generically. While spherical symmetry is a very restrictive assumption, this result is a good example of how, with plenty of work, we can make progress in rigorously settling the questions raised by general relativity.

Indeed, Christodoulou has been a leader in this area. For example, the vacuum Einstein equations have solutions describing gravitational waves, much as the vacuum Maxwell equations have solutions describing electromagnetic waves. However, gravitational waves can actually form black holes when they collide. This raises the question of the stability of Minkowski spacetime. Must sufficiently small perturbations of the Minkowski metric go away in the form of gravitational radiation, or can tiny wrinkles in the fabric of spacetime somehow amplify themselves and cause trouble—perhaps even a singularity? In 1993, together with Klainerman, Christodoulou proved that Minkowski spacetime is indeed stable. Their proof fills a 514-page book.

In 2008, Christodoulou completed an even longer rigorous study of the formation of black holes. This can be seen as a vastly more detailed look at questions which Penrose’s original singularity theorem addressed in a general, preliminary way. Nonetheless, there is much left to be done to understand the behavior of singularities in general relativity.


In this series of posts, we’ve seen that in every major theory of physics, challenging mathematical questions arise from the assumption that spacetime is a continuum. The continuum threatens us with infinities! Do these infinities threaten our ability to extract predictions from these theories—or even our ability to formulate these theories in a precise way?

We can answer these questions, but only with hard work. Is this a sign that we are somehow on the wrong track? Is the continuum as we understand it only an approximation to some deeper model of spacetime? Only time will tell. Nature is providing us with plenty of clues, but it will take patience to read them correctly.

For more

To delve deeper into singularities and cosmic censorship, try this delightful book, which is free online:

• John Earman, Bangs, Crunches, Whimpers and Shrieks: Singularities and Acausalities in Relativistic Spacetimes, Oxford U. Press, Oxford, 1993.

To read this whole series of posts in one place, with lots more references and links, see:

• John Baez, Struggles with the continuum.

by John Baez at September 25, 2016 01:00 AM

September 24, 2016

Tommaso Dorigo - Scientificblogging

A Book By Guido Tonelli
Yesterday I read with interest and curiosity some pages of a book on the search and discovery of the Higgs boson, which was published last March by Rizzoli (in Italian only, at least for the time being). The book, authored by physics professor and ex CMS spokesperson Guido Tonelli, is titled "La nascita imperfetta delle cose" ("The imperfect birth of things"). 


by Tommaso Dorigo at September 24, 2016 04:38 PM

September 23, 2016

ZapperZ - Physics and Physicists

Without Direction, or Has No Preferred Direction?
This is why popular news coverage of science can often make subtle mistakes that might change the meaning of something.

This UPI news coverage talks about a recent publication in PRL that studied the CMB and found no large-scale anisotropy in our universe. What this means is that our universe, based on the CMB, is isotropic, i.e. the same in all directions, and that our universe has no detectable rotation.

However, instead of saying that, it keeps harping on the idea that the universe "has no direction". It has directions. In fact, it has infinitely many directions. It is just that it looks the same in all of them. Not having a preferred direction, or being isotropic, is not exactly the same as "having no direction".

If you read the APS Physics article accompanying this paper, you'll notice that such a phrase was never used.

I don't know. As a layperson, if you read that UPI news article, what impression does it leave you with? Or am I making a mountain out of a molehill here?


by ZapperZ at September 23, 2016 11:47 AM

John Baez - Azimuth

Struggles with the Continuum (Part 7)

Combining electromagnetism with relativity and quantum mechanics led to QED. Last time we saw the immense struggles with the continuum this caused. But combining gravity with relativity led Einstein to something equally remarkable: general relativity.

In general relativity, infinities coming from the continuum nature of spacetime are deeply connected to its most dramatic successful predictions: black holes and the Big Bang. In this theory, the density of the Universe approaches infinity as we go back in time toward the Big Bang, and the density of a star approaches infinity as it collapses to form a black hole. Thus we might say that instead of struggling against infinities, general relativity accepts them and has learned to live with them.

General relativity does not take quantum mechanics into account, so the story is not yet over. Many physicists hope that quantum gravity will eventually save physics from its struggles with the continuum! Since quantum gravity is far from being understood, this remains just a hope. This hope has motivated a profusion of new ideas on spacetime: too many to survey here. Instead, I’ll focus on the humbler issue of how singularities arise in general relativity—and why they might not rob this theory of its predictive power.

General relativity says that spacetime is a 4-dimensional Lorentzian manifold. Thus, it can be covered by patches equipped with coordinates, so that in each patch we can describe points by lists of four numbers. Any curve \gamma(s) going through a point then has a tangent vector v whose components are v^\mu = d \gamma^\mu(s)/ds. Furthermore, given two tangent vectors v,w at the same point we can take their inner product

g(v,w) = g_{\mu \nu} v^\mu w^\nu

where as usual we sum over repeated indices, and g_{\mu \nu} is a 4 \times 4 matrix called the metric, depending smoothly on the point. We require that at any point we can find some coordinate system where this matrix takes the usual Minkowski form:

\displaystyle{  g = \left( \begin{array}{cccc} -1 & 0 &0 & 0 \\ 0 & 1 &0 & 0 \\ 0 & 0 &1 & 0 \\ 0 & 0 &0 & 1 \\ \end{array}\right). }

However, as soon as we move away from our chosen point, the form of the matrix g in these particular coordinates may change.
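In code, the inner product at such a point is just a matrix sandwich. A minimal sketch (mine, following the sign convention used here, where timelike vectors have negative norm):

```python
import numpy as np

# Minkowski form of the metric in coordinates (t, x, y, z):
g = np.diag([-1.0, 1.0, 1.0, 1.0])

def inner(v, w):
    """g(v, w) = g_{mu nu} v^mu w^nu (the repeated-index sum)."""
    return v @ g @ w

v = np.array([1.0, 0.2, 0.0, 0.0])   # moves slower than light
w = np.array([1.0, 1.0, 0.0, 0.0])   # moves at light speed

assert inner(v, v) < 0               # timelike: g(v, v) < 0
assert np.isclose(inner(w, w), 0.0)  # null: g(w, w) = 0
```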

General relativity says how the metric is affected by matter. It does this in a single equation, Einstein’s equation, which relates the ‘curvature’ of the metric at any point to the flow of energy-momentum through that point. To define the curvature, we need some differential geometry. Indeed, Einstein had to learn this subject from his mathematician friend Marcel Grossman in order to write down his equation. Here I will take some shortcuts and try to explain Einstein’s equation with a bare minimum of differential geometry. For how this approach connects to the full story, and a list of resources for further study of general relativity, see:

• John Baez and Emory Bunn, The meaning of Einstein’s equation.

Consider a small round ball of test particles that are initially all at rest relative to each other. This requires a bit of explanation. First, because spacetime is curved, it only looks like Minkowski spacetime—the world of special relativity—in the limit of very small regions. The usual concepts of ’round’ and ‘at rest relative to each other’ only make sense in this limit. Thus, all our forthcoming statements are precise only in this limit, which of course relies on the fact that spacetime is a continuum.

Second, a test particle is a classical point particle with so little mass that while it is affected by gravity, its effects on the geometry of spacetime are negligible. We assume our test particles are affected only by gravity, no other forces. In general relativity this means that they move along timelike geodesics. Roughly speaking, these are paths that go slower than light and bend as little as possible. We can make this precise without much work.

For a path in space to be a geodesic means that if we slightly vary any small portion of it, it can only become longer. However, a path \gamma(s) in spacetime traced out by a particle moving slower than light must be ‘timelike’, meaning that its tangent vector v = \gamma'(s) satisfies g(v,v) < 0. We define the proper time along such a path from s = s_0 to s = s_1 to be

\displaystyle{  \int_{s_0}^{s_1} \sqrt{-g(\gamma'(s),\gamma'(s))} \, ds }

This is the time ticked out by a clock moving along that path. A timelike path is a geodesic if the proper time can only decrease when we slightly vary any small portion of it. Particle physicists prefer the opposite sign convention for the metric, and then we do not need the minus sign under the square root. But the fact remains the same: timelike geodesics locally maximize the proper time.
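A tiny numerical check of the proper-time formula (my own toy example, not from the text): for the straight path \gamma(s) = (s, vs, 0, 0) in Minkowski spacetime, the integral reduces to \sqrt{1 - v^2}\,(s_1 - s_0), the familiar time-dilation factor.

```python
import numpy as np

g = np.diag([-1.0, 1.0, 1.0, 1.0])   # Minkowski metric, signature (-,+,+,+)
v = 0.6                              # coordinate speed, in units with c = 1
s0, s1, n = 0.0, 10.0, 100_001

s = np.linspace(s0, s1, n)
tangent = np.array([1.0, v, 0.0, 0.0])               # gamma'(s), constant here
integrand = np.sqrt(-tangent @ g @ tangent) * np.ones_like(s)

# Trapezoid rule for the proper-time integral:
tau = float(np.sum(0.5 * (integrand[:-1] + integrand[1:]) * np.diff(s)))

assert np.isclose(tau, np.sqrt(1 - v**2) * (s1 - s0))  # = 8.0 here
```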

Actual particles are not test particles! First, the concept of test particle does not take quantum theory into account. Second, all known particles are affected by forces other than gravity. Third, any actual particle affects the geometry of the spacetime it inhabits. Test particles are just a mathematical trick for studying the geometry of spacetime. Still, a sufficiently light particle that is affected very little by forces other than gravity can be approximated by a test particle. For example, an artificial satellite moving through the Solar System behaves like a test particle if we ignore the solar wind, the radiation pressure of the Sun, and so on.

If we start with a small round ball consisting of many test particles that are initially all at rest relative to each other, to first order in time it will not change shape or size. However, to second order in time it can expand or shrink, due to the curvature of spacetime. It may also be stretched or squashed, becoming an ellipsoid. This should not be too surprising, because any linear transformation applied to a ball gives an ellipsoid.

Let V(t) be the volume of the ball after a time t has elapsed, where time is measured by a clock attached to the particle at the center of the ball. Then in units where c = 8 \pi G = 1, Einstein’s equation says:

\displaystyle{  \left.{\ddot V\over V} \right|_{t = 0} = -{1\over 2} \left( \begin{array}{l} {\rm flow \; of \;} t{\rm -momentum \; in \; the \;\,} t {\rm \,\; direction \;} + \\ {\rm flow \; of \;} x{\rm -momentum \; in \; the \;\,} x {\rm \; direction \;} + \\ {\rm flow \; of \;} y{\rm -momentum \; in \; the \;\,} y {\rm \; direction \;} + \\ {\rm flow \; of \;} z{\rm -momentum \; in \; the \;\,} z {\rm \; direction} \end{array} \right) }

These flows here are measured at the center of the ball at time zero, and the coordinates used here take advantage of the fact that to first order, at any one point, spacetime looks like Minkowski spacetime.

The flows in Einstein’s equation are the diagonal components of a 4 \times 4 matrix T called the ‘stress-energy tensor’. The components T_{\alpha \beta} of this matrix say how much momentum in the \alpha direction is flowing in the \beta direction through a given point of spacetime. Here \alpha and \beta range from 0 to 3, corresponding to the t,x,y and z coordinates.

For example, T_{00} is the flow of t-momentum in the t-direction. This is just the energy density, usually denoted \rho. The flow of x-momentum in the x-direction is the pressure in the x direction, denoted P_x, and similarly for y and z. You may be more familiar with direction-independent pressures, but it is easy to manufacture a situation where the pressure depends on the direction: just squeeze a book between your hands!

Thus, Einstein’s equation says

\displaystyle{ {\ddot V\over V} \Bigr|_{t = 0} = -{1\over 2} (\rho + P_x + P_y + P_z) }

It follows that positive energy density and positive pressure both curve spacetime in a way that makes a freely falling ball of point particles tend to shrink. Since E = mc^2 and we are working in units where c = 1, ordinary mass density counts as a form of energy density. Thus a massive object will make a swarm of freely falling particles at rest around it start to shrink. In short, gravity attracts.

Already from this, gravity seems dangerously inclined to create singularities. Suppose that instead of test particles we start with a stationary cloud of ‘dust’: a fluid of particles having nonzero energy density but no pressure, moving under the influence of gravity alone. The dust particles will still follow geodesics, but they will affect the geometry of spacetime. Their energy density will make the ball start to shrink. As it does, the energy density \rho will increase, so the ball will tend to shrink ever faster, approaching infinite density in a finite amount of time. This in turn makes the curvature of spacetime become infinite in a finite amount of time. The result is a ‘singularity’.
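The finite-time blow-up can be illustrated with a toy calculation. If we (heuristically) promote the instantaneous relation \ddot V / V = -\rho/2 to an equation valid for all time, with \rho = \rho_0 V_0 / V for pressureless dust, then \ddot V = -\rho_0 V_0 / 2 is constant and a ball released from rest reaches zero volume, hence infinite density, at t = 2/\sqrt{\rho_0}. Here is a minimal Python sketch of that toy model; the extension of the instantaneous relation to an ODE is my assumption, not part of the argument above:

```python
import math

# Toy model of pressureless dust collapse, in units where c = 8*pi*G = 1.
# Heuristic assumption: promote the instantaneous relation
#     Vdotdot / V = -rho / 2
# to an ODE for all time, with rho = rho0 * V0 / V (dust conserves mass,
# so density scales inversely with volume).  Then Vdotdot equals the
# constant -rho0 * V0 / 2, and a ball released from rest collapses to
# zero volume at t = 2 / sqrt(rho0).

def collapse_time(rho0=1.0, V0=1.0, dt=1e-5):
    """Integrate V'' = -rho0*V0/2 from rest until the volume hits zero."""
    V, Vdot, t = V0, 0.0, 0.0
    a = -0.5 * rho0 * V0          # constant second derivative of V
    while V > 0.0:
        Vdot += a * dt            # semi-implicit Euler step
        V += Vdot * dt
        t += dt
    return t

t_num = collapse_time(rho0=1.0)
t_exact = 2.0 / math.sqrt(1.0)    # analytic collapse time for rho0 = 1
print(t_num, t_exact)
```

For \rho_0 = 1 the integrator reproduces the analytic collapse time t = 2 very closely, and the density \rho_0 V_0 / V diverges as V \to 0.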

In reality, matter is affected by forces other than gravity. Repulsive forces may prevent gravitational collapse. However, this repulsion creates pressure, and Einstein’s equation says that pressure also creates gravitational attraction! In some circumstances this can overwhelm whatever repulsive forces are present. Then the matter collapses, leading to a singularity—at least according to general relativity.

When a star more than 8 times the mass of our Sun runs out of fuel, its core suddenly collapses. The surface is thrown off explosively in an event called a supernova. Most of the energy—the equivalent of thousands of Earth masses—is released in a ten-minute burst of neutrinos, formed as a byproduct when protons and electrons combine to form neutrons. If the star’s mass is below 20 times that of our Sun, its core crushes down to a large ball of neutrons with a crust of iron and other elements: a neutron star.

However, this ball is unstable if its mass exceeds the Tolman–Oppenheimer–Volkoff limit, somewhere between 1.5 and 3 times that of our Sun. Above this limit, gravity overwhelms the repulsive forces that hold up the neutron star. And indeed, no neutron stars heavier than 3 solar masses have been observed. Thus, for very heavy stars, the endpoint of collapse is not a neutron star, but something else: a black hole, an object that bends spacetime so much that even light cannot escape.

If general relativity is correct, a black hole contains a singularity. Many physicists expect that general relativity breaks down inside a black hole, perhaps because of quantum effects that become important at strong gravitational fields. The singularity is considered a strong hint that this breakdown occurs. If so, the singularity may be a purely theoretical entity, not a real-world phenomenon. Nonetheless, everything we have observed about black holes matches what general relativity predicts. Thus, unlike all the other theories we have discussed, general relativity predicts infinities that are connected to striking phenomena that are actually observed.

The Tolman–Oppenheimer–Volkoff limit is not precisely known, because it depends on properties of nuclear matter that are not well understood. However, there are theorems that say singularities must occur in general relativity under certain conditions.

One of the first was proved by Raychaudhuri and Komar in the mid-1950s. It applies only to ‘dust’, and indeed it is a precise version of our verbal argument above. It introduced Raychaudhuri’s equation, which is the geometrical way of describing how spacetime curvature affects the motion of a small ball of test particles. It shows that under suitable conditions, the energy density must approach infinity in a finite amount of time along the path traced out by a dust particle.

The first required condition is that the flow of dust be initially converging, not expanding. The second condition, not mentioned in our verbal argument, is that the dust be ‘irrotational’, not swirling around. The third condition is that the dust particles be affected only by gravity, so that they move along geodesics. Due to the last two conditions, the Raychaudhuri–Komar theorem does not apply to collapsing stars.

The more modern singularity theorems eliminate these conditions. But they do so at a price: they require a more subtle concept of singularity! There are various possible ways to define this concept. They’re all a bit tricky, because a singularity is not a point or region in spacetime.

For our present purposes, we can define a singularity to be an ‘incomplete timelike or null geodesic’. As already explained, a timelike geodesic is the kind of path traced out by a test particle moving slower than light. Similarly, a null geodesic is the kind of path traced out by a test particle moving at the speed of light. We say a geodesic is ‘incomplete’ if it ceases to be well-defined after a finite amount of time. For example, general relativity says a test particle falling into a black hole follows an incomplete geodesic. In a rough-and-ready way, people say the particle ‘hits the singularity’. But the singularity is not a place in spacetime. What we really mean is that the particle’s path becomes undefined after a finite amount of time.

We need to be a bit careful about what we mean by ‘time’ here. For test particles moving slower than light this is easy, since we can parametrize a timelike geodesic by proper time. However, the tangent vector v = \gamma'(s) of a null geodesic has g(v,v) = 0, so a particle moving along a null geodesic does not experience any passage of proper time. Still, any geodesic, even a null one, has a family of preferred parametrizations. These differ only by changes of variable like this: s \mapsto as + b. By ‘time’ we really mean the variable s in any of these preferred parametrizations. Thus, if our spacetime is some Lorentzian manifold M, we say a geodesic \gamma \colon [s_0, s_1] \to M is incomplete if, parametrized in one of these preferred ways, it cannot be extended to a strictly longer interval.

The first modern singularity theorem was proved by Penrose in 1965. It says that if space is infinite in extent, and light becomes trapped inside some bounded region, and no exotic matter is present to save the day, either a singularity or something even more bizarre must occur. This theorem applies to collapsing stars. When a star of sufficient mass collapses, general relativity says that its gravity becomes so strong that light becomes trapped inside some bounded region. We can then use Penrose’s theorem to analyze the possibilities.

Shortly thereafter Hawking proved a second singularity theorem, which applies to the Big Bang. It says that if space is finite in extent, and no exotic matter is present, generically either a singularity or something even more bizarre must occur. The singularity here could be either a Big Bang in the past, a Big Crunch in the future, both—or possibly something else. Hawking also proved a version of his theorem that applies to certain Lorentzian manifolds where space is infinite in extent, as seems to be the case in our Universe. This version requires extra conditions.

There are some undefined phrases in this summary of the Penrose–Hawking singularity theorems, most notably these:

• ‘exotic matter’

• ‘singularity’

• ‘something even more bizarre’.

So, let me say a bit about each.

These singularity theorems precisely specify what is meant by ‘exotic matter’. This is matter for which

\rho + P_x + P_y + P_z < 0

at some point, in some coordinate system. By Einstein’s equation, this would make a small ball of freely falling test particles tend to expand. In other words, exotic matter would create a repulsive gravitational field. No matter of this sort has ever been found; the matter we know obeys the so-called ‘dominant energy condition’

\rho + P_x + P_y + P_z \ge 0

The Penrose–Hawking singularity theorems also say what counts as ‘something even more bizarre’. An example would be a closed timelike curve. A particle following such a path would move slower than light yet eventually reach the same point where it started—and not just the same point in space, but the same point in spacetime! If you could do this, perhaps you could wait, see if it would rain tomorrow, and then go back and decide whether to buy an umbrella today. There are certainly solutions of Einstein’s equation with closed timelike curves. The first interesting one was found by Einstein’s friend Gödel in 1949, as part of an attempt to probe the nature of time. However, closed timelike curves are generally considered less plausible than singularities.

In the Penrose–Hawking singularity theorems, ‘something even more bizarre’ means that spacetime is not ‘globally hyperbolic’. To understand this, we need to think about when we can predict the future or past given initial data. When studying field equations like Maxwell’s theory of electromagnetism or Einstein’s theory of gravity, physicists like to specify initial data on space at a given moment of time. However, in general relativity there is considerable freedom in how we choose a slice of spacetime and call it ‘space’. What should we require? For starters, we want a 3-dimensional submanifold S of spacetime that is ‘spacelike’: every vector v tangent to S should have g(v,v) > 0. However, we also want any timelike or null curve to hit S exactly once. A spacelike surface with this property is called a Cauchy surface, and a Lorentzian manifold containing a Cauchy surface is said to be globally hyperbolic. There are many theorems justifying the importance of this concept. Global hyperbolicity excludes closed timelike curves, but also other bizarre behavior.

By now the original singularity theorems have been greatly generalized and clarified. Hawking and Penrose gave a unified treatment of both theorems in 1970. The 1973 textbook by Hawking and Ellis gives a systematic introduction to this subject. Hawking gave an elegant informal overview of the key ideas in 1994, and a paper by Garfinkle and Senovilla reviews the subject and its history up to 2015.

If we accept that general relativity really predicts the existence of singularities in physically realistic situations, the next step is to ask whether they rob general relativity of its predictive power. I’ll talk about that next time!

by John Baez at September 23, 2016 01:00 AM

September 22, 2016

CERN Bulletin

CERN Bulletin Issue No. 38-39/2016
Link to e-Bulletin Issue No. 38-39/2016 | Link to all articles in this issue.

September 22, 2016 08:51 AM

September 21, 2016

Clifford V. Johnson - Asymptotia

Super Nailed It…

On the sofa, during a moment while we watched Captain America: Civil War over the weekend:

Amy: Wait, what...? Why's Cat-Woman in this movie?
Me: Er... (hesitating, not wanting to spoil what is to come...)
Amy: Isn't she a DC character?
Me: Well... (still hesitating, but secretly impressed by her awareness of the different universes... hadn't realized she was paying attention all these years.)
Amy: So who's going to show up next? Super-Dude? Bat-Fella? Wonder-Lady? (Now she's really showing off and poking fun.)
Me: We'll see... (Now choking with laughter on dinner...)

I often feel bad subjecting my wife to this stuff, but this alone was worth it.

For those who know the answers and are wondering, I held off on launching into a discussion about the fascinating history of Marvel, representation of people of African descent in superhero comics (and now movies and TV), the [...] Click to continue reading this post

The post Super Nailed It… appeared first on Asymptotia.

by Clifford at September 21, 2016 07:00 PM

Symmetrybreaking - Fermilab/SLAC

Small cat, big science

The proposed International Linear Collider has a fuzzy new ally.

Hello Kitty is known throughout Japan as the poster girl (poster cat?) of kawaii, a segment of pop culture built around all things cute.

But recently she took on a new job: representing the proposed International Linear Collider.

At the August International Conference on High Energy Physics in Chicago, ILC boosters passed out folders featuring the white kitty wearing a pair of glasses, a shirt with pens in the pocket and a bow with an L for “Lagrangian,” the name of the long equation in the background. Some picture the iconic cat sitting on an ILC cryomodule.

Hello Kitty has previously tried activities such as cooking, photography and even scuba diving. This may be her first foray into international research.

Japan is considering hosting the ILC, a proposed accelerator that could mass-produce Higgs bosons and other fundamental particles. Japan’s Advanced Accelerator Association partnered with the company Sanrio to create the special kawaii gear in the hopes of drawing attention to the large-scale project.

The ILC: Science you’ll want to snuggle.

by Ricarda Laasch at September 21, 2016 04:15 PM

ZapperZ - Physics and Physicists

Recap of ICHEP 2016
If you missed the recent brouhaha about the missing 750 GeV bump, here is the recap of ICHEP conference held recently in Chicago.


by ZapperZ at September 21, 2016 01:21 PM

Lubos Motl - string vacua and pheno

Nanopoulos' and pals' model is back to conquer the throne
Once upon a time, there was an evil witch-and-bitch named Cernette whose mass was \(750\GeV\) and who wanted to become the queen instead of the beloved king.

Fortunately, that witch-and-bitch has been killed and what we're experiencing is
The Return of the King: No-Scale \({\mathcal F}\)-\(SU(5)\),
Li, Maxin, and Nanopoulos point out. It's great news that the would-be \(750\GeV\) particle has been liquidated. They revisited the predictions of their class of F-theory-based, grand unified, no-scale models and found some consequences that they surprisingly couldn't have told us about in the previous 10 papers and that we should be happy about, anyway.

First, they suddenly claim that the theoretical considerations within their scheme are enough to assert that the mass of the gluino exceeds \(1.9\TeV\),\[

m_{\tilde g} \geq 1.9\TeV.

\] This is an excellent, confirmed prediction of a supersymmetric theory because the LHC experiments also say that with these conventions, the mass of the gluino exceeds \(1.9\TeV\). ;-)

Just to be sure, I did observe the general gradual increase of the masses predicted by their models so I don't take the newest ones too seriously. But I believe that there is still some justification so the probability could be something like 0.1% that in a year or two, we will consider their model to be a strong contender that has been partly validated by the experiments.

In the newest paper, they want the Higgs and top mass to be around\[

m_h\approx 125\GeV, \quad m_{\rm top} \approx 174\GeV

\] while the new SUSY-related parameters are\[

\tan\beta &\approx 25\\
M_V^{\rm flippon}&\approx (30-80)\TeV\\
M_{\chi^1_0}&\approx 380\GeV\\
M_{\tilde \tau^\pm} &\approx M_{\chi^1_0}+1 \GeV\\
M_{\tilde t_1} &\approx 1.7\TeV\\
M_{\tilde u_R} &\approx 2.7\TeV\\
M_{\tilde g} &\approx 2.1\TeV

\] while the cosmological parameter \(\Omega h^2\approx 0.118\), the anomalous muon's magnetic moment \(\Delta a_\mu\approx 2\times 10^{-10}\), the branching ratio of a bottom decay \(Br(b\to s\gamma)\approx 0.00035\), the muon pair branching ratio for a B-meson \(Br(B^0_s\to \mu^+\mu^-)\approx 3.2\times 10^{-9}\), the spin-independent cross section \(\sigma_{SI}\approx (1.0-1.5)\times 10^{-11}\,{\rm pb}\) and \(\sigma_{SD} \approx (4-6)\times 10^{-9}\,{\rm pb}\), and the proton lifetime\[

\tau (p\to e^+ \pi^0) \approx 1.3\times 10^{35}\,{\rm years}.

\] Those are cool, specific predictions that are almost independent of the choice of the point in their parameter space. If one takes those claims seriously, theirs is a highly predictive theory.

But one reason I wrote this blog post was their wonderfully optimistic, fairy-tale-styled rhetoric. For example, the second part of their conclusions says:
While SUSY enthusiasts have endured several setbacks over the prior few years amidst the discouraging results at the LHC in the search for supersymmetry, it is axiomatic that as a matter of course, great triumph emerges from momentary defeat. As the precession of null observations at the LHC has surely dampened the spirits of SUSY proponents, the conclusion of our analysis here indicates that the quest for SUSY may just be getting interesting.
So dear SUSY proponents, just don't despair, return to your work, and get ready for the great victory.

Off-topic: Santa Claus is driving a Škoda and he parks on the roofs whenever he brings gifts to kids in the Chinese countryside. What a happy driver.

by Luboš Motl at September 21, 2016 10:54 AM

CERN Bulletin

There’s more to particle physics at CERN than colliders

CERN’s scientific programme must be compelling, unique, diverse, and integrated into the global landscape of particle physics. One of the Laboratory’s primary goals is to provide a diverse range of excellent physics opportunities and to put its unique facilities to optimum use, maximising the scientific return.


In this spirit, we have recently established a Physics Beyond Colliders study group with a mandate to explore the unique opportunities offered by the CERN accelerator complex to address some of today’s outstanding questions in particle physics through projects complementary to high-energy colliders and other initiatives in the world. The study group will provide input to the next update of the European Strategy for Particle Physics.

The process kicked off with a two-day workshop at CERN on 6 and 7 September, organised by the study group conveners: Joerg Jaeckel (Heidelberg), Mike Lamont (CERN) and Claude Vallée (CPPM Marseille and DESY). Its purpose was to present experimental and theoretical ideas, and to hear proposals for compelling experiments that can be done at the extremely versatile CERN accelerator complex. From the linacs to the SPS, CERN accelerators are able to deliver high-intensity beams across a broad range of energies, particle types and time structure.

Over 300 people attended the workshop, some three quarters coming from outside CERN. The call for proposals resulted in around 30 submissions for talks, with about two third of those being discussed at the workshop. It was interesting to see a spirit of collaborative competition, the hallmark of our field, building up as the workshop progressed. The proposals addressed questions of fundamental physics using approaches complementary to those for which colliders are best adapted. They covered, among others, searches for dark-sector particles, measurements of the proton electric dipole moment, studies of ultra-rare decays, searches for axions, and many more.

The next step for the study group is to organise the work to develop and consolidate the ideas that were heard at the workshop and others that can be put forward in the coming months. Working groups will examine the physics case and technical feasibility in the global context: indeed, carrying out research here that could be done elsewhere does not allow for the best use of the discipline’s resources globally.

I’m looking forward to following the interactions and activities that these working groups will foster over the coming years, and to reading the report that will be delivered in 2018 to inform the next European Strategy update. There’s a bright future, I’m sure, for physics beyond - and alongside - colliders at CERN.

Fabiola Gianotti

September 21, 2016 09:09 AM

John Baez - Azimuth

Struggles with the Continuum (Part 6)

Last time I sketched how physicists use quantum electrodynamics, or ‘QED’, to compute answers to physics problems as power series in the fine structure constant, which is

\displaystyle{ \alpha = \frac{1}{4 \pi \epsilon_0} \frac{e^2}{\hbar c} \approx \frac{1}{137.036} }
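As a sanity check, the numerical value above follows directly from the standard SI constants (CODATA 2018 values hard-coded below rather than pulled from a library):

```python
import math

# Fine structure constant  alpha = e^2 / (4*pi*eps0*hbar*c),
# evaluated with CODATA 2018 SI values.
e    = 1.602176634e-19      # elementary charge, C (exact since 2019)
eps0 = 8.8541878128e-12     # vacuum permittivity, F/m
hbar = 1.054571817e-34      # reduced Planck constant, J*s
c    = 299792458.0          # speed of light, m/s (exact)

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(alpha, 1.0 / alpha)   # roughly 0.0072974 and 137.036
```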

I concluded with a famous example: the magnetic moment of the electron. With a truly heroic computation, physicists have used QED to compute this quantity up to order \alpha^5. If we also take other Standard Model effects into account we get agreement to roughly one part in 10^{12}.

However, if we continue adding up terms in this power series, there is no guarantee that the answer converges. Indeed, in 1952 Freeman Dyson gave a heuristic argument that makes physicists expect that the series diverges, along with most other power series in QED!

The argument goes as follows. If these power series converged for small positive \alpha, they would have a nonzero radius of convergence, so they would also converge for small negative \alpha. Thus, QED would make sense for small negative values of \alpha, which correspond to imaginary values of the electron’s charge. If the electron had an imaginary charge, electrons would attract each other electrostatically, since the usual repulsive force between them is proportional to e^2. Thus, if the power series converged, we would have a theory like QED for electrons that attract rather than repel each other.

However, there is a good reason to believe that QED cannot make sense for electrons that attract. The reason is that it describes a world where the vacuum is unstable. That is, there would be states with arbitrarily large negative energy containing many electrons and positrons. Thus, we expect that the vacuum could spontaneously turn into electrons and positrons together with photons (to conserve energy). Of course, this is not a rigorous proof that the power series in QED diverge: just an argument that it would be strange if they did not.

To see why electrons that attract could have arbitrarily large negative energy, consider a state \psi with a large number N of such electrons inside a ball of radius R. We require that these electrons have small momenta, so that nonrelativistic quantum mechanics gives a good approximation to the situation. Since its momentum is small, the kinetic energy of each electron is a small fraction of its rest energy m_e c^2. If we let \langle \psi, E \psi\rangle be the expected value of the total rest energy and kinetic energy of all the electrons, it follows that \langle \psi, E\psi \rangle is approximately proportional to N.

The Pauli exclusion principle puts a limit on how many electrons with momentum below some bound can fit inside a ball of radius R. This number is asymptotically proportional to the volume of the ball. Thus, we can assume N is approximately proportional to R^3. It follows that \langle \psi, E \psi \rangle is approximately proportional to R^3.

There is also the negative potential energy to consider. Let V be the operator for potential energy. Since we have N electrons attracting each other through a 1/r potential, and each pair contributes to the potential energy, we see that \langle \psi , V \psi \rangle is approximately proportional to -N^2 R^{-1}, or -R^5. Since R^5 grows faster than R^3, we can make the expected energy \langle \psi, (E + V) \psi \rangle arbitrarily large and negative as N,R \to \infty.
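The competition between the two scalings can be sketched in a couple of lines. Here a and b are arbitrary positive constants standing in for the unspecified proportionality factors; only the powers of R matter:

```python
# Dyson's scaling argument in miniature: with N ~ R^3 electrons in a ball
# of radius R, the positive rest-plus-kinetic energy grows like R^3 while
# the attractive pairwise potential energy grows like -N^2 / R ~ -R^5.
# a and b are arbitrary positive stand-ins for the proportionality factors.

def total_energy(R, a=1.0, b=1.0):
    return a * R**3 - b * R**5

energies = [total_energy(R) for R in (1, 2, 5, 10, 100)]
print(energies)   # decreasing without bound as R grows
```

Whatever the values of a and b, the R^5 term eventually dominates, so the expected energy has no lower bound.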

Note the interesting contrast between this result and some previous ones we have seen. In Newtonian mechanics, the energy of particles attracting each other with a 1/r potential is unbounded below. In quantum mechanics, thanks to the uncertainty principle, the energy is bounded below for any fixed number of particles. However, quantum field theory allows for the creation of particles, and this changes everything! Dyson’s disaster arises because the vacuum can turn into a state with arbitrarily large numbers of electrons and positrons. This disaster only occurs in an imaginary world where \alpha is negative—but it may be enough to prevent the power series in QED from having a nonzero radius of convergence.

We are left with a puzzle: how can perturbative QED work so well in practice, if the power series in QED diverge?

Much is known about this puzzle. There is an extensive theory of ‘Borel summation’, which allows one to extract well-defined answers from certain divergent power series. For example, consider a particle of mass m on a line in a potential

V(x) = x^2 + \beta x^4

When \beta \ge 0 this potential is bounded below, but when \beta < 0 it is not: classically, it describes a particle that can shoot to infinity in a finite time. Let H = K + V be the quantum Hamiltonian for this particle, where K is the usual operator for the kinetic energy and V is the operator for potential energy. When \beta \ge 0, the Hamiltonian H is essentially self-adjoint on the set of smooth wavefunctions that vanish outside a bounded interval. This means that the theory makes sense. Moreover, in this case H has a ‘ground state’: a state \psi whose expected energy \langle \psi, H \psi \rangle is as low as possible. Call this expected energy E(\beta). One can show that E(\beta) depends smoothly on \beta for \beta \ge 0, and one can write down a Taylor series for E(\beta).

On the other hand, when \beta < 0 the Hamiltonian H is not essentially self-adjoint. This means that the quantum mechanics of a particle in this potential is ill-behaved when \beta < 0. Heuristically speaking, the problem is that such a particle could tunnel through the barrier given by the local maxima of V(x) and shoot off to infinity in a finite time.

This situation is similar to Dyson’s disaster, since we have a theory that is well-behaved for \beta \ge 0 and ill-behaved for \beta < 0. As before, the bad behavior seems to arise from our ability to convert an infinite amount of potential energy into other forms of energy. However, in this simpler situation one can prove that the Taylor series for E(\beta) does not converge. Barry Simon did this around 1969. Moreover, one can prove that Borel summation, applied to this Taylor series, gives the correct value of E(\beta) for \beta \ge 0. The same is known to be true for certain quantum field theories. Analyzing these examples, one can see why summing the first few terms of a power series can give a good approximation to the correct answer even though the series diverges. The terms in the series get smaller and smaller for a while, but eventually they become huge.
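The same phenomenon (terms shrinking for a while before blowing up, with Borel summation recovering the exact answer) can be exhibited with a standard toy example distinct from the oscillator above: Euler's divergent series \sum_k (-1)^k k! \beta^k, whose Borel sum is the convergent integral \int_0^\infty e^{-t}/(1+\beta t)\, dt. A small numerical sketch:

```python
import math

# Toy divergent series (Euler): S(beta) ~ sum_k (-1)^k * k! * beta^k.
# Its Borel sum is the convergent integral  int_0^inf exp(-t)/(1+beta*t) dt.

def partial_sum(beta, n):
    """Sum of the first n+1 terms of the divergent series."""
    return sum((-1) ** k * math.factorial(k) * beta ** k for k in range(n + 1))

def borel_sum(beta, tmax=60.0, steps=200000):
    """Borel sum via the trapezoid rule (the integrand decays like exp(-t))."""
    h = tmax / steps
    f = lambda t: math.exp(-t) / (1.0 + beta * t)
    s = 0.5 * (f(0.0) + f(tmax)) + sum(f(i * h) for i in range(1, steps))
    return s * h

beta = 0.1
exact = borel_sum(beta)
errs = [abs(partial_sum(beta, n) - exact) for n in range(25)]
print(exact)      # about 0.9156
print(errs)       # errors shrink until n ~ 1/beta = 10, then grow again
```

Truncating at the smallest term (around n ~ 1/\beta) gives an error of order e^{-1/\beta}; adding further terms only makes things worse, just as described above.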

Unfortunately, nobody has been able to carry out this kind of analysis for quantum electrodynamics. In fact, the current conventional wisdom is that this theory is inconsistent, due to problems at very short distance scales. In our discussion so far, we summed over Feynman diagrams with \le n vertices to get the first n terms of power series for answers to physical questions. However, one can also sum over all diagrams with \le n loops. This more sophisticated approach to renormalization, which sums over infinitely many diagrams, may dig a bit deeper into the problems faced by quantum field theories.

If we use this alternate approach for QED we find something surprising. Recall that in renormalization we impose a momentum cutoff \Lambda, essentially ignoring waves of wavelength less than \hbar/\Lambda, and use this to work out a relation between the electron’s bare charge e_\mathrm{bare}(\Lambda) and its renormalized charge e_\mathrm{ren}. We try to choose e_\mathrm{bare}(\Lambda) so that e_\mathrm{ren} equals the electron’s experimentally observed charge e. If we sum over Feynman diagrams with \le n vertices this is always possible. But if we sum over Feynman diagrams with at most one loop, it ceases to be possible when \Lambda reaches a certain very large value, namely

\displaystyle{  \Lambda \; = \; \exp\left(\frac{3 \pi}{2 \alpha} + \frac{5}{6}\right) m_e c \; \approx \; e^{647} m_e c}

According to this one-loop calculation, the electron’s bare charge becomes infinite at this point! This value of \Lambda is known as a ‘Landau pole’, since it was first noticed in about 1954 by Lev Landau and his colleagues.

What is the meaning of the Landau pole? We said that poetically speaking, the bare charge of the electron is the charge we would see if we could strip off the electron’s virtual particle cloud. A somewhat more precise statement is that e_\mathrm{bare}(\Lambda) is the charge we would see if we collided two electrons head-on with a momentum on the order of \Lambda. In this collision, there is a good chance that the electrons would come within a distance of \hbar/\Lambda from each other. The larger \Lambda is, the smaller this distance is, and the more we penetrate past the effects of the virtual particle cloud, whose polarization ‘shields’ the electron’s charge. Thus, the larger \Lambda is, the larger e_\mathrm{bare}(\Lambda) becomes.

So far, all this makes good sense: physicists have done experiments to actually measure this effect. The problem is that according to a one-loop calculation, e_\mathrm{bare}(\Lambda) becomes infinite when \Lambda reaches a certain huge value.

Of course, summing only over diagrams with at most one loop is not definitive. Physicists have repeated the calculation summing over diagrams with \le 2 loops, and again found a Landau pole. But again, this is not definitive. Nobody knows what will happen as we consider diagrams with more and more loops. Moreover, the distance \hbar/\Lambda corresponding to the Landau pole is absurdly small! For the one-loop calculation quoted above, this distance is about

\displaystyle{  e^{-647} \frac{\hbar}{m_e c} \; \approx \; 6 \cdot 10^{-294}\, \mathrm{meters} }

This is hundreds of orders of magnitude smaller than the length scales physicists have explored so far. Currently the Large Hadron Collider can probe energies up to about 10 TeV, and thus distances down to about 2 \cdot 10^{-20} meters, or about 0.00002 times the radius of a proton. Quantum field theory seems to be holding up very well so far, but no reasonable physicist would be willing to extrapolate this success down to 6 \cdot 10^{-294} meters, and few seem upset at problems that manifest themselves only at such a short distance scale.
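The numbers quoted in the last few paragraphs are easy to reproduce. The following sketch evaluates the one-loop exponent, the corresponding distance, and the LHC comparison scale (physical constants hard-coded; the unit conversions are mine):

```python
import math

alpha = 1.0 / 137.036                    # fine structure constant
hbar  = 1.054571817e-34                  # reduced Planck constant, J*s
c     = 299792458.0                      # speed of light, m/s
m_e   = 9.1093837015e-31                 # electron mass, kg
eV    = 1.602176634e-19                  # J per eV

# One-loop Landau pole: Lambda = exp(3*pi/(2*alpha) + 5/6) * m_e * c.
exponent = 3.0 * math.pi / (2.0 * alpha) + 5.0 / 6.0
print(exponent)                          # about 647, as in the text

# Corresponding distance hbar / Lambda: the reduced Compton wavelength
# hbar / (m_e * c) ~ 3.86e-13 m, suppressed by exp(-exponent).
d_landau = hbar / (m_e * c) * math.exp(-exponent)
print(d_landau)                          # about 6e-294 meters

# LHC comparison: distance probed at ~10 TeV is hbar * c / E.
d_lhc = hbar * c / (10e12 * eV)
print(d_lhc)                             # about 2e-20 meters
```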

Indeed, attitudes on renormalization have changed significantly since 1948, when Feynman, Schwinger and Tomonaga developed it for QED. At first it seemed a bit like a trick. Later, as the success of renormalization became ever more thoroughly confirmed, it became accepted. However, some of the most thoughtful physicists remained worried. In 1975, Dirac said:

Most physicists are very satisfied with the situation. They say: ‘Quantum electrodynamics is a good theory and we do not have to worry about it any more.’ I must say that I am very dissatisfied with the situation, because this so-called ‘good theory’ does involve neglecting infinities which appear in its equations, neglecting them in an arbitrary way. This is just not sensible mathematics. Sensible mathematics involves neglecting a quantity when it is small—not neglecting it just because it is infinitely great and you do not want it!

As late as 1985, Feynman wrote:

The shell game that we play [. . .] is technically called ‘renormalization’. But no matter how clever the word, it is still what I would call a dippy process! Having to resort to such hocus-pocus has prevented us from proving that the theory of quantum electrodynamics is mathematically self-consistent. It’s surprising that the theory still hasn’t been proved self-consistent one way or the other by now; I suspect that renormalization is not mathematically legitimate.

By now renormalization is thoroughly accepted among physicists. The key move was a change of attitude emphasized by Kenneth Wilson in the 1970s. Instead of treating quantum field theory as the correct description of physics at arbitrarily large energy-momenta, we can assume it is only an approximation. For renormalizable theories, one can argue that even if quantum field theory is inaccurate at large energy-momenta, the corrections become negligible at smaller, experimentally accessible energy-momenta. If so, instead of seeking to take the \Lambda \to \infty limit, we can use renormalization to relate bare quantities at some large but finite value of \Lambda to experimentally observed quantities.

From this practical-minded viewpoint, the possibility of a Landau pole in QED is less important than the behavior of the Standard Model. Physicists believe that the Standard Model would suffer from a Landau pole at momenta low enough to cause serious problems if the Higgs boson were considerably more massive than it actually is. Thus, they were relieved when the Higgs was discovered at the Large Hadron Collider with a mass of about 125 GeV/c². However, the Standard Model may still suffer from a Landau pole at high momenta, as well as an instability of the vacuum.
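
The Landau pole itself is easy to see numerically from the standard one-loop running of the QED coupling, \(1/\alpha(\mu) = 1/\alpha(\mu_0) - \frac{2}{3\pi}\ln(\mu/\mu_0)\) (single electron loop; the scales below are purely illustrative):

```python
import math

def alpha_one_loop(mu, mu0, alpha0):
    """One-loop QED running: 1/alpha(mu) = 1/alpha0 - (2/(3*pi)) * ln(mu/mu0)."""
    inv = 1.0 / alpha0 - (2.0 / (3.0 * math.pi)) * math.log(mu / mu0)
    if inv <= 0.0:
        raise ValueError("past the Landau pole")
    return 1.0 / inv

alpha0 = 1.0 / 137.036  # fine-structure constant at the reference scale

# The inverse coupling hits zero at ln(mu/mu0) = 3*pi/(2*alpha0):
landau_log = 3.0 * math.pi / (2.0 * alpha0)
print(landau_log)  # about 646, i.e. a pole at mu0 * e^646 -- absurdly far away

# Below the pole the coupling grows only logarithmically:
print(alpha_one_loop(1e4, 1.0, alpha0))
```

In the Wilsonian spirit described above, one simply never takes the cutoff anywhere near that scale, so the pole causes no practical trouble for QED.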

Regardless of practicalities, for the mathematical physicist the question of whether QED and the Standard Model can be made into well-defined mathematical structures that obey the axioms of quantum field theory remains an open problem of great interest. Most physicists believe that this can be done for pure Yang–Mills theory, but actually proving it is the first step towards winning $1,000,000 from the Clay Mathematics Institute.

by John Baez at September 21, 2016 01:00 AM

September 19, 2016

Clifford V. Johnson - Asymptotia

Kitchen Design…

(Click for larger view.)
Apparently I was designing a kitchen recently. Yes, but not one I intend to build in the physical world. It's the setting (in part) for a new story I'm working on for the book. The everyday household is a great place to have a science conversation, by the way, and this is what we will see in this story. It might be one of the most important conversations in the book in some sense.

This story is meant to be done in a looser, quicker style, and there I go again with the ridiculous level of detail... Just to get a sense of how ridiculous I'm being, note that this is not a page, but a small panel within a page of several.

The page establishes the overall setting, and hopefully roots you [...] Click to continue reading this post

The post Kitchen Design… appeared first on Asymptotia.

by Clifford at September 19, 2016 10:32 PM

Robert Helling - atdotde

Brute forcing Crazy Game Puzzles
In the 1980s, as a kid I loved my Crazy Turtles Puzzle ("Das verrückte Schildkrötenspiel"). For a number of variations, see here or here.

I had completely forgotten about those, but a few days ago, I saw a self-made reincarnation when staying at a friend's house:

I tried for a few minutes to solve it, unsuccessfully (in case it is not clear: you are supposed to arrange the nine tiles in a square such that they form color-matching arrows wherever they meet).

So I took the picture above with the plan to either try a bit more at home or to write a program to solve it. Yesterday, I had about an hour and did the latter. I am a bit proud of the implementation, and in particular of the fact that I essentially wrote a correct program on the first attempt: it produced the unique solution the first time I executed it. So, here I share it:


# Color code (back of arrow / tip of arrow; matching pairs sum to 9):
# 1 red    8
# 2 yellow 7
# 3 green  6
# 4 blue   5

@karten = (7151, 6754, 4382, 2835, 5216, 2615, 2348, 8253, 4786);

foreach $karte (0..8) {
    $farbe[$karte] = [split //, $karten[$karte]];
}

sub ausprobieren {
    my $pos = shift;

    foreach my $karte (0..8) {
        next if $benutzt[$karte];
        foreach my $dreh (0..3) {
            if ($pos % 3) {
                # Not in the leftmost column: match the right edge of the left neighbour
                $suche = 9 - $farbe[$gelegt[$pos - 1]]->[(1 - $drehung[$gelegt[$pos - 1]]) % 4];
                next if $farbe[$karte]->[(3 - $dreh) % 4] != $suche;
            }
            if ($pos >= 3) {
                # Not in the top row: match the bottom edge of the upper neighbour
                $suche = 9 - $farbe[$gelegt[$pos - 3]]->[(2 - $drehung[$gelegt[$pos - 3]]) % 4];
                next if $farbe[$karte]->[(4 - $dreh) % 4] != $suche;
            }

            $benutzt[$karte] = 1;
            $gelegt[$pos] = $karte;
            $drehung[$karte] = $dreh;

            if ($pos == 8) {
                print "Fertig!\n";
                for $l (0..8) {
                    print "$gelegt[$l] $drehung[$gelegt[$l]]\n";
                }
            } else {
                &ausprobieren($pos + 1);
            }
            $benutzt[$karte] = 0;
        }
    }
}

&ausprobieren(0);
Sorry for the variable names in German, but the idea should be clear. Regarding the implementation: red, yellow, green and blue backs of arrows get the numbers 1, 2, 3, 4 respectively, and pointy sides of arrows 8, 7, 6, 5 (so matching combinations sum to 9).

It implements a depth-first tree search where tile positions (numbered 0 to 8) are tried left to right, top to bottom. So tile $n$ shares a vertical edge with tile $n-1$ unless its number is 0 mod 3 (leftmost column), and it shares a horizontal edge with tile $n-3$ unless $n$ is less than 3, which means it is in the first row.

It tries rotating tiles by 0 to 3 times 90 degrees clockwise, so finding which arrow to match with a neighboring tile can also be computed with mod 4 arithmetic.
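
For comparison, the same depth-first search with the mod-4 edge arithmetic can be sketched in Python (a re-implementation mirroring the Perl logic, using the same digit encoding and the tile numbers from @karten above):

```python
# Same encoding as the Perl version: each card's four digits are its edge
# values; two touching edges match when their digits sum to 9.
KARTEN = [7151, 6754, 4382, 2835, 5216, 2615, 2348, 8253, 4786]
TILES = [[int(c) for c in str(k)] for k in KARTEN]

def edge(tile, rot, side):
    """Edge digit on side 0..3 (top, right, bottom, left) after rotating
    the tile `rot` quarter-turns clockwise."""
    return TILES[tile][(side - rot) % 4]

def solve(placed=(), rots=()):
    """Depth-first search over positions 0..8, left to right, top to bottom.
    Returns a list of (tile, rotation) pairs for the first solution found."""
    pos = len(placed)
    if pos == 9:
        return list(zip(placed, rots))
    for t in range(9):
        if t in placed:
            continue
        for r in range(4):
            # Left edge must match the right edge of the left neighbour.
            if pos % 3 and edge(t, r, 3) + edge(placed[pos-1], rots[pos-1], 1) != 9:
                continue
            # Top edge must match the bottom edge of the upper neighbour.
            if pos >= 3 and edge(t, r, 0) + edge(placed[pos-3], rots[pos-3], 2) != 9:
                continue
            sol = solve(placed + (t,), rots + (r,))
            if sol:
                return sol
    return None

print(solve())
```

The tuple-passing style makes backtracking implicit: abandoning a branch simply discards the extended tuples, so no `$benutzt`-style bookkeeping is needed.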

by Robert Helling at September 19, 2016 07:43 PM

Clifford V. Johnson - Asymptotia

Breaking, not Braking

Well, that happened. I’ve not, at least as I recollect, written a breakup letter before…until now. It had the usual “It’s not you it’s me…”, “we’ve grown apart…” sorts of phrases. And they were all well meant. This was written to my publisher, I hasten to add! Over the last … Click to continue reading this post

The post Breaking, not Braking appeared first on Asymptotia.

by Clifford at September 19, 2016 07:02 PM

The n-Category Cafe

Logical Uncertainty and Logical Induction

Quick - what's the $10^{100}$th digit of $\pi$?

If you’re anything like me, you have some uncertainty about the answer to this question. In fact, your uncertainty probably takes the following form: you assign a subjective probability of about <semantics>110<annotation encoding="application/x-tex">\frac{1}{10}</annotation></semantics> to this digit being any one of the possible values <semantics>0,1,2,9<annotation encoding="application/x-tex">0, 1, 2, \dots 9</annotation></semantics>. This is despite the fact that

  • the normality of $\pi$ in base $10$ is a wide open problem, and
  • even if it weren't, nothing random is happening; the $10^{100}$th digit of $\pi$ is a particular digit, not a randomly selected one, and its having a particular value is a mathematical fact which is either true or false.

If you’re bothered by this state of affairs, you could try to resolve it by computing the <semantics>10 100<annotation encoding="application/x-tex">10^{100}</annotation></semantics>th digit of <semantics>π<annotation encoding="application/x-tex">\pi</annotation></semantics>, but as far as I know nobody has the computational resources to do this in a reasonable amount of time.

Because of this lack of computational resources, among other things, you and I aren’t logically omniscient; we don’t have access to all of the logical consequences of our beliefs. The kind of uncertainty we have about mathematical questions that are too difficult for us to settle one way or another right this moment is logical uncertainty, and standard accounts of how to have uncertain beliefs (for example, assign probabilities and update them using Bayes’ theorem) don’t capture it.

Nevertheless, somehow mathematicians manage to have lots of beliefs about how likely mathematical conjectures such as the Riemann hypothesis are to be true, and even about simpler but still difficult mathematical questions such as how likely some very large complicated number $N$ is to be prime (a reasonable guess, before we've done any divisibility tests, is about $\frac{1}{\ln N}$ by the prime number theorem). In some contexts we have even more sophisticated guesses, like the Cohen-Lenstra heuristics for assigning probabilities to mathematical statements such as "the class number of such-and-such complicated number field has $p$-part equal to so-and-so."
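
The $\frac{1}{\ln N}$ heuristic is easy to check numerically on a modest range (a toy sketch; the window size and starting point are arbitrary choices):

```python
import math

def is_prime(n):
    """Trial division; fine for the small illustrative range used here."""
    if n < 2:
        return False
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return False
    return True

# Prime number theorem heuristic: a "random" integer near N is prime
# with probability about 1/ln(N).
N = 10**6
window = range(N, N + 10**4)
actual = sum(is_prime(n) for n in window) / len(window)
heuristic = 1.0 / math.log(N)
print(actual, heuristic)  # both around 0.07
```

Of course, each individual number in the window is either prime or it isn't; the heuristic quantifies our uncertainty before we test, which is exactly the flavor of logical uncertainty at issue here.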

In general, what criteria might we use to judge an assignment of probabilities to mathematical statements as reasonable or unreasonable? Given some criteria, how easy is it to find a way to assign probabilities to mathematical statements that actually satisfies them? These fundamental questions are the subject of the following paper:

Scott Garrabrant, Tsvi Benson-Tilsen, Andrew Critch, Nate Soares, and Jessica Taylor, Logical Induction. ArXiv:1609.03543.

Loosely speaking, in this paper the authors

  • describe a criterion called logical induction that an assignment of probabilities to mathematical statements could satisfy,
  • show that logical induction implies many other desirable criteria, some of which have previously appeared in the literature, and
  • prove that a computable logical inductor (an algorithm producing probability assignments satisfying logical induction) exists.

Logical induction is a weak “no Dutch book” condition; the idea is that a logical inductor makes bets about which statements are true or false, and does so in a way that doesn’t lose it too much money over time.

A warmup

Before describing logical induction, let me describe a different and more naive criterion you could ask for, but in fact don't want to ask for because it's too strong. Let $\varphi \mapsto \mathbb{P}(\varphi)$ be an assignment of probabilities to statements in some first-order language; for example, we might want to assign probabilities to statements in the language of Peano arithmetic (PA), conditioned on the axioms of PA being true (which means having probability $1$). Say that such an assignment $\varphi \mapsto \mathbb{P}(\varphi)$ is coherent if

  • $\mathbb{P}(\top) = 1$.
  • If $\varphi_1$ is equivalent to $\varphi_2$, then $\mathbb{P}(\varphi_1) = \mathbb{P}(\varphi_2)$.
  • $\mathbb{P}(\varphi_1) = \mathbb{P}(\varphi_1 \wedge \varphi_2) + \mathbb{P}(\varphi_1 \wedge \neg \varphi_2)$.

These axioms together imply various other natural-looking conditions; for example, setting $\varphi_1 = \top$ in the third axiom, we get that $\mathbb{P}(\varphi_2) + \mathbb{P}(\neg \varphi_2) = 1$. Various other axiomatizations of coherence are possible.

Theorem: A probability assignment such that $\mathbb{P}(\varphi) = 1$ for all statements $\varphi$ in a first-order theory $T$ is coherent iff there is a probability measure on models of $T$ such that $\mathbb{P}(\varphi)$ is the probability that $\varphi$ is true in a random model.

This theorem is a logical counterpart of the Riesz-Markov-Kakutani representation theorem relating probability distributions to linear functionals on spaces of functions; I believe it is due to Gaifman.
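
As a toy illustration of this theorem (not the construction from the paper; the atoms and weights here are invented), take models to be truth assignments to two propositional atoms, choose a probability measure on them, and check that the induced assignment satisfies the coherence axioms above:

```python
from itertools import product

# "Models" are truth assignments to two atoms p, q. A probability measure
# over models induces P(phi) = probability that phi is true in a random
# model; such an assignment is automatically coherent.
models = list(product([False, True], repeat=2))  # (p, q) pairs
measure = {m: w for m, w in zip(models, [0.125, 0.125, 0.25, 0.5])}

def P(phi):
    """Probability that the sentence (a boolean function of the model) holds."""
    return sum(w for m, w in measure.items() if phi(m))

p = lambda m: m[0]
q = lambda m: m[1]
top = lambda m: True

# The three coherence axioms from the text:
assert P(top) == 1.0
assert P(lambda m: p(m) or not p(m)) == P(top)  # equivalent sentences agree
assert P(p) == P(lambda m: p(m) and q(m)) + P(lambda m: p(m) and not q(m))
# Derived condition: P(q) + P(not q) = 1
assert P(q) + P(lambda m: not q(m)) == 1.0
print("coherence axioms hold")
```

(The weights are dyadic rationals so the floating-point sums are exact.) Note that in this propositional toy world every model decides every sentence, which is precisely why coherence forces logical omniscience.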

For example, if $T$ is PA, then the sort of uncertainty that a coherent probability assignment conditioned on PA captures is uncertainty about which of the various first-order models of PA is the "true" natural numbers. However, coherent probability assignments are still logically omniscient: syntactically, every provable statement is assigned probability $1$ because they're all equivalent to $\top$, and semantically, provable statements are true in every model. In particular, coherence is too strong to capture uncertainty about the digits of $\pi$.

Coherent probability assignments can update over time whenever they learn that some statement is true which they haven't assigned probability $1$ to; for example, if you start by believing PA and then come to also believe that PA is consistent, then conditioning on that belief will cause your probability distribution over models to exclude models of PA where PA is inconsistent. But this doesn't capture the kind of updating a non-logically omniscient reasoner like you or me actually does, where our beliefs about mathematics can change solely because we've thought a bit longer and proven some statements that we didn't previously know (for example, about the values of more and more digits of $\pi$).

Logical induction

The framework of logical induction is for describing the above kind of updating, based solely on proving more statements. It takes as input a deductive process which is slowly producing proofs of statements over time (for example, of theorems in PA), and assigns probabilities to statements that haven’t been proven yet. Remarkably, it’s able to do this in a way that eventually outpaces the deductive process, assigning high probabilities to true statements long before they are proven (see Theorem 4.2.1).

So how does logical induction work? The coherence axioms above can be justified by Dutch book arguments, following Ramsey and de Finetti, which loosely say that a bookie can’t offer a coherent reasoner a bet about mathematical statements which they will take but which is in fact guaranteed to lose them money. But this is much too strong a requirement for a reasoner who is not logically omniscient. The logical induction criterion is a weaker version of this condition; we only require that an efficiently computable bookie can’t make arbitrarily large amounts of money by betting with a logical inductor about mathematical statements unless it’s willing to take on arbitrarily large amounts of risk (see Definition 3.0.1).

This turns out to be a surprisingly useful condition to require, loosely speaking because it corresponds to being able to "notice patterns" in mathematical statements even if we can't prove anything about them yet. A logical inductor has to be able to notice patterns that could otherwise be used by an efficiently computable bookie to exploit the inductor; for example, a logical inductor eventually assigns probability about $\frac{1}{10}$ to claims that a very large digit of $\pi$ has a particular value, intuitively because otherwise a bookie could continue to bet with the logical inductor about more and more digits of $\pi$, making money each time (see Theorem 4.4.2).

Logical induction has many other desirable properties, some of which are described in this blog post. One of the more remarkable properties is that because logical inductors are computable, they can reason about themselves, and hence assign probabilities to statements about the probabilities they assign. Despite the possibility of running into self-referential paradoxes, logical inductors eventually have accurate beliefs about their own beliefs (see Theorem 4.11.1).

Overall I’m excited about this circle of ideas and hope that they get more attention from the mathematical community. Speaking very speculatively, it would be great if logical induction shed some light on the role of probability in mathematics more generally - for example, in the use of informal probabilistic arguments for or against difficult conjectures. A recent example is Boklan and Conway’s probabilistic arguments in favor of the conjecture that there are no Fermat primes beyond those currently known.

I’ve made several imprecise claims about the contents of the paper above, so please read it to get the precise claims!

by qchu at September 19, 2016 06:18 PM

Lubos Motl - string vacua and pheno

Anti-string crackpots being emulated by a critic of macroeconomics
While only a few thousand people in the world – about one part per million – have some decent idea about what string theory is, the term "string theory" has undoubtedly penetrated the mass culture. The technical name of the theory of everything is being used to promote concerts, "string theory" is used in the title of books about tennis, and visual arts have lots of "string theory" in them, too.

But the penetration is so deep that even the self-evidently marginal figures such as the anti-string crackpots have inspired some followers in totally different parts of the human activity. In particular, five days ago, a man named Paul Romer wrote a 26-page-long rant named
The Trouble With Macroeconomics
See also and Power Line Blog for third parties' perspective.

If you think that the title is surprisingly similar to the title of a book against physics, "The Trouble With Physics", well, you are right. It's no coincidence. Building on the example of the notorious anti-physics jihadist named Lee Smolin, Paul Romer attacks most of macroeconomics and what it's been doing since the 1970s.

To be sure, Paul Romer is a spoiled brat from the family of a left-wing Colorado governor. Probably because he grew into just another economist who always and "flawlessly" advocates the distortion of the markets by the governments and international organizations as well as unlimited loose monetary policies, he was named the chief economist of the World Bank two months ago.

Clearly, the opinion that macroeconomics is a "post-real" pile of šit is not a problem for a pseudo-intellectual with the "desired" ideology who wants to be chosen as the chief economist of a world bank – in fact, The World Bank.

Now, I think that Paul Romer is pretty much a spherical aßhole – it's an aßhole who looks like that regardless of the direction from which you observe it. He is absolutely deluded about physical sciences and I happen to think that he is largely deluded about economics, too.

But unlike him – and I think this is even more important – most of this "spherical shape" is a coincidence. There exists no law of Nature that would guarantee that someone who is deluded about physical sciences must be deluded about economics, too – or vice versa. The correlation between the two is probably positive – because obnoxiously stupid people tend to be wrong about almost everything – but macroeconomics and particle physics are sufficiently far apart.

Does he really believe that he can find legitimate arguments about macroeconomics by basically copying a book about particle physics, especially when it is a crackpot's popular book? Even if he tried to be inspired by the best physics research out there, it would probably be impossible to use it in any substantially positive way to advance economics.

The abstract of Romer's paper leaves no doubt that the chief economist of the World Bank is proud to be a small piece of an excrement attached to the appendix of crackpot Lee Smolin:
For more than three decades, macroeconomics has gone backwards. The treatment of identification now is no more credible than in the early 1970s but escapes challenge because it is so much more opaque. Macroeconomic theorists dismiss mere facts by feigning an obtuse ignorance about such simple assertions as "tight monetary policy can cause a recession." Their models attribute fluctuations in aggregate variables to imaginary causal forces that are not influenced by the action that any person takes. A parallel with string theory from physics hints at a general failure mode of science that is triggered when respect for highly regarded leaders evolves into a deference to authority that displaces objective fact from its position as the ultimate determinant of scientific truth.
Concerning the first sentence, the idea that all of macroeconomics has gone "backwards" for over 30 years is laughable. There are many people doing macroeconomics and they may be more right or less right. But there exist good ones – even though there can't be a consensus about who these people are – who focus on the papers of other good people, and this set of good macroeconomists obviously knows everything important that was known 30+ years ago, plus something more.

Be sure that my opinion about the value and reliability of economics and macroeconomics is much less enthusiastic than my opinions about physics and high-energy physics, but at some very general level, these two cases undoubtedly are analogous. The available empirical dataset is more extensive than 30+ years ago, the mathematical and computational methods are richer, and a longer time has been available to collect and compare competing hypotheses. Clearly, the actual source of Romer's "trouble with macroeconomics" is that it often produces scholarly work that disagrees with his predetermined – and largely ideologically predetermined – prejudices. But it's not the purpose of science – not even economics – to confirm someone's prejudices.

The following sentence makes the true motive of Romer's jihad rather transparent:
Macroeconomic theorists dismiss mere facts by feigning an obtuse ignorance about such simple assertions as "tight monetary policy can cause a recession."
You don't need a PhD to see that what he would actually like would be to ban any research in economics that disputes the dogma that "monetary policy should never be tight". But as every sane economist knows, there are damn good reasons for the monetary policy of a central bank to tighten at a certain point.

The Federal Reserve is rather likely to continue its tightening in the coming year – and not necessarily just by the smallest 0.25% steps – and the main reason is clear: Inflation began to re-emerge in the U.S. Also, a healthy economy simply does lead to the companies' and people's desire to borrow which must be or may be responded to by increasing the interest rates – either centrally, which is a sensible policy, or at the commercial level, which reflects the lenders' desire to be profitable. After all, millions of people did feel that the very low or zero or negative rates were a sign of something unhealthy and the return to the safely positive territory may be good news for the sentiment of those folks. In Europe, something like that will surely happen at some point, perhaps a year after the U.S. An economist who thinks that "loose is good" and "tight is bad" is simply an unbalanced ideologue who must have misunderstood something very important.

And there are reasons why numerous macroeconomics papers dispute even the weaker dogma that "a tight monetary policy can cause a recession". Just to be sure, if you define recession in the standard way – as two quarters of a negative growth, measured in the usual ways – or something close to it, I do believe that loose monetary policy generally reduces the probability of a recession in coming quarters.

But a loose monetary policy always involves some deformation of the market, and whenever that's the case, the GDP numbers – measured in the straightforward ways – can no longer be uncritically trusted as a reliable source on the health of the economy and of the people's well-being. These are subtle things. Economists may also have very good reasons not to be afraid of a few years of mild deflation etc. Most of the Western economies saw deflation in the recent two years or so, and I think that almost everyone sees that those seemed like healthy, sometimes wonderfully healthy, economic conditions. A very low inflation is great because you feel that for the same money, you will always be able to buy the same things – but you are likely to have more money in the future. It makes optimistic planning for the future much more transparent. The idea that all people are eager to go on a shopping spree whenever inflation becomes substantial – because they feel very well – is at least oversimplified.

So Romer really wants to ban any support for "tight monetary policies" and he is only inventing illogical slurs to make their advocates look bad. He is analogous to Lee Smolin who is inventing illogical slurs, adjectives, and stories against those who want to do and who do high-energy physics right, with the help of the state-of-the-art mathematical and physical methods.

As we can see in the bulk of Romer's paper, he is mainly fighting against theories that certain changes of the economy were ignited by what he calls "imaginary shocks":
Their models attribute fluctuations in aggregate variables to imaginary causal forces that are not influenced by the action that any person takes.
If I simplify just a little bit, his belief – repeated often in the paper – is that the economy is a completely deterministic system that only depends on people's (and he really means powerful people's) decisions. But that is at least sometimes not the case. It's extremely important for economists – and macroeconomists – to consider various hypotheses that describe the observations. Some of the causes according to some of the theories may look "imaginary" to advocates of others. But that doesn't mean that they are wrong. The whole philosophies may be different (compare with natural and man-made climate change) and it's just wrong to pick the winner before the research is (and careful, impartial comparisons are) actually done.

There may be random events, random changes of the mood etc. that are the actual reasons for many things. One doesn't want one's theory to be all about some arbitrary, unpredictable, almost supernatural causes. On the other hand, the assumption that all causes in economics are absolutely controllable, measurable, and predictable is rubbish. So a sane economist simply needs to operate somewhere in between. Hardly predictable events sometimes play a role, but within some error margin, a big part of economic events is predictable, and a good economist simply has to master the causal forces.

I am convinced that every sane economist – and thinker – would agree with me. One wants to make the economic theories as "deterministic" or physics-like as possible; but they cannot be made entirely "deterministic", especially because the individual people's – and collective – behavior often depends on quirks, changes of the mood, emotions etc. After all, even physics – the most "clean" discipline of sciences about the world around us – has known that the phenomena aren't really deterministic, not even at the fundamental level, since 1925.

Paul Romer boasts about his view that everything is entirely deterministic – except that he obviously doesn't have any theory that could actually make such fully deterministic predictions. Instead of such a theory, he offers six slurs for those macroeconomists whom he dislikes:
  • A general type of phlogiston that increases the quantity of consumption goods
    produced by given inputs
  • An "investment-specific" type of phlogiston that increases the quantity of
    capital goods produced by given inputs
  • A troll who makes random changes to the wages paid to all workers
  • A gremlin who makes random changes to the price of output
  • Aether, which increases the risk preference of investors
  • Caloric, which makes people want less leisure
So his "knowledge" of physics amounts to six mostly physics-based words – namely phlogiston 1, phlogiston 2, a troll, a gremlin, aether, and caloric – which he uses as insults. Be sure that I could also offer 6 different colorful slurs for Romer but unlike him, I don't think that such slurs can represent the beef of a legitimate argument. The alternative theories also have some causes and we could call these causes "Gargamel" or "Rumpelstiltskin", but listeners above 6 years of age and 100 points of IQ sort of know why this is not genuine evidence one way or another. Note that even if some economic changes are explained as consequences of particular people's decisions, that still usually fails to explain why the people made the decisions. Some uncertainty at least about some causes will always be present in social sciences – including quantitative social sciences such as economics.

Like Lee Smolin, what he's doing is just insulting people and inventing unflattering slogans – whose correlation with the truth is basically non-existent. The following pages are full of aether, phlogiston, trolls, and gremlins while claiming to decide the fate of numerous serious papers on economics. Even though there are probably some memorable, well-defined pieces of šit with sharp edges over there, I don't plan to swim in that particular cesspool.

There are way too many things – mostly deep misconceptions – on those 26 pages of Romer's paper. Some of them are the same as those that I have been debunking over the years – both in the socio-philosophical blog posts as well as the physics-philosophical ones. But let me only pick a few examples.

On Page 5, Romer attacks Milton Friedman's F-twist. Romer just doesn't like this important idea that I described and defended back in 2009 (click on the previous sentence).
In response to the observation that the shocks are imaginary, a standard defense invokes Milton Friedman’s (1953) methodological assertion from unnamed authority that "the more significant the theory, the more unrealistic the assumptions (p.14)."
By the way, is Milton Friedman himself an "unnamed" authority?

But this Friedman's point simply is true and important – and it's supported by quite some explanations, not by any "authority". When we do anything like science, the initial assumptions may be arbitrarily "crazy" according to some pre-existing prejudices and expectations and the only thing that determines the rating of the theories is the agreement of the final predictions with the observations.

And in fact, the more shocking and prejudice-breaking the assumptions are, Friedman said, and the less likely the agreement could have looked a priori, the more important the advance is and the more seriously we should treat it when the predictions happen to agree with the observations. That's also why sane physicists consider relativity and quantum mechanics to be true foundations of modern physics. They build on assumptions that may be said to be a priori bizarre. But when things are done carefully, the theories work extremely well, and this combination is what makes the theories even more important.

People like Romer and Smolin don't like this principle because they don't want to rate theories according to their predictions and achievements but according to the agreement of their assumptions with these mediocre apes' medieval, childish prejudices. Isn't the spacetime created out of a classical wooden LEGO? So Lee Smolin will dislike it. Isn't some economic development explained as a consequence of some decision of a wise global banker? Romer will identify the explanation with gremlins, trolls, phlogiston, and aether – even though, I am sure, he doesn't really know what the words mean, why they were considered, and why they are wrong.

The name of Lee Smolin appears 8 times in Romer's "paper". And in all cases, he quotes the crackpot completely uncritically, as if he were a top intellectual. Sorry, Mr Romer, even if you are just an economist, it is still true that if you can't solve the homework problem that asks you to explain why a vast majority of Lee Smolin's book is cr*p, then you are dumb as a doorknob.

A major example. Half of Page 15 of Romer's "paper" is copying some of those slurs by Smolin from Chapter 16. String theorists were said to suffer from:
  1. Tremendous self-confidence
  2. An unusually monolithic community
  3. A sense of identification with the group akin to identification with a religious faith or political platform
  4. A strong sense of the boundary between the group and other experts
  5. A disregard for and disinterest in ideas, opinions, and work of experts who are not part of the group
  6. A tendency to interpret evidence optimistically, to believe exaggerated or incomplete statements of results, and to disregard the possibility that the theory might be wrong
  7. A lack of appreciation for the extent to which a research program ought to involve risk
These are just insults or accusations, and it's spectacularly obvious that all of them apply much more accurately to Romer and Smolin than to string theorists – and, I am a bit less certain here, than to the macroeconomists who disagree with Romer.

First of all, almost all string theorists are extremely humble and usually shy people – which is the actual reason why many of them have had real problems getting jobs. The accusation #1 is self-evident rubbish.

The accusation #2 is nonsense, too. The string theory community has many overlapping subfields (phenomenology, including its competing braneworld/heterotic/F-theory/G2 sub-subfields; formal, mathematically motivated work; applications of AdS/CFT) and significant differences about many issues – the anthropic principle and the existence and describability of the black hole interior are two major examples from the last 2 decades. That's a giant amount of intellectual diversity if you realize that there are fewer than 1,000 "currently paid professional" string theorists in the world – fewer than 1 person among 7 million. On the other hand, there is some agreement about issues that can be seemingly reliably, if not rigorously, demonstrated. So science simply never has the "anything goes" postmodern attitude. But to single out string theorists (and, I think, also macroeconomists) as examples of a "monolithic community" is just silly.

In the item #3, he talks about a fanatical, religious identification with the community. People who know me a little bit – and who know that I almost certainly belong among the world's top 10 most individualistic and independent people – know a clear counterexample. But the accusation is silly more broadly. Many string theorists also identify with very different types of folks. And even the political diversity among string theorists is a bit higher than in the general Academia. At least you know that I am not really left-wing, to put it mildly. But there are other, somewhat less spectacular, examples.

Concerning #4, it is true that there's a strong sense of a boundary between string theorists and non-string theorists. But this "sense" exists because the very sharp boundary indeed exists. String theory – and the expertise needed to understand and investigate it – is similar to a skyscraper with many floors. One really needs talent and patience to build all of them (by the floors, I mean the general "physics subjects" that depend on each other; string theory is the "last one") and to get to the roof. Once you're on the roof, you see the difference between the skyscraper and the nearby valleys really sharply. The higher the skyscraper is, the more it differs from the lowland that surrounds it. String theory is the highest skyscraper in all of science, so the "sense" of the boundary between it and the surrounding lowlands is naturally the strongest one, indeed.

Top string experts are generally uninterested in the work of non-members, as #5 says, because they can see that those ideas just don't work. They are igloos – sometimes demolished igloos – that simply look like minor structures in the background from the viewpoint of the skyscraper's roof. A Romer or a Smolin may scream that it's politically incorrect to point out that string theory is more coherent and string theorists are smarter and have learned many more things that depend on each other etc. – except that whether or not these things are politically incorrect, they're true. And this truth is as self-evident to string theorists as the fact that you're pretty high up if you're on the roof of the Empire State Building. String theorists usually don't emphasize those things – after all, I believe that I am the only person in the world who systematically does so – but what annoys people like Smolin and Romer is that these things are true. And because these true facts imply that neither Smolin nor Romer is anywhere close to the smartest people on Earth, they attack string theorists over this fact. But this fact isn't string theorists' fault.

He says in #6 that "evidence is interpreted optimistically". This whole term "optimistically" reflects Romer's complete misunderstanding of how cutting-edge physics works. Physical sciences – like mathematics – work hard to separate statements into right and wrong, not pessimistic and optimistic. There's no canonical way to attach the labels "optimistic" and "pessimistic" to scientific statements. If someone says that a set of arguments will be found that will invalidate string theory and explain the world using some alternative theory with a unique vacuum etc., Romer may call that "pessimistic" for string theorists. Except that string theorists would be thrilled if this were possible. So making such a prediction would be immensely optimistic even according to almost all string theorists. The problem with this assertion is that it is almost certainly wrong. There doesn't exist a tiny glimpse of evidence that something like that is possible. String theorists would love to see some groundbreaking progress that totally changes the situation of the field, except that changes of the most radical magnitude don't take place too often, and when someone talks about such revolutions, it isn't the same as actually igniting one. So without something that totally disrupts the balance, string theorists – i.e. theoretical physicists who carefully study the foundations of physics beyond quantum field theory – continue to hold the beliefs they have extracted from the evidence that has actually been presented. Of course, string theory's being the only "game in town" when it comes to the description of Nature including QFTs and gravity is one of these conclusions that the experts have drawn.

The point #7 says that string theorists don't appreciate the importance of risk. This is just an absolutely incredible lie, the converse of the truth. Throughout the 1970s, there were just a dozen string theorists who did those spectacular things at the risk of dying of hunger. This existential risk may have gone away in the 1980s and 1990s, but it's largely back. Young ingenious people are studying string theory without knowing whether they will be able to feed themselves for another year. Some of them have worked – and hopefully are working at this moment, as I type this sentence – on some very ambitious projects. It's really the same ambition that Romer and Smolin criticize elsewhere – another reason to say that they're logically inconsistent cranks.

Surprisingly, the words "testable" and "falsifiable" haven't appeared in Romer's text. Well, those were favorite demagogic buzzwords of Mr Peter Woit, the world's second most notorious anti-string crackpot. But Smolin has said similar things himself, too. The final thing I want to say is that it's very ironic for Romer to celebrate this anti-physics demagogy which often complained about the absence of "falsifiability". Why?

Romer's best-known contribution before he became a bureaucrat was being one of a dozen economists who advocated endogenous growth theory, the statement that growth arises from within, from investment in human capital etc., not from external forces (Romer did this work around 1986). Great – to some extent it is obvious, and it's hard to immediately see what they really proposed or discovered.

But it's funny to look at the criticisms of this endogenous theory. There are some "technical" complaints that it incorrectly accounts for the convergence or divergence of incomes in various countries. However, what's particularly amusing is the final paragraph:
Paul Krugman criticized endogenous growth theory as nearly impossible to check by empirical evidence; “too much of it involved making assumptions about how unmeasurable things affected other unmeasurable things.”
Just to be sure, I am in no way endorsing Krugman here. But you may see that Krugman has made the claim that "Romer's theory is unfalsifiable" using words that are basically identical to those used by the anti-string critics against string theory. However, for some reason, Romer has 100% identified himself with the anti-string critics. We may also say that Krugman basically criticizes Romer for using "imaginary causes" – the very same criticism that Romer directs against others! You know, the truth is that every important enough theory contains some causes that may look imaginary to skeptics or to those who haven't internalized or embraced the theory.

As I have emphasized for more than a decade, all the people who trust Smolin's or Woit's criticisms as criticisms that are particularly apt for string theory are brainwashed simpletons. Whenever there is some criticism that may be relevant for somebody, it's always spectacularly clear to any person with at least some observational skills and intelligence that the criticism applies much more accurately to the likes of Smolin, Woit, and indeed, Romer themselves, than it does to string theorists.

Smolins, Woits, and Romers don't do any meaningful research today and they know that they couldn't become influential using this kind of work. So they want to be leaders in the amount of slurs and accusations that they emit and throw at actual active researchers – even if these accusations actually describe themselves much more than they describe anyone else. The world is full of worthless parasites such as Smolin, Woit, and Romer who endorse each other across the fields – plus millions of f*cked-up gullible imbeciles who are inclined to take these offensive lies seriously. Because the amount of stupidity in the world is this overwhelming, one actually needs some love for risk to simply point these things out.

by Luboš Motl at September 19, 2016 02:42 PM

Tommaso Dorigo - Scientificblogging

Are There Two Higgses ? No, And I Won Another Bet!
The 2012 measurements of the Higgs boson, performed by ATLAS and CMS on 7- and 8-TeV datasets collected during Run 1 of the LHC, were a giant triumph of fundamental physics, which conclusively showed the correctness of the theoretical explanation of electroweak symmetry breaking conceived in the 1960s.

The Higgs boson signals found by the experiments were strong and coherent enough to convince physicists as well as the general public, but at the same time the few small inconsistencies unavoidably present in any data sample, driven by statistical fluctuations, were a stimulus for fantasy interpretations. Supersymmetry enthusiasts, in particular, saw the 125 GeV boson as the first found of a set of five. SUSY in fact requires the presence of at least five such states.


by Tommaso Dorigo at September 19, 2016 12:06 PM

John Baez - Azimuth

Struggles with the Continuum (Part 5)

Quantum field theory is the best method we have for describing particles and forces in a way that takes both quantum mechanics and special relativity into account. It makes many wonderfully accurate predictions. And yet, it has embroiled physics in some remarkable problems: struggles with infinities!

I want to sketch some of the key issues in the case of quantum electrodynamics, or ‘QED’. The history of QED has been nicely told here:

• Silvan Schweber, QED and the Men who Made it: Dyson, Feynman, Schwinger, and Tomonaga, Princeton U. Press, Princeton, 1994.

Instead of explaining the history, I will give a very simplified account of the current state of the subject. I hope that experts forgive me for cutting corners and trying to get across the basic ideas at the expense of many technical details. The nonexpert is encouraged to fill in the gaps with the help of some textbooks.

QED involves just one dimensionless parameter, the fine structure constant:

\displaystyle{ \alpha = \frac{1}{4 \pi \epsilon_0} \frac{e^2}{\hbar c} \approx \frac{1}{137.036} }

Here e is the electron charge, \epsilon_0 is the permittivity of the vacuum, \hbar is Planck’s constant and c is the speed of light. We can think of \alpha^{1/2} as a dimensionless version of the electron charge. It says how strongly electrons and photons interact.

Nobody knows why the fine structure constant has the value it does! In computations, we are free to treat it as an adjustable parameter. If we set it to zero, quantum electrodynamics reduces to a free theory, where photons and electrons do not interact with each other. A standard strategy in QED is to take advantage of the fact that the fine structure constant is small and expand answers to physical questions as power series in \alpha^{1/2}. This is called ‘perturbation theory’, and it allows us to exploit our knowledge of free theories.
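As a quick sanity check, the value \alpha \approx 1/137.036 can be reproduced directly from the formula above. The numerical constants below are standard CODATA values in SI units, hard-coded here so the snippet is self-contained:

```python
# Numerical check of the fine structure constant: alpha = e^2 / (4*pi*eps0*hbar*c).
# The constants are CODATA values in SI units, hard-coded for self-containment.
import math

e = 1.602176634e-19      # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
hbar = 1.054571817e-34   # reduced Planck constant, J s
c = 299792458.0          # speed of light, m/s

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(1 / alpha)  # roughly 137.036
```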

One of the main questions we try to answer in QED is this: if we start with some particles with specified energy-momenta in the distant past, what is the probability that they will turn into certain other particles with certain other energy-momenta in the distant future? As usual, we compute this probability by first computing a complex amplitude and then taking the square of its absolute value. The amplitude, in turn, is computed as a power series in \alpha^{1/2}.

The term of order \alpha^{n/2} in this power series is a sum over Feynman diagrams with n vertices. For example, suppose we are computing the amplitude for two electrons with some specified energy-momenta to interact and become two electrons with some other energy-momenta. One Feynman diagram appearing in the answer is this:

Here the electrons exchange a single photon. Since this diagram has two vertices, it contributes a term of order \alpha. The electrons could also exchange two photons:

giving a term of order \alpha^2. A more interesting term of order \alpha^2 is this:

Here the electrons exchange a photon that splits into an electron-positron pair and then recombines. There are infinitely many diagrams with two electrons coming in and two going out. However, there are only finitely many with n vertices. Each of these contributes a term proportional to \alpha^{n/2} to the amplitude.

In general, the external edges of these diagrams correspond to the experimentally observed particles coming in and going out. The internal edges correspond to ‘virtual particles’: that is, particles that are not directly seen, but appear in intermediate steps of a process.

Each of these diagrams is actually a notation for an integral! There are systematic rules for writing down the integral starting from the Feynman diagram. To do this, we first label each edge of the Feynman diagram with an energy-momentum, a variable p \in \mathbb{R}^4. The integrand, which we shall not describe here, is a function of all these energy-momenta. In carrying out the integral, the energy-momenta of the external edges are held fixed, since these correspond to the experimentally observed particles coming in and going out. We integrate over the energy-momenta of the internal edges, which correspond to virtual particles, while requiring that energy-momentum is conserved at each vertex.

However, there is a problem: the integral typically diverges! Whenever a Feynman diagram contains a loop, the energy-momenta of the virtual particles in this loop can be arbitrarily large. Thus, we are integrating over an infinite region. In principle the integral could still converge if the integrand goes to zero fast enough. However, we rarely have such luck.

What does this mean, physically? It means that if we allow virtual particles with arbitrarily large energy-momenta in intermediate steps of a process, there are ‘too many ways for this process to occur’, so the amplitude for this process diverges.

Ultimately, the continuum nature of spacetime is to blame. In quantum mechanics, particles with large momenta are the same as waves with short wavelengths. Allowing light with arbitrarily short wavelengths created the ultraviolet catastrophe in classical electromagnetism. Quantum electromagnetism averted that catastrophe—but the problem returns in a different form as soon as we study the interaction of photons and charged particles.

Luckily, there is a strategy for tackling this problem. The integrals for Feynman diagrams become well-defined if we impose a ‘cutoff’, integrating only over energy-momenta p in some bounded region, say a ball of some large radius \Lambda. In quantum theory, a particle with momentum of magnitude greater than \Lambda is the same as a wave with wavelength less than \hbar/\Lambda. Thus, imposing the cutoff amounts to ignoring waves of short wavelength—and for the same reason, ignoring waves of high frequency. We obtain well-defined answers to physical questions when we do this. Unfortunately the answers depend on \Lambda, and if we let \Lambda \to \infty, they diverge.
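The growth with the cutoff can be seen in a toy example. The snippet below (an illustration only, not an actual QED diagram) integrates a typical one-loop integrand 1/(p^2+m^2)^2 over a 4-dimensional Euclidean ball of radius \Lambda; after the angular integration this reduces to the radial integral \int_0^\Lambda p^3 \, dp/(p^2+m^2)^2, which grows like \log \Lambda:

```python
# Toy cutoff-dependent loop integral: I(L) = \int_0^L p^3 dp / (p^2 + m^2)^2.
# This is the radial part of a 4D Euclidean integral of 1/(p^2+m^2)^2, a
# typical one-loop integrand; I(L) grows like log(L) as the cutoff L -> infinity.
def loop_integral(cutoff, m=1.0, steps=200_000):
    """Midpoint-rule estimate of the radial integral up to the cutoff."""
    h = cutoff / steps
    total = 0.0
    for i in range(steps):
        p = (i + 0.5) * h
        total += p**3 / (p**2 + m**2) ** 2 * h
    return total

for cutoff in (10.0, 100.0, 1000.0):
    print(cutoff, loop_integral(cutoff))
# each factor of 10 in the cutoff adds about ln(10) ~ 2.30 to the integral
```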

However, this is not the correct limiting procedure. Indeed, among the quantities that we can compute using Feynman diagrams are the charge and mass of the electron! Its charge can be computed using diagrams in which an electron emits or absorbs a photon:

Similarly, its mass can be computed using a sum over Feynman diagrams where one electron comes in and one goes out.

The interesting thing is this: to do these calculations, we must start by assuming some charge and mass for the electron—but the charge and mass we get out of these calculations do not equal the masses and charges we put in!

The reason is that virtual particles affect the observed charge and mass of a particle. Heuristically, at least, we should think of an electron as surrounded by a cloud of virtual particles. These contribute to its mass and ‘shield’ its electric field, reducing its observed charge. It takes some work to translate between this heuristic story and actual Feynman diagram calculations, but it can be done.

Thus, there are two different concepts of mass and charge for the electron. The numbers we put into the QED calculations are called the ‘bare’ charge and mass, e_\mathrm{bare} and m_\mathrm{bare}. Poetically speaking, these are the charge and mass we would see if we could strip the electron of its virtual particle cloud and see it in its naked splendor. The numbers we get out of the QED calculations are called the ‘renormalized’ charge and mass, e_\mathrm{ren} and m_\mathrm{ren}. These are computed by doing a sum over Feynman diagrams. So, they take virtual particles into account. These are the charge and mass of the electron clothed in its cloud of virtual particles. It is these quantities, not the bare quantities, that should agree with experiment.

Thus, the correct limiting procedure in QED calculations is a bit subtle. For any value of \Lambda and any choice of e_\mathrm{bare} and m_\mathrm{bare}, we compute e_\mathrm{ren} and m_\mathrm{ren}. The necessary integrals all converge, thanks to the cutoff. We choose e_\mathrm{bare} and m_\mathrm{bare} so that e_\mathrm{ren} and m_\mathrm{ren} agree with the experimentally observed charge and mass of the electron. The bare charge and mass chosen this way depend on \Lambda, so call them e_\mathrm{bare}(\Lambda) and m_\mathrm{bare}(\Lambda).

Next, suppose we want to compute the answer to some other physics problem using QED. We do the calculation with a cutoff \Lambda, using e_\mathrm{bare}(\Lambda) and m_\mathrm{bare}(\Lambda) as the bare charge and mass in our calculation. Then we take the limit \Lambda \to \infty.

In short, rather than simply fixing the bare charge and mass and letting \Lambda \to \infty, we cleverly adjust the bare charge and mass as we take this limit. This procedure is called ‘renormalization’, and it has a complex and fascinating history:

• Laurie M. Brown, ed., Renormalization: From Lorentz to Landau (and Beyond), Springer, Berlin, 2012.
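The logic of the procedure can be caricatured in a few lines. In the toy model below (my own illustration with made-up formulas, not actual QED), an "observable" computed with a cutoff contains a log-divergent piece; tuning the bare parameter at each cutoff so that one observable matches experiment makes a second observable converge as the cutoff grows, even though the bare parameter itself runs off to minus infinity:

```python
# Toy caricature of renormalization (made-up formulas, not a QED computation).
import math

M_OBSERVED = 0.511  # the measured value we renormalize to

def observable_A(m_bare, cutoff):
    # a toy "mass" receiving a log-divergent loop correction
    return m_bare + math.log(cutoff)

def m_bare_of(cutoff):
    # tune the bare parameter so observable_A matches experiment at this cutoff
    return M_OBSERVED - math.log(cutoff)

def observable_B(m_bare, cutoff):
    # a second toy observable: same divergence plus a cutoff-suppressed piece
    return m_bare + math.log(cutoff) + 1.0 / cutoff

for cutoff in (1e2, 1e4, 1e8):
    mb = m_bare_of(cutoff)
    print(cutoff, mb, observable_B(mb, cutoff))
# the bare parameter diverges, but observable_B -> M_OBSERVED as the cutoff grows
```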

There are many technically different ways to carry out renormalization, and our account so far neglects many important issues. Let us mention three of the simplest.

First, besides the classes of Feynman diagrams already mentioned, we must also consider those where one photon goes in and one photon goes out, such as this:

These affect properties of the photon, such as its mass. Since we want the photon to be massless in QED, we have to adjust parameters as we take \Lambda \to \infty to make sure we obtain this result. We must also consider Feynman diagrams where nothing comes in and nothing comes out—so-called ‘vacuum bubbles’—and make these behave correctly as well.

Second, the procedure just described, where we impose a ‘cutoff’ and integrate over energy-momenta p lying in a ball of radius \Lambda, is not invariant under Lorentz transformations. Indeed, any theory featuring a smallest time or smallest distance violates the principles of special relativity: thanks to time dilation and Lorentz contractions, different observers will disagree about times and distances. We could accept that Lorentz invariance is broken by the cutoff and hope that it is restored in the \Lambda \to \infty limit, but physicists prefer to maintain symmetry at every step of the calculation. This requires some new ideas: for example, replacing Minkowski spacetime with 4-dimensional Euclidean space. In 4-dimensional Euclidean space, Lorentz transformations are replaced by rotations, and a ball of radius \Lambda is a rotation-invariant concept. To do their Feynman integrals in Euclidean space, physicists often let time take imaginary values. They do their calculations in this context and then transfer the results back to Minkowski spacetime at the end. Luckily, there are theorems justifying this procedure.

Third, besides infinities that arise from waves with arbitrarily short wavelengths, there are infinities that arise from waves with arbitrarily long wavelengths. The former are called ‘ultraviolet divergences’. The latter are called ‘infrared divergences’, and they afflict theories with massless particles, like the photon. For example, in QED the collision of two electrons will emit an infinite number of photons with very long wavelengths and low energies, called ‘soft photons’. In practice this is not so bad, since any experiment can only detect photons with energies above some nonzero value. However, infrared divergences are conceptually important. It seems that in QED any electron is inextricably accompanied by a cloud of soft photons. These are real, not virtual particles. This may have remarkable consequences.

Battling these and many other subtleties, many brilliant physicists and mathematicians have worked on QED. The good news is that this theory has been proved to be ‘perturbatively renormalizable’:

• J. S. Feldman, T. R. Hurd, L. Rosen and J. D. Wright, QED: A Proof of Renormalizability, Lecture Notes in Physics 312, Springer, Berlin, 1988.

• Günter Scharf, Finite Quantum Electrodynamics: The Causal Approach, Springer, Berlin, 1995.

This means that we can indeed carry out the procedure roughly sketched above, obtaining answers to physical questions as power series in \alpha^{1/2}.

The bad news is we do not know if these power series converge. In fact, it is widely believed that they diverge! This puts us in a curious situation.

For example, consider the magnetic dipole moment of the electron. An electron, being a charged particle with spin, has a magnetic field. A classical computation says that its magnetic dipole moment is

\displaystyle{ \vec{\mu} = -\frac{e}{2m_e} \vec{S} }

where \vec{S} is its spin angular momentum. Quantum effects correct this computation, giving

\displaystyle{ \vec{\mu} = -g \frac{e}{2m_e} \vec{S} }

for some constant g called the gyromagnetic ratio, which can be computed using QED as a sum over Feynman diagrams with an electron exchanging a single photon with a massive charged particle:

The answer is a power series in \alpha^{1/2}, but since all these diagrams have an even number of vertices, it only contains integral powers of \alpha. The lowest-order term gives simply g = 2. In 1948, Julian Schwinger computed the next term and found a small correction to this simple result:

\displaystyle{ g = 2 + \frac{\alpha}{\pi} \approx 2.00232 }
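This one-loop value is easy to check numerically from the formula above:

```python
# Schwinger's 1948 one-loop result for the electron gyromagnetic ratio: g = 2 + alpha/pi.
import math

alpha = 1 / 137.036  # fine structure constant (approximate value from above)
g = 2 + alpha / math.pi
print(g)  # about 2.00232
```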

By now a team led by Toichiro Kinoshita has computed g up to order \alpha^5. This requires computing over 13,000 integrals, one for each Feynman diagram of the above form with up to 10 vertices! The answer agrees very well with experiment: in fact, if we also take other Standard Model effects into account we get agreement to roughly one part in 10^{12}.

This is the most accurate prediction in all of science.

However, as mentioned, it is widely believed that this power series diverges! Next time I’ll explain why physicists think this, and what it means for a divergent series to give such a good answer when you add up the first few terms.
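A classic toy example already shows how a divergent series can give an excellent answer from its first few terms. The Euler-type integral \int_0^\infty e^{-t}/(1+xt)\,dt has the asymptotic expansion \sum_n (-1)^n n! x^n, which diverges for every x \ne 0; yet for small x the partial sums first home in on the true value before blowing up around n \approx 1/x. (This is a standard illustration, not a QED computation.)

```python
# A divergent asymptotic series that works:
#   \int_0^oo e^{-t}/(1+x t) dt  ~  sum_n (-1)^n n! x^n
import math

def true_value(x, steps=400_000, tmax=40.0):
    """Midpoint-rule estimate of the integral (the tail beyond tmax is negligible)."""
    h = tmax / steps
    return sum(math.exp(-(i + 0.5) * h) / (1 + x * (i + 0.5) * h) * h
               for i in range(steps))

def partial_sum(x, n_terms):
    return sum((-1) ** n * math.factorial(n) * x**n for n in range(n_terms))

x = 0.1
target = true_value(x)
for n in (2, 6, 10, 30):
    print(n, partial_sum(x, n), "error:", abs(partial_sum(x, n) - target))
# the error shrinks until n ~ 1/x = 10, then the partial sums diverge
```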

by John Baez at September 19, 2016 01:00 AM

September 16, 2016

Symmetrybreaking - Fermilab/SLAC

The secret lives of long-lived particles

A theoretical species of particle might answer nearly every question about our cosmos—if scientists can find it.

The universe is unbalanced.

Gravity is tremendously weak. But the weak force, which allows particles to interact and transform, is enormously strong. The mass of the Higgs boson is suspiciously petite. And the catalog of the makeup of the cosmos? Ninety-six percent incomplete.

Almost every observation of the subatomic universe can be explained by the Standard Model of particle physics—a robust theoretical framework bursting with verifiable predictions. But because of these unsolved puzzles, the math is awkward, incomplete and filled with restrictions.

A few more particles would solve almost all of these frustrations. Supersymmetry (nicknamed SUSY for short) is a colossal model that introduces new particles into the Standard Model’s equations. It rounds out the math and ties up loose ends. The only problem is that after decades of searching, physicists have found none of these new friends.

But maybe the reason physicists haven’t found SUSY (or other physics beyond the Standard Model) is because they’ve been looking through the wrong lens.

“Beautiful sets of models keep getting ruled out,” says Jessie Shelton, a theorist at the University of Illinois, “so we’ve had to take a step back and consider a whole new dimension in our searches, which is the lifetime of these particles.”

In the past, physicists assumed that new particles produced in particle collisions would decay immediately, almost precisely at their points of origin. Scientists can catch particles that behave this way—for example, Higgs bosons—in particle detectors built around particle collision points. But what if new particles had long lifetimes and traveled centimeters—even kilometers—before transforming into something physicists could detect?

This is not unprecedented. Bottom quarks, for instance, can travel a few tenths of a millimeter before decaying into more stable particles. And muons can travel several kilometers (with the help of special relativity) before transforming into electrons and neutrinos. Many theorists are now predicting that there may be clandestine species of particles that behave in a similar fashion. The only catch is that these long-lived particles must rarely interact with ordinary matter, thus explaining why they've escaped detection for so long. One possible explanation for this aloof behavior is that long-lived particles dwell in a hidden sector of physics.

“Hidden-sector particles are separated from ordinary matter by a quantum mechanical energy barrier—like two villages separated by a mountain range,” says Henry Lubatti from the University of Washington. “They can be right next to each other, but without a huge boost in energy to get over the peak, they’ll never be able to interact with each other.”

High-energy collisions generated by the Large Hadron Collider could kick these hidden-sector particles over this energy barrier into our own regime. And if the LHC can produce them, scientists should be able to see the fingerprints of long-lived particles imprinted in their data.

Long-lived particles jolted into our world by the LHC would most likely fly at close to the speed of light for between a few micrometers and a few hundred thousand kilometers before transforming into ordinary and measurable matter. This incredibly generous range makes it difficult for scientists to pin down where and how to look for them.

But the lifetime of a subatomic particle is much like that of any living creature. Each type of particle has an average lifespan, but the exact lifetime of an individual particle varies. If these long-lived particles can travel thousands of kilometers before decaying, scientists are hoping that they’ll still be able to catch a few of the unlucky early-transformers before they leave the detector. Lubatti and his collaborators have also proposed a new LHC surface detector, which would extend their search range by many orders of magnitude.
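The quantitative intuition behind this is a relativistic decay length with an exponential spread. A minimal sketch (the particle parameters and the 10-meter "detector" below are hypothetical, chosen only for illustration, not taken from any real experiment):

```python
# Sketch of long-lived-particle kinematics: the mean lab-frame decay length is
# L = gamma * beta * c * tau, and individual decay points are exponentially
# distributed around it. We estimate the fraction decaying inside a detector.
import math
import random

C = 299792458.0  # speed of light, m/s

def decay_fraction_inside(tau_s, gamma, detector_m, n=100_000, seed=1):
    """Monte Carlo fraction of particles decaying within `detector_m` meters."""
    random.seed(seed)
    beta = math.sqrt(1 - 1 / gamma**2)
    mean_length = gamma * beta * C * tau_s  # mean lab-frame decay length, m
    inside = sum(1 for _ in range(n)
                 if random.expovariate(1 / mean_length) < detector_m)
    return inside / n

# hypothetical particle: proper lifetime 1 ns, boosted to gamma = 10, giving a
# mean decay length of about 3 m; a 10 m "detector" then catches most decays
print(decay_fraction_inside(1e-9, 10.0, 10.0))
```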

Because these long-lived particles themselves don’t interact with the detector, their signal would look like a stream of ordinary matter spontaneously appearing out of nowhere.

“For instance, if a long lived particle decayed into quarks while inside the muon detector, it would mimic the appearance of several muons closely clustered together,” Lubatti says. “We are triggering on events like this in the ATLAS experiment.” After recording the events, scientists use custom algorithms to reconstruct the origins of these clustered particles to see if they could be the offspring of an invisible long-lived parent.

If discovered, this new breed of matter could help answer several lingering questions in physics.

“Long-lived particles are not a prediction of a single new theory, but rather a phenomenon that could fit into almost all of our frameworks for beyond-the-Standard-Model physics,” Shelton says.

In addition to rounding out the Standard Model’s mathematics, inert long-lived particles could be cousins of dark matter—an invisible form of matter that only interacts with the visible cosmos through gravity. They could also help explain the origin of matter after the Big Bang.

“So many of us have spent a lifetime studying such a tiny fraction of the universe,” Lubatti says. “We’ve understood a lot, but there’s still a lot we don’t understand—an enormous amount we don’t understand. This gives me and my colleagues pause.”

by Sarah Charley at September 16, 2016 01:00 PM

Lubos Motl - string vacua and pheno

String theory lives its first, exciting life
Gross, Dijkgraaf mostly join the sources of deluded anti-string vitriol

Just like the Czech ex-president has said that the Left has definitively won the war against the Right for any foreseeable future, I think it's true that the haters of modern theoretical physics have definitively won the war for the newspapers and the bulk of the information sources.

The Quanta Magazine is funded by the Simons Foundation. Among the owners of the media addressing non-experts, Jim Simons is as close to the high-energy theoretical physics research community as you can get. But the journalists are independent etc. and the atmosphere among the physics writers is bad so no one could prevent the creation of an unfortunate text
The Strange Second Life of String Theory
by Ms K.C. Cole. The text is a mixed, and I would say mostly negative, package of various sentiments concerning the state of string theory. Using various words, the report about an alleged "failure of string theory" is repeated about 30 times in that article. It has become nearly mandatory for journalists to insert this spectacular lie into basically every new popular text about string theory. Only journalists who have some morality avoid this lie – and there aren't too many.

With an omnipresent negative accent, the article describes the richness or complexity of string theory as people have understood it in recent years and its penetration to various adjacent scientific disciplines. What I find really annoying is that some very powerful string theorists – David Gross and Robbert Dijkgraaf – have basically joined this enterprise.

They are still exceptions – I am pretty sure that Edward Witten, Cumrun Vafa, and many others couldn't be abused to write similar anti-string rants – but voices such as Gross' and Dijkgraaf's are the privileged exceptions among the corrupt class of journalist hyenas because they are willing to say something that fits the journalists' pre-determined "narrative".

OK, let me mention a few dozen problems I have with that article.
The Strange Second Life of String Theory
It's the title. Well, it's nonsense. One could talk about a second life if either string theory had died at some moment in the past and had been resuscitated; or if one could separate its aspects into two isolated categories, "lives".

It's spectacularly obvious that none of these conditions holds. String theory has never "died" so it couldn't have been resuscitated for a second life. And the applications here and there are continuously connected with all other, including the most formal, aspects of string theory.

So there's simply just one life and the claim about a "second life" is a big lie by itself. The subtitle is written down to emphasize the half-terrible, half-successful caricature of string theory that this particular writer, K.C. Cole, decided to advocate:
String theory has so far failed to live up to its promise as a way to unite gravity and quantum mechanics. At the same time, it has blossomed into one of the most useful sets of tools in science.
Well, string theory has been known to consistently unify gravity and quantum mechanics from the 1970s, and within fully realistic supersymmetric models, since the 1980s. Already in the 1970s, it was understood why string theory avoids the ultraviolet divergences that spoil the more straightforward attempts to quantize Einstein's gravity. In the 1980s, it was shown that (super)string theory has solutions that achieve this consistency; but they also contain forces, fields, particles, and processes of all qualitative types that are needed to explain all the observations that have ever been made. Whether or not we know a compactification that precisely matches Nature around us, we already know that string theory has proven that gravity and quantum mechanics are reconcilable.

So already decades ago, string theory has successfully unified gravity and quantum mechanics. No evidence whatsoever has ever emerged that something was wrong about these proofs of the consistency. So the claim about the "failure" to unify gravity and quantum mechanics is just a lie.

You may see that Cole's basic message is simple. She wants to claim that string theory is split into two parts, a failed one and a healthy one. Moreover, the failed one is the very core of string theory – all the conceptual and unification themes. The reality is that the split doesn't exist; and the formal, conceptual, unification theme in string theory is the essential and priceless one.

This deceitful theme is repeated many, many times by K.C. Cole in her text. There are lots of other deceptions, too:
To be sure, the theory came with unsettling implications. The strings were too small to be probed by experiment and lived in as many as 11 dimensions of space.
Both of these "unsettling implications of string theory" are just rubbish. It's very likely that strings are very small and the size is close to the fundamental Planck scale, \(10^{-35}\) meters. But this wasn't a new insight, as you might guess if you were capable of noticing the name "Planck" in the previous sentence. More than 100 years ago, Max Planck determined that the fundamental processes of Nature probably take place at distances near the Planck length.

It follows directly from dimensional analysis. There may be loopholes or not. The loopholes were imaginable without string theory and they are even more clearly imaginable and specific with string theory (old and large extra dimensions etc.). But string theory has only made the older ideas about the scale more specific. The ultrashort magnitude of the Planck length was in no way a new implication of string theory.
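The dimensional analysis is a one-line exercise: the unique length that can be built out of \(\hbar\), \(G\), and \(c\) is

```latex
\ell_P = \sqrt{\frac{\hbar G}{c^3}}
       = \sqrt{\frac{(1.05\times 10^{-34}\,\mathrm{J\,s})
                     (6.67\times 10^{-11}\,\mathrm{m^3\,kg^{-1}\,s^{-2}})}
                    {(3.00\times 10^{8}\,\mathrm{m/s})^{3}}}
       \approx 1.6\times 10^{-35}\,\mathrm{m}.
```

One may check the units explicitly: \(\mathrm{(J\,s)\,(m^3\,kg^{-1}\,s^{-2})/(m^3\,s^{-3})} = \mathrm{m^2}\), so the square root is indeed a length.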

The existence of extra dimensions of space may be said to be string theory's implication. Older theories – Kaluza's from 1919 – were already trying to unify gravity and electromagnetism using an extra dimension, but string theory has indeed made those extra dimensions unavoidable. But what's wrong is the claim that this implication is "unsettling". The existence of extra dimensions of some size – which may be as short as a "few Planck lengths" but may also be much longer – is a wonderful prediction of string theory that is celebrated as a great discovery.

Although it is not "experimentally proven as a must" at this point, it is compatible with all observations we know and people who understand the logic know very well that the extra dimensions wonderfully agree with the need for a structure that explains the – seemingly complicated – list of particles and interactions that has been discovered experimentally. This list and its complexity are in one-to-one correspondence with the shape and structure of the extra dimensions. This identification – the particle and fields spectrum is explained by the shape – sounds wonderful at the qualitative level. But calculations show that it actually works.

So when someone assigns negative sentiments to this groundbreaking advance, she is only exposing her idiocy.

It's more frustrating to see what David Gross is saying these days:
For a time, many physicists believed that string theory would yield a unique way to combine quantum mechanics and gravity. “There was a hope. A moment,” said David Gross, an original player in the so-called Princeton String Quartet, a Nobel Prize winner and permanent member of the Kavli Institute for Theoretical Physics at the University of California, Santa Barbara. “We even thought for a while in the mid-’80s that it was a unique theory.”
What people believed for a few months in the mid 1980s was that there was a unique realistic compactification of string theory to \(d=4\). It was understood early on that there existed compactified string theories that don't match the particle spectrum we have observed. But it was understood that there could be one that matches it perfectly. And it's still as true as it was 30 years ago. The theoretically acceptable list of solutions is not unique but the solution that describes Nature is unique.

Moreover, in the duality revolution of the mid 1990s, people realized that all the seemingly inequivalent "string theories", as they would call them in the previous decades, are actually connected by dualities or adjustments of the moduli. So they're actually not separate theories but mutually connected solutions of one theory. That's why all competent experts stopped using the term "string theories" in the plural after the duality revolution.

David Gross likes to say that "string theory is a framework" – in the sense that it has many "specific models", just like quantum field theories may come in the form of "many inequivalent models", and these models share some mathematical methods and principles while we also need to find out which of them is relevant if we want to make a particular prediction.

So far so good. But there's also a difference between quantum field theory and string theory. Two different QFTs are really distinct, inequivalent, different theories – encoded in a different Lagrangian, for example, and there's no way to get the objects from one theory with one Lagrangian in the QFT of another theory with a different Lagrangian. But in string theory, one can always get objects of one kind by doing some (possibly extreme, but finite) change of the space or vacuum that starts in another solution. All the different vacua and physical objects and phenomena in them do follow from some exactly identical, complete laws – it's just the effective laws for an expansion around a (vacuum or another) state that may come in many forms.

String theory is one theory. No legitimate "counter-evidence" that would revert this insight of the 1990s has been found ever since. Gross adds some "slightly less annoying" comments as well. However, he also escalates the negative ones. "There was a hope," Gross said, later suggesting that there's no "hope" anymore. But there's one even bigger shocker from Gross:
After a certain point in the early ’90s, people gave up on trying to connect to the real world.
WTF!? Did Gross say that string phenomenology hasn't existed "since the early 1990s"?

Maybe you completely stopped doing and even following string phenomenology but that's just too bad. The progress has been substantial. Take just one subdirection that was basically born in the last decade: Vafa and collaborators' "F-theory with localized physics on singularities" models of particle physics. Let me pick e.g. the Vafa-Heckman-Beasley 2008 papers as the starting point – with 350 and 400 followups, respectively. Recent advances derive quite interesting flavor physics and similar things – e.g. neutrino oscillations, including the recently observed nonzero \(\theta_{13}\) angle – out of F-theory.

And this is just a fraction of string phenomenology. One could spend hours describing how heterotic or M-theory phenomenology advanced e.g. from 2000. Did you really say that "people gave up on trying to connect to the real world" over 20 years ago, David? It sounds absolutely incredible to me. Maybe, while you were criticizing Joe and others for their inclinations to the anthropic principle and "giving up", you accepted this defeatist stuff yourself, and maybe even more so than Joe did, at least when it comes to your thinking or not thinking about everything else.

Maybe you – and Dijkgraaf – now think that you may completely ignore physicists like Vafa because you're directors and he isn't. But he's successfully worked on many things including the connections of string theory to the experimental data which is arguably much more important than your administrative jobs. This quote of Gross' sheds new light e.g. on his exchanges with Gordon Kane. Indeed, Gross looks like a guy who stopped thinking about some matters – string phenomenology, in this case – more than 20 years ago but who still wants to keep the illusion that he's at least as good as the most important contemporary researchers in the discipline. Sorry but if that's approximately true, then Gross is behaving basically like Peter W*it. There may be wrong statements and guesses in many string phenomenology papers but they're doing very real work and the progress in the understanding of the predictions of those string compactifications has been highly non-trivial.

Vafa and Kane were just two names I mentioned. The whole M-theory on \(G_2\) manifolds phenomenology was only started around 2000 – by Witten and others. Is this whole research sub-industry also non-existent according to Gross? What about the braneworlds? Old large and warped dimensions? Detailed stringy models of inflation and cosmology in general? Most of the research on all these topics and others took after the mid 1990s. Are you serious that people stopped thinking about the connections between strings and experiments?

But Robbert Dijkgraaf contributes to this production of toxic nonsense, too:
“We’ve been trying to aim for the successes of the past where we had a very simple equation that captured everything,” said Robbert Dijkgraaf, the director of the Institute for Advanced Study in Princeton, New Jersey. “But now we have this big mess.”
Speak for yourself, Robbert. The fundamental laws of a theory of everything – of string theory – may be given by a "very simple equation". And I've been attracted to this possibility, too, especially as a small kid. But as an adult, I've never believed it was realistic – and I am confident that the same holds for most of the currently active string theorists. In practice, the equations we had to use to study QFT or string theory were "never too simple". Well, when I liked string field theory and didn't yet appreciate its limited, perturbative character, I liked the equations of motion of the background-independent version of string field theory,\[

A * A = 0.

\] That was a very attractive equation. The string field \(A\) acquires a vacuum condensate \(A_0\), i.e. \(A=A_0+a\), composed of some "nearly infinitesimal strings" such that \(A_0 * \Phi\) or an (anti)commutator is related to \(Q \Phi\) acting on a string and encodes the BRST operator \(Q\). The terms \(Qa\) impose the BRST-closedness of the string states. The equation above also contains the residual term \(a*a\) which is responsible for interactions. The part \(A_0*A_0=0\) of the equation is equivalent to the condition for the nilpotency of the BRST operator, \(Q^2=0\).
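The decomposition described in the previous paragraph can be written out schematically (a sketch only – the gradings and signs of the star product are suppressed here). Substituting \(A = A_0 + a\) into \(A*A=0\) gives

```latex
0 = A * A
  = \underbrace{A_0 * A_0}_{\Leftrightarrow\; Q^2 = 0}
  \;+\; \underbrace{A_0 * a + a * A_0}_{\;\sim\; Q a}
  \;+\; \underbrace{a * a}_{\text{interactions}},
```

so the fluctuation \(a\) obeys the cubic equation of motion \(Qa + a*a = 0\): the linear term enforces BRST-closedness of free string states and the quadratic term generates the interactions, exactly as stated above.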

It's fun and (at least open, perturbative) string theory may be derived from this starting point. At the same moment, this starting point doesn't seem to allow us to calculate effects in string theory beyond perturbative expansions – at least, it doesn't seem more potent in this way than other perturbative approaches to string theory.

OK, I want to say that a vast majority of what string theorists have been doing since the very beginning of quantitative string theory, in 1968, had nothing to do with \(A*A=0\) or similar "very simple equations". Maybe Robbert Dijkgraaf was obsessed with this idea of "very simple equations" when he began to study things like that but I never was and I think that most string theorists haven't. Already when I was a kid, it was rather clear to me that one needs to deal with some "rather difficult equations" if one wants to address the most fundamental laws of physics. "Próstoj prostóty něbudět" ("There won't be any simple simplicity any longer") was a favorite quote I picked from Lev Okun's popular book on particle physics when I was 16 or so, and since that moment, I have hardly ever believed otherwise.

There's still some kind of amazing "conceptual simplicity" in string theory but it's not a simplicity of the type that a very short equation could completely define everything about physics that we need and would be comprehensible to the people with some basic math training. A very simple equation like that could finally be found but the advances in string theory have never led to any significant evidence that a "very simple equation" like that should be behind everything. At least so far.

Nothing has changed about these basic qualitative matters since 1968. So Dijkgraaf's claim – that string theorists have been doing research by looking for some "very simple equation" and only recently found reasons to consider this silly – is simply a lie. This "very simple research" was never any substantial part of the string theory research and nothing has changed about these general matters in recent years or decades.

And what about the words "big mess"? What do you exactly count as parts of the "big mess"? Are the rules about the Calabi-Yau compactifications of heterotic string theory a part of the "big mess"? What about matrix string theory? AdS/CFT and its portions? Sorry, I would never use the term "big mess" for these concepts and dozens or hundreds of others. They're just parts of the paramount knowledge that was uncovered in recent decades.

Maybe, Robbert, you fell so much in love with your well-paid job as the director that you now consider the people in the IAS and elsewhere doing serious research to be inferior dirty animals that should be spat upon. If that's the case, they should work hard to remove you. Or do it like the tribe in Papua-New Guinea.

To make things worse:
Its tentacles have reached so deeply into so many areas in theoretical physics, it’s become almost unrecognizable, even to string theorists. “Things have gotten almost postmodern,” said Dijkgraaf, who is a painter as well as mathematical physicist.
"Tentacles" don't exactly sound beautiful or friendly – well, they're still friendlier than when someone calls all these insights "tumors". But the claim that string theory has become "unrecognizable to string theorists" is just rubbish, too. Applications of string theory in some other disciplines – e.g. in condensed matter physics – may be hard for a pure string theorist. But that's because these applications are not just string theory. They are either "modified string theory" or "string theory mixed with other topics" etc.

Nothing has become "less recognizable" let alone "postmodern" about pure string theory itself. It's a theory including all physics that may be continuously connected to the general perturbative formula for S-matrix amplitudes that uses a conformal-invariant, modular-invariant theory on a two-dimensional world sheet. Period. The actual "idea" about the set of all these possible phenomena remains clear enough. There are six maximally decompactified vacua of string theory and a large number of compactified solutions that increases with the number of compactified dimensions.

The number of all such solutions and even of the elements of some subsets may be very large but there is nothing "postmodern" about large numbers. Mathematics works for small numbers as well as large numbers. Postmodernism never works. These – the richness of a space of solutions and postmodernism – are completely different concepts.

Now, boundaries between string and non-string theory.
“It’s hard to say really where you should draw the boundary around and say: This is string theory; this is not string theory,” said Douglas Stanford, a physicist at the IAS. “Nobody knows whether to say they’re a string theorist anymore,” said Chris Beem, a mathematical physicist at the University of Oxford. “It’s become very confusing.”
One interpretation of Beem's words is a worrisome one: He is a rat who wants to maximally lick the rectums of the powerful ones and because dishonest and generally f*cked-up string theory bashers became omnipresent and powerful, he is tempted to lick their rectums as well. So he may want to say he isn't a string theorist.

But even with the more innocent interpretation of the paragraph above, it's mostly nonsense. Just look at the list of Chris Beem's particular papers. It's very clear that he is mostly a quantum field theorist. Even though he co-authored papers with many string theorists – I know many of his co-authors – it isn't even clear from the papers whether all the authors had to be given the basic education in the subject.

It's not clear whether Chris Beem is a string theorist but it's not because string theory is ill-defined. It's because it's not clear what Chris Beem is actually interested in, what he knows, and what he works on.

There is work on the boundary of "being pure string theory" and "having no string theory at all". But there's nothing "pathological" about it. The situation is completely analogous to the questions in the 1930s whether some papers and physicists' work were on quantum mechanics as understood in the 1920s, or quantum field theory. Well, quantum field theory is just a more complete, specific, sophisticated layer of knowledge built on top of quantum mechanics – just like string theory is a more complete, specific, sophisticated layer of knowledge built on top of quantum field theory.

In the late 1920s and the 1930s, people would start to study many issues such as the corrections to magnetic moments, hydrogen energy levels from the Lamb shift (virtual photons) etc. They could have "complained" in exactly the same way: We don't know whether we're working on quantum mechanics or quantum field theory. Well, both. It's clear that you can get far enough if you think of your research as some "cleverly directed" research on some heuristic generalization of the old quantum mechanics. But you may also view it as a more rigorous derivation from the newer, more complete theory. Once the more complete, newer theory is sufficiently understood, people who understand it know exactly what they're doing. Some people don't know it as well.

Exactly the analogous statements may be made about the research on topics where the "usual QFT methods aren't enough" yet the goals look more QFT-like and less string-like than the goals of the most "purely stringy" papers. So why the hell are you trying to paint all trivial things negatively? There are many papers that have to employ many insights and many methods from various "subfields". And they often need to know many of these things just superficially. What's wrong about it? It's unsurprising that such papers can't be unambiguously categorized. Examples like that exist in (and in between) most disciplines of science.

What's actually wrong is that the number of people who do full-fledged string research has been reduced. I think that it has been reduced partly if not mostly due to the sentiment I previously attributed to Chris Beem – many people want to lick the aßes of the string-bashing scum that has penetrated many environments surrounding the research institutions. And the string-theory-bashing scum has tangibly reduced the funding etc.

David Simmons-Duffin, Eva Silverstein, and Juan Maldacena didn't say anything that could be interpreted as string-bashing in isolation. They explain that much of string theory is about interpolations of known theories or results; string theory has impact on cosmology and other fields; we don't know the role of the landscape in Nature around us (Maldacena also defines string theory as "solid theoretical research on natural geometric structures"). Nevertheless, K.C. Cole made their statements look like a part of the same string-bashing story.

There are lots of quotes and assertions in the article that are borderline and much less often completely correct but their "emotional presentation" is always bizarre in some way. But there are many additional statements that aren't right:
Toy models are standard tools in most kinds of research. But there’s always the fear that what one learns from a simplified scenario does not apply to the real world. “It’s a bit of a deal with the devil,” Beem said. “String theory is a much less rigorously constructed set of ideas than quantum field theory, so you have to be willing to relax your standards a bit,” he said. “But you’re rewarded for that. It gives you a nice, bigger context in which to work.”
Why does Mr Beem think that "string theory is a much less rigorously constructed set of ideas than QFT"? It's an atlas composed of "patches" that are as rigorously constructed as QFTs – because the patches are QFTs. So perturbative string theory is all about a proper analysis of two-dimensional conformal field theory. Everything about perturbative string theory is encoded in this subset of QFTs. Similarly, Matrix theory allows us to fully define the physics of string/M-theory using some "world volume" QFTs, and AdS/CFT allows us to define the gravitational physics in an AdS bulk using a boundary CFT which is, once again, exactly as rigorous as a QFT because it is a QFT. (Later in Cole's text, Dijkgraaf mentions that the right "picture" for a string theory could be an atlas, too.)

So what the hell is Beem talking about? And additional aßholes are being added to the article:
“Answering deep questions about quantum gravity has not really happened,” [Sean Carroll] said.
What? Does he say such a thing after the black hole thermodynamics was microscopically understood, not to mention lots of insights about topology change, mirror symmetry, tachyons on orbifold fixed points etc., and even after the people found the equivalence between quantum entanglement and non-traversable wormholes and many other things? Nothing of this kind has happened?

At the end, Nima Arkani-Hamed says:
If you’re excited about responsibly attacking the very biggest existential physics questions ever, then you should be excited. But if you want a ticket to Stockholm for sure in the next 15 years, then probably not.
I would agree with both sentences including the last one because it contains the word "probably". This prize is far more experimentally oriented and of course, many pieces of work (with lasers and other things) that are vastly less important than those in stringy and string-like theoretical physics have already been awarded the Nobel prize. The Nobel prizes still look credible enough to me but I haven't been the child who's been parroting clichés that "it's great to get one" for over 25 years. It's simply not a goal of a mature physicist. On the other hand, I am not really certain that no one will get a Nobel prize for string theory in the following decade or two.

But I think it's no coincidence that just like the title, the last sentence of Cole's article is negative about string theory. Negativity about string theory is really "her main story". Too bad that numerous well-known people join this propaganda as if they were either deluded cranks or opportunity-seeking rats.

by Luboš Motl at September 16, 2016 11:48 AM

John Baez - Azimuth

The Circular Electron Positron Collider

Chen-Ning Yang is perhaps China’s most famous particle physicist. Together with Tsung-Dao Lee, he won the Nobel prize in 1957 for discovering that the laws of physics know the difference between left and right. He helped create Yang–Mills theory: the theory that describes all the forces in nature except gravity. He helped find the Yang–Baxter equation, which describes what particles do when they move around on a thin sheet of matter, tracing out braids.

Right now the world of particle physics is in a shocked, somewhat demoralized state because the Large Hadron Collider has not yet found any physics beyond the Standard Model. Some Chinese scientists want to forge ahead by building an even more powerful, even more expensive accelerator.

But Yang recently came out against this. This is a big deal, because he is very prestigious, and only China has the will to pay for the next machine. The director of the Chinese institute that wants to build the next machine, Wang Yifang, issued a point-by-point rebuttal of Yang the very next day.

Over on G+, Willie Wong translated some of Wang’s rebuttal in some comments to my post on this subject. The real goal of my post here is to make this translation a bit easier to find—not because I agree with Wang, but because this discussion is important: it affects the future of particle physics.

First let me set the stage. In 2012, two months after the Large Hadron Collider found the Higgs boson, the Institute of High Energy Physics proposed a bigger machine: the Circular Electron Positron Collider, or CEPC.

This machine would be a ring 100 kilometers around. It would collide electrons and positrons at an energy of 250 GeV, about twice what you need to make a Higgs. It could make lots of Higgs bosons and study their properties. It might find something new, too! Of course that would be the hope.

It would cost $6 billion, and the plan was that China would pay for 70% of it. Nobody knows who would pay for the rest.

According to Science:

On 4 September, Yang, in an article posted on the social media platform WeChat, says that China should not build a supercollider now. He is concerned about the huge cost and says the money would be better spent on pressing societal needs. In addition, he does not believe the science justifies the cost: The LHC confirmed the existence of the Higgs boson, he notes, but it has not discovered new particles or inconsistencies in the standard model of particle physics. The prospect of an even bigger collider succeeding where the LHC has failed is “a guess on top of a guess,” he writes. Yang argues that high-energy physicists should eschew big accelerator projects for now and start blazing trails in new experimental and theoretical approaches.

That same day, IHEP’s director, Wang Yifang, posted a point-by-point rebuttal on the institute’s public WeChat account. He criticized Yang for rehashing arguments he had made in the 1970s against building the BEPC. “Thanks to comrade [Deng] Xiaoping,” who didn’t follow Yang’s advice, Wang wrote, “IHEP and the BEPC … have achieved so much today.” Wang also noted that the main task of the CEPC would not be to find new particles, but to carry out detailed studies of the Higgs boson.

Yang did not respond to request for comment. But some scientists contend that the thrust of his criticisms are against the CEPC’s anticipated upgrade, the Super Proton-Proton Collider (SPPC). “Yang’s objections are directed mostly at the SPPC,” says Li Miao, a cosmologist at Sun Yat-sen University, Guangzhou, in China, who says he is leaning toward supporting the CEPC. That’s because the cost Yang cites—$20 billion—is the estimated price tag of both the CEPC and the SPPC, Li says, and it is the SPPC that would endeavor to make discoveries beyond the standard model.

Still, opposition to the supercollider project is mounting outside the high-energy physics community. Cao Zexian, a researcher at CAS’s Institute of Physics here, contends that Chinese high-energy physicists lack the ability to steer or lead research in the field. China also lacks the industrial capacity for making advanced scientific instruments, he says, which means a supercollider would depend on foreign firms for critical components. Luo Huiqian, another researcher at the Institute of Physics, says that most big science projects in China have suffered from arbitrary cost cutting; as a result, the finished product is often a far cry from what was proposed. He doubts that the proposed CEPC would be built to specifications.

The state news agency Xinhua has lauded the debate as “progress in Chinese science” that will make big science decision-making “more transparent.” Some, however, see a call for transparency as a bad omen for the CEPC. “It means the collider may not receive the go-ahead in the near future,” asserts Institute of Physics researcher Wu Baojun. Wang acknowledged that possibility in a 7 September interview with Caijing magazine: “opposing voices naturally have an impact on future approval of the project,” he said.

Willie Wong’s prefaced his translation of Wang’s rebuttal with this:

Here is a translation of the essential parts of the rebuttal; some standard Chinese language disclaimers of deference etc are omitted. I tried to make the translation as true to the original as possible; the viewpoints expressed are not my own.

Here is the translation:

Today (September 4), the article by CN Yang titled “China should not build an SSC today” was published. As a scientist who works on the front line of high energy physics, and as the current director of the high energy physics institute in the Chinese Academy of Sciences, I cannot agree with his viewpoint.

(A) The first reason for Dr. Yang’s objection is that a supercollider is a bottomless hole. His objection stems from the American SSC, which wasted 3 billion US dollars and amounted to naught. The LHC cost over 10 billion US dollars. Thus the proposed Chinese accelerator cannot cost less than 20 billion US dollars, with no guaranteed returns. [Ed: emphasis original]

Here, there are actually three questions. The first is "why did the SSC fail?" The second is "how much would a Chinese collider cost?" And the third is "is the estimate reasonable and realistic?" I address them point by point.

(1) Why did the American SSC fail? Are all colliders bottomless pits?

The many reasons leading to the failure of the American SSC include the government deficit at the time, the fight for funding against the International Space Station, the party politics of the United States, and the regional competition between Texas and other states. Additionally, there were problems with poor management, bad budgeting, ballooning construction costs, and a failure to secure international collaboration. See references [2,3] [Ed: consult original article for references; items 1-3 are English language]. In reality, "exceeding the budget" was definitely not the primary reason for the failure of the SSC; rather, the failure should be attributed to special and circumstantial reasons, caused mainly by political elements.

For the US, abandoning the SSC was a very incorrect decision. It lost the US the chance of discovering the Higgs boson, as well as the foundations and opportunities for future development, and thereby also the leadership position the US had occupied internationally in high energy physics until then. This definitely had a very negative impact on big science initiatives in the US, and caused one generation of Americans to lose the courage to dream. The reasons given by the American scientific community against the SSC are very similar to what we hear today against the Chinese collider project. But in fact the cancellation of the SSC did not increase funding to other scientific endeavors. Of course, continuing the SSC would not have reduced the funding to other scientific endeavors either, and many people who objected to the project now regret it.

Since then, the LHC was constructed in Europe, and achieved great success. Its construction exceeded the original budget, but not by a lot. This shows that supercollider projects do not have to be bottomless pits, and have a chance to succeed.

The Chinese political landscape is entirely different from that of the US. In particular, for large-scale construction projects, the political system is superior. China has already accomplished many tasks which the Americans would not, or could not, do; many more will happen in the future. The failure of the SSC doesn't mean that we cannot succeed. We should scientifically analyze the situation, while at the same time fostering international collaboration and properly managing the budget.

(2) How much would it cost? Our planned collider (using a circumference of 100 kilometers for computations) will proceed in two steps. [Ed: details omitted. The author estimated that the electron-positron collider will cost 40 billion Yuan, followed by the proton-proton collider which will cost 100 billion Yuan, not accounting for inflation, with approximately a 10-year construction time for each phase.] The two-phase planning is meant to showcase the scientific longevity of the project, especially its pull on other technical developments (e.g. high energy superconductors), and the fact that the second phase [ed: the proton-proton collider] is complementary to the scientific and technical developments of the first phase. The reason the second-phase designs are incorporated in the discussion is to prevent a scenario where design elements of the first phase inadvertently shut off the possibility of further expansion in the second phase.

(3) Is this estimate realistic? Are we going to go down the same road as the American SSC?

First, note that in the past 50 years there have been many successful colliders internationally (LEP, LHC, PEPII, KEKB/SuperKEKB, etc.) and many unsuccessful ones (ISABELLE, SSC, FAIR, etc.). The failed ones are all proton accelerators; all electron colliders have been successful. The main reason is that proton accelerators are more complicated, and it is harder to correctly estimate the costs of constructing machines beyond the current frontiers.

There are many successful large-scale constructions in China. In the 40 years since the founding of the Institute of High Energy Physics, we've built [list of high energy experiment facilities; I don't know all their names in English], each costing over 100 million Yuan, and none went more than 5% over budget, in terms of actual construction costs, time to completion, and meeting milestones. We have well-developed expertise in budgeting, construction, and management.

For the CEPC (electron-positron collider) our estimates relied on two methods:

(i) Summing of the parts: separately estimating costs of individual elements and adding them up.

(ii) Comparisons: using costs for elements derived from costs of completed instruments both domestically and abroad.

At the level of the total cost and at the systems level, the two methods should produce cost estimates within 20% of each other.

After completing the initial design [ref. 1], we produced a list of more than 1000 pieces of required equipment, and based our estimates on that list. The estimates were reviewed by local and international experts.

For the SPPC (the proton-proton collider; second phase) we only used the second method (comparison). This is because the second phase is not the main mission at hand, and we are not yet sure whether we should commit to it. It is therefore not very meaningful to discuss its potential cost right now. We are committed to building the SPPC only once we are sure the science and the technology are mature.

(B) The second reason given by Dr. Yang is that China is still a developing country, and there are many social-economic problems that should be solved before considering a supercollider.

Any country, especially one as big as China, must consider both the immediate and the long term in its planning. Of course social-economic problems need to be solved, and indeed solving them currently takes the lion's share of our national budget. But we also need to consider the long term, including an appropriate amount of expenditure on basic research, to enable our continuous development and the potential to lead the world. The China at the end of the Qing dynasty had a rich populace and the world's highest GDP. But even though the government had the ability to purchase armaments, the lack of scientific understanding reduced the country to always being on the losing side of wars.

In the past few hundred years, developments in understanding the structure of matter, from molecules and atoms to the nucleus and the elementary particles, all contributed to and led the scientific developments of their eras. High energy physics pursues the finest structure of matter and its laws; the techniques used cover many different fields, from accelerators and detectors to low temperatures, superconductivity, microwaves, high frequencies, vacuum, electronics, high precision instrumentation, automatic controls, computer science and networking, and in many ways it has led to developments in those fields and their broad adoption. This is an indicator field for basic science and technical development. Building the supercollider can result in China occupying the leadership position in such diverse scientific fields for several decades, and also lead to the domestic production of many important scientific and technical instruments. Furthermore, it will allow us to attract international intellectual capital, and allow the training of thousands of world-leading specialists in our institutes. How is this not an urgent need for the country?

In fact, the impression that the Chinese government and the average Chinese person create for the world at large is of a populace with lots of money, and one infatuated with money. It is hard for a large country to have an international voice and influence without significant contributions to human culture. This influence, in turn, affects the benefits China receives from other countries. Relative to current GDP, the cost of the proposed project (including the phase 2 SPPC) does not exceed that of the Beijing electron-positron collider completed in the 80s, and is in fact lower than those of LEP, LHC, SSC, and ILC.

Designing and starting the construction of the next supercollider within the next 5 years is a rare opportunity to let us achieve a leadership position internationally in the field of high energy physics. First, the newly discovered Higgs boson has a relatively low mass, which allows us to probe it further using a circular electron-positron collider; furthermore, such colliders have a chance to be modified into proton colliders, so this facility will have over 5 decades of scientific use. Secondly, Europe, the US, and Japan all already have scientific items on their agendas, and within 20 years probably cannot construct similar facilities. This gives us an advantage in competitiveness. Thirdly, we already have the experience of building the Beijing electron-positron collider, so such a facility plays to our strengths. The window of opportunity typically lasts only 10 years; if we miss it, we don't know when the next window will be. Furthermore, we have extensive experience in underground construction, and the Chinese economy is currently at a stage of high growth. We have the ability to do the construction and also the scientific need. Therefore a supercollider is a very suitable item to consider.

(C) The third reason given by Dr. Yang is that constructing a supercollider necessarily excludes funding other basic sciences.

China currently spends 5% of its R&D budget on basic research; internationally, 15% is more typical for developed countries. As a developing country aiming to join the ranks of developed countries, and as a large country, I believe we should aim to raise the ratio gradually to 10% and eventually to 15%. In terms of numbers, funding for basic science has large potential for growth (around 100 billion Yuan per annum) without taking away from other basic science research.

On the other hand, where should the increased funding be directed? Everyone knows that a large portion of our basic research budget is spent on purchasing scientific instruments, especially from international sources. If we evenly distribute the increased funding among all basic science fields, the end result is raising the GDP of the US, Europe, and Japan. If we instead spend 10 years putting 30 billion Yuan into accelerator science, more than 90% of the money will remain in the country, improving our technical development and the market share of domestic companies. This will also allow us to raise many new scientists and engineers, and greatly improve the state of the art in domestically produced scientific instruments.

In addition, putting emphasis on high energy physics will only bring us up to the normal funding level internationally (it is a fact that particle physics and nuclear physics are severely underfunded in China). For the purpose of developing a world-leading big science project, the CEPC is a very good candidate. And it does not contradict a desire to also develop other basic sciences.

(D) Dr. Yang’s fourth objection is that both supersymmetry and quantum gravity have not been verified, and the particles we hope to discover using the new collider will in fact be nonexistent.

That is of course not the goal of collider science. In [ref 1], which I gave to Dr. Yang myself, we clearly discussed the scientific purpose of the instrument. Briefly speaking, the standard model is only an effective theory in the low energy limit, and a new and deeper theory is needed. Even though there is some experimental evidence beyond the standard model, more data will be needed to indicate the correct direction in which to develop the theory. Of the known problems with the standard model, most are related to the Higgs boson. Thus a deeper physical theory should show hints in a better understanding of the Higgs boson. The CEPC can probe the Higgs boson to 1% precision [ed. I am not sure what this means], 10 times better than the LHC. From this we have the hope of correctly identifying various properties of the Higgs boson, and testing whether it in fact matches the standard model. At the same time, the CEPC has the possibility of measuring the self-coupling of the Higgs boson, and of understanding the Higgs contribution to the vacuum phase transition, which is important for understanding the early universe. [Ed. in the previous sentence, the translation is a bit questionable since some HEP jargon is used with which I am not familiar] Therefore, regardless of whether the LHC has discovered new physics, the CEPC is necessary.

If there are new coupling mechanisms for the Higgs, new associated particles, a composite structure for the Higgs boson, or other differences from the standard model, we can continue with the second phase, the proton-proton collider, to probe the difference directly. Of course this could be due to supersymmetry, but it could also be due to other particles. For us experimentalists, while we care about theoretical predictions, our experiments are not designed only for them. To predict at this moment in time whether a collider can or cannot discover a hypothetical particle seems premature, and is not the viewpoint of the HEP community in general.

(E) The fifth objection is that in the past 70 years high energy physics has not led to tangible improvements for humanity, and in the future likely will not.

In the past 70 years, there have been many results from high energy physics which led to techniques common in everyday life. [Ed: examples listed include synchrotron radiation, free electron lasers, neutron scattering sources, MRI, PET, radiation therapy, touch screens, smart phones, and the world-wide web. I omit the prose.]

[Ed. Author proceeds to discuss hypothetical economic benefits from
a) superconductor science
b) microwave source
c) cryogenics
d) electronics
sort of the usual stuff you see in funding proposals.]

(F) The sixth reason was that the Institute of High Energy Physics of the Chinese Academy of Sciences has not produced much in the past 30 years. The major scientific contributions to the proposed collider will be directed by non-Chinese, and so the Nobel Prize will also go to a non-Chinese.

[Ed. I’ll skip this section because it is a self-congratulatory pat on one’s back (we actually did pretty well for the amount of money invested), a promise to promote Chinese participation in the project (in accordance to the economic investment), and the required comment that “we do science for the sake of science, and not for winning the Nobel.”]

(G) The seventh reason is that the future of HEP lies in developing a new technique to accelerate particles and in developing a geometric theory, not in building large accelerators.

A new method of accelerating particles is definitely an important aspect of accelerator science. In the next several decades this can prove useful for scattering experiments, or for applied fields where beam confinement is not essential. For high energy colliders, in terms of beam emittance and energy efficiency, new acceleration principles have a long way to go. During this period, high energy physics cannot simply be put on hold. As for "geometric theory" or "string theory", these are too far from being experimentally approachable, and are not problems we can consider currently.

People disagree on the future of high energy physics. Currently there are no Chinese winners of the Nobel Prize in physics, but there are many internationally. Dr. Yang's viewpoints are clearly out of the mainstream, not just currently, but also over the past several decades. Dr. Yang is documented to have held a pessimistic view of high energy physics and its future since the 60s, and that is how he missed out on the discovery of the standard model. He has been on record as being against Chinese collider science since the 70s. It is fortunate that the government supported the Institute of High Energy Physics and constructed various supporting facilities, leading to our current achievements in synchrotron radiation and neutron scattering. For the future, we should listen to the younger scientists at the forefront of current research, for that is how we can gain international recognition for our scientific research.

It will be very interesting to see how this plays out.

by John Baez at September 16, 2016 01:00 AM

September 15, 2016

The n-Category Cafe

Disaster at Leicester

You’ve probably met mathematicians at the University of Leicester, or read their work, or attended their talks, or been to events they’ve organized. Their pure group includes at least four people working in categorical areas: Frank Neumann, Simona Paoli, Teimuraz Pirashvili and Andy Tonks.

Now this department is under severe threat. A colleague of mine writes:

24 members of the Department of Mathematics at the University of Leicester — the great majority of the members of the department — have been informed that their post is at risk of redundancy, and will have to reapply for their positions by the end of September. Only 18 of those applying will be re-appointed (and some of those have been changed to purely teaching positions).

It’s not only mathematics at stake. The university is apparently on a process of “institutional transformation”, involving:

the closure of departments, subject areas and courses, including the Vaughan Centre for Lifelong Learning and the university bookshop. Hundreds of academic, academic-related and support staff are to be made redundant, many of them compulsorily.

If you don’t like this, sign the petition objecting! You’ll see lots of familiar names already on the list (Tim Gowers, John Baez, Ross Street, …). As signatory David Pritchard wrote, “successful departments and universities are hard to build and easy to destroy.”

by leinster at September 15, 2016 03:56 PM

September 13, 2016

Jester - Resonaances

Next stop: tth
This was a summer of brutally dashed hopes for a quick discovery of the many fundamental particles that we were imagining. For the time being we need to focus on the ones that actually exist, such as the Higgs boson. In Run-1 of the LHC, the Higgs boson's existence and identity were firmly established, while its mass and basic properties were measured. The signal was observed with large significance in 4 different decay channels (γγ, ZZ*, WW*, ττ), and two different production modes (gluon fusion, vector-boson fusion) were isolated. Still, there remain many fine details to sort out. The realistic goals for Run-2 are to pinpoint the following Higgs processes:
  • (h→bb): Decays to b-quarks.
  • (Vh): Associated production with W or Z boson. 
  • (tth): Associated production with top quarks. 

It seems that the last objective may be achieved quicker than expected. The tth production process is very interesting theoretically, because its rate is proportional to the (square of the) Yukawa coupling between the Higgs boson and top quarks. Within the Standard Model, the value of this parameter is known to a good accuracy, as it is related to the mass of the top quark. But that relation can be  disrupted in models beyond the Standard Model, with the two-Higgs-doublet model and composite/little Higgs models serving as prominent examples. Thus, measurements of the top Yukawa coupling will provide a crucial piece of information about new physics.

In Run-1, a not-so-small signal of tth production was observed by the ATLAS and CMS collaborations in several channels. Assuming that the Higgs decays with the same branching fractions as in the Standard Model, the tth signal strength normalized to the Standard Model prediction was estimated as

At face value, strong evidence for tth production was obtained in Run-1! This fact was not advertised by the collaborations because the measurement is not clean, due to the large number of top quarks produced by other processes at the LHC. The tth signal is thus a small blip on top of a huge background, and it is not excluded that some unaccounted-for systematic errors are skewing the measurements. The collaborations thus preferred to play it safe, and wait for more data to be collected.

In Run-2, with 13 TeV collisions, the tth production cross section is 4 times larger than in Run-1, so the new data are coming at a fast pace. Both ATLAS and CMS presented their first Higgs results in early August, and the tth signal is only getting stronger. ATLAS showed their measurements in the γγ, WW/ττ, and bb final states of Higgs decay, as well as their combination:
Most channels display a signal-like excess, which is reflected in the Run-2 combination being 2.5 sigma away from zero. A similar picture is emerging in CMS, with 2-sigma signals in the γγ and WW/ττ channels. Naively combining all Run-1 and Run-2 results, one then finds
At face value, this is a discovery! Of course, this number should be treated with some caution because, due to large systematic errors, a naive Gaussian combination may not represent very well the true likelihood. Nevertheless, it indicates that, if all goes well, the discovery of the tth production mode should be officially announced in the near future, maybe even this year.
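The "naive Gaussian combination" mentioned above is just an inverse-variance weighted average of the per-channel signal strengths. A minimal sketch follows; note that the channel numbers used here are purely illustrative placeholders (the actual ATLAS/CMS values appear in figures not reproduced in this text).

```python
# Naive Gaussian combination of independent signal-strength measurements
# mu_i +/- sigma_i: an inverse-variance weighted average.
# The channel values below are ILLUSTRATIVE ONLY, not real ATLAS/CMS results.

def combine(measurements):
    """Return (mu, sigma) for the inverse-variance weighted mean of
    (value, uncertainty) pairs, assuming independent Gaussian errors."""
    weights = [1.0 / s**2 for _, s in measurements]
    total = sum(weights)
    mu = sum(w * m for w, (m, _) in zip(weights, measurements)) / total
    return mu, total ** -0.5

# hypothetical per-channel tth signal strengths (mu_i, sigma_i)
channels = [(2.0, 0.8), (1.5, 0.7), (2.5, 1.0)]
mu, sigma = combine(channels)
significance = mu / sigma  # naive number of sigmas away from mu = 0
```

As the post cautions, this treatment assumes Gaussian, uncorrelated errors; with large correlated systematics the true likelihood can differ substantially from this weighted average.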

Should we get excited that the measured tth rate is significantly larger than the Standard Model one? Assuming that the current central value remains, it would mean that the top Yukawa coupling is 40% larger than predicted by the Standard Model. This is not impossible, but very unlikely in practice. The reason is that the top Yukawa coupling also controls gluon fusion, the main Higgs production channel at the LHC, whose rate is measured to be in perfect agreement with the Standard Model. Therefore, a realistic model that explains the large tth rate would also have to provide negative contributions to the gluon fusion amplitude, so as to cancel the effect of the large top Yukawa coupling. It is possible to engineer such a cancellation in concrete models, but I'm not aware of any construction where this conspiracy arises in a natural way. Most likely, the currently observed excess is a statistical fluctuation (possibly in combination with underestimated theoretical and/or experimental errors), and the central value will drift toward μ=1 as more data are collected.

by Jester at September 13, 2016 07:26 PM

Jester - Resonaances

Weekend Plot: update on WIMPs
There's been a lot of discussion on this blog about the LHC not finding new physics. I should, however, do justice to other experiments that also don't find new physics, often in a spectacular way. One area where this is happening is direct detection of WIMP dark matter. This weekend plot summarizes the current limits on the spin-independent scattering cross-section of dark matter particles on nucleons:
For large WIMP masses, currently the most successful detection technology is to fill a tank with a ton of liquid xenon and wait for a passing dark matter particle to knock one of the nuclei. Recently, we have had updates from two such experiments: LUX in the US, and PandaX in China, whose limits now cut below zeptobarn cross sections (1 zb = 10^-9 pb = 10^-45 cm^2). These two experiments are currently going head-to-head, but PandaX, being larger, will ultimately overtake LUX. Soon, however, it'll have to face a fierce new competitor: the XENON1T experiment, and the plot will have to be updated next year. Fortunately, we won't need to learn another prefix soon. Once yoctobarn sensitivity is achieved by the experiments, we will hit the neutrino floor: the irreducible background from solar and atmospheric neutrinos (gray area at the bottom of the plot). This will make detecting a dark matter signal much more challenging, and will certainly slow down progress for WIMP masses larger than ~5 GeV. For lower masses, the distance to the floor remains large. Xenon detectors lose their steam there, and another technology is needed, like the germanium detectors of CDMS and CDEX, or the CaWO4 crystals of CRESST. On this front, too, important progress is expected soon.
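Since the paragraph above leans on prefix arithmetic (zepto, yocto, and the barn itself), here is a small bookkeeping sketch; it assumes nothing beyond the standard conversion factors, and the helper name `barn_to_cm2` is my own.

```python
# Cross-section unit bookkeeping: 1 barn = 1e-24 cm^2, with the metric
# prefixes stepping down from pico (1e-12) to yocto (1e-24).

BARN_CM2 = 1e-24
PREFIX = {"": 1.0, "p": 1e-12, "f": 1e-15, "a": 1e-18, "z": 1e-21, "y": 1e-24}

def barn_to_cm2(value, prefix=""):
    """Convert a (prefixed) barn cross section to cm^2."""
    return value * PREFIX[prefix] * BARN_CM2

# The identity quoted in the text: 1 zb = 1e-9 pb = 1e-45 cm^2
zb = barn_to_cm2(1.0, "z")          # zeptobarn route
pb_equiv = barn_to_cm2(1e-9, "p")   # picobarn route, same cross section
```

The same helper covers the next paragraph's numbers, e.g. an "order 10 fb" weak-scale cross section is 1e-38 cm^2.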

What does the theory say about when we will find dark matter? It is perfectly viable that the discovery is waiting for us just around the corner in the remaining space above the neutrino floor, but currently there are no strong theoretical hints in favor of that possibility. Usually, dark matter experiments advertise that they're just beginning to explore the interesting parameter space predicted by theory models. This is not quite correct. If the WIMP were true to its name, that is to say if it interacted via the weak force (meaning, coupled to the Z with order 1 strength), it would have an order 10 fb scattering cross section on neutrons. Unfortunately, that natural possibility was excluded in the previous century. Years of experimental progress have shown that WIMPs, if they exist, must interact super-weakly with matter. For example, for a 100 GeV fermionic dark matter particle with vector coupling g to the Z boson, the current limits imply g ≲ 10^-4. The coupling can be larger if the Higgs boson is the mediator of interactions between the dark and visible worlds, as the Higgs already couples very weakly to nucleons. This construction is, arguably, the most plausible one currently probed by direct detection experiments. For a scalar dark matter particle X with mass 0.1-1 TeV coupled to the Higgs via the interaction λ v h |X|^2, the experiments are currently probing couplings λ in the 0.01-1 ballpark. In general, there is no theoretical lower limit on the dark matter coupling to nucleons. Nevertheless, the weak coupling implied by direct detection limits creates some tension with the thermal production paradigm, which requires a weak (that is, order picobarn) annihilation cross section for dark matter particles. This tension needs to be resolved by more complicated model building, e.g. by arranging for resonant annihilation or for co-annihilation.

by Jester at September 13, 2016 07:24 PM

Symmetrybreaking - Fermilab/SLAC

The hunt for the truest north

Many theories predict the existence of magnetic monopoles, but experiments have yet to see them.

If you chop a magnet in half, you end up with two smaller magnets. Both the original and the new magnets have “north” and “south” poles. 

But what if single north and south poles exist, just like positive and negative electric charges? These hypothetical beasts, known as “magnetic monopoles,” are an important prediction in several theories. 

Like an electron, a magnetic monopole would be a fundamental particle. Nobody has seen one yet, but many—maybe even most—physicists would say monopoles probably exist.

“The electric and magnetic forces are exactly the same force,” says Wendy Taylor of Canada’s York University. “Everything would be totally symmetric if there existed a magnetic monopole. There is a strong motivation by the beauty of the symmetry to expect that this particle exists.”


Illustration by Sandbox Studio, Chicago with Corinne Mucha

Dirac to the future

Combining the work of many others, nineteenth-century physicist James Clerk Maxwell showed that electricity and magnetism were two aspects of a single thing: the electromagnetic interaction. 

But in Maxwell’s equations, the electric and magnetic forces weren’t quite the same. The electrical force had individual positive and negative charges. The magnetic force didn’t. Without single poles—monopoles—Maxwell’s theory looked asymmetrical, which bugged him. Maxwell thought and wrote a lot about the problem of the missing magnetic charge, but he left it out of the final version of his equations.

Quantum pioneer Paul Dirac picked up the monopole mantle in the early 20th century. By Dirac’s time, physicists had discovered electrons and determined they were indivisible particles, carrying a fundamental unit of electric charge. 

Dirac calculated the behavior of an electron in the magnetic field of a monopole. He used the rules of quantum physics, which say an electron or any particle also behaves like a wave. For an electron sitting near another particle—including a monopole—those rules say the electron’s wave must go through one or more full cycles wrapping around the other particle. In other words, the wave must have at least one crest and one trough: no half crests or quarter-troughs.

For an electron in the presence of a proton, this quantum wave rule explains the colors of light emitted and absorbed by a hydrogen atom, which is made of one electron and one proton. But Dirac found the electron could only have the right wave behavior if the product of the monopole magnetic charge and the fundamental electric charge carried by an electron were a whole number. That means monopoles, like electrons, carry a fundamental, indivisible charge. Any other particle carrying the fundamental electric charge—protons, positrons, muons, and so forth—will follow the same rule.

Interestingly, the logic runs the other way too. Dirac’s result says if a single type of monopole exists, even if that type is very rare, it explains a very important property of matter: why electrically charged particles carry multiples of the fundamental electric charge. (Quarks carry a fraction—one-third or two-thirds—of the fundamental charge, but they always combine to make whole-number multiples of the same charge.) And if more than one type of monopole exists, it must carry a whole-number multiple of the fundamental magnetic charge.
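The whole-number condition described above has a compact textbook form; as a hedged sketch (in Gaussian units, not stated explicitly in the article), Dirac's quantization condition reads:

```latex
% Dirac quantization condition (Gaussian units): the product of the
% fundamental electric charge e and any magnetic charge g is constrained
% to half-integer multiples of \hbar c,
\frac{e\,g}{\hbar c} = \frac{n}{2}, \qquad n \in \mathbb{Z},
% so the smallest allowed magnetic charge is
g_{\mathrm{D}} = \frac{\hbar c}{2e} = \frac{e}{2\alpha} \approx 68.5\, e .
```

The last line, with α the fine-structure constant, is why a monopole's magnetic charge would be so much "stronger" than an electron's electric charge.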


Illustration by Sandbox Studio, Chicago with Corinne Mucha

The magnetic unicorn

Dirac’s discovery was really a plausibility argument: If monopoles existed, they would explain a lot, but nothing would crumble if they didn’t. 

Since Dirac’s day, many theories have made predictions about the properties of magnetic monopoles. Grand unified theories predict monopoles that would be over 10 quadrillion times more massive than protons. 

Producing such particles would require more energy than Earthly accelerators can reach, “but it’s the energy that was certainly available at the beginning of the universe,” says Laura Patrizii of the Italian National Institute for Nuclear Physics. 

Cosmic ray detectors around the world are looking for signs of these monopoles, which would still be around today, interacting with molecules in the air. The MACRO experiment at Gran Sasso in Italy also looked for primordial monopoles, and provided the best constraints we have at present. 

Luckily for scientists like Patrizii and Taylor, grand unified theories aren’t the only ones to predict monopoles. Other theories predict magnetic monopoles of lower masses that could feasibly be created in the Large Hadron Collider, and of course Dirac’s original model didn’t place any mass constraints on monopoles at all. That means physicists have to be open to discovering particles that aren’t part of any existing theory. 

Both of them look for monopoles created at the Large Hadron Collider, Patrizii using the MoEDAL detector and Taylor using ATLAS.

“I think personally there's lots of reasons to believe that monopoles are out there, and we just have to keep looking,” Taylor says. 

“Magnetic monopoles are probably my favorite particle. If we discovered the magnetic monopole, [the discovery would be] on the same scale as the Higgs particle.”

by Matthew R. Francis at September 13, 2016 04:31 PM

The n-Category Cafe

HoTT and Philosophy

I’m down in Bristol at a conference – HoTT and Philosophy. Slides for my talk – The modality of physical law in modal homotopy type theory – are here.

Perhaps ‘The modality of differential equations’ would have been more accurate as I’m looking to work through an analogy in modal type theory between necessity and the jet comonad, partial differential equations being the latter’s coalgebras.

The talk should provide some intuition for a pair of talks the following day:

  • Urs Schreiber & Felix Wellen: ‘Formalizing higher Cartan geometry in modal HoTT’
  • Felix Wellen: ‘Synthetic differential geometry in homotopy type theory via a modal operator’

I met up with Urs and Felix yesterday evening. Felix is coding up in Agda geometric constructions, such as frame bundles, using the modalities of differential cohesion.

by david at September 13, 2016 07:05 AM

The n-Category Cafe


I’m now trying to announce all my new writings in one place: on Twitter.

Why? Well…

Someone I respect said he’s been following my online writings, off and on, ever since the old days of This Week’s Finds. He wishes it were easier to find my new stuff all in one place. Right now it’s spread out over several locations:

Azimuth: serious posts on environmental issues and applied mathematics, fairly serious popularizations of diverse scientific subjects.

Google+: short posts of all kinds, mainly light popularizations of math, physics, and astronomy.

The n-Category Café: posts on mathematics, leaning toward category theory and other forms of pure mathematics that seem too intimidating for the above forums.

Visual Insight: beautiful pictures of mathematical objects, together with explanations.

Diary: more personal stuff, and polished versions of the more interesting Google+ posts, just so I have them on my own website.

It’s absurd to expect anyone to look at all these locations to see what I’m writing. Even more absurdly, I claimed I was going to quit posting on Google+, but then didn’t. So, I’ll try to make it possible to reach everything via Twitter.

by john at September 13, 2016 06:12 AM

September 12, 2016

Tommaso Dorigo - Scientificblogging

INFN Selections - A Last Batch Of Advices
Next Monday, the Italian city of Rome will swarm with about 700 young physicists. They will be there to participate in a selection of 58 INFN research scientists. In previous articles (see e.g.

read more

by Tommaso Dorigo at September 12, 2016 02:49 PM

September 11, 2016

Tommaso Dorigo - Scientificblogging

Statistics At A Physics Conference ?!
Particle physics conferences are a place where you can listen to many different topics - not just news about the latest precision tests of the standard model or searches for new particles at the energy frontier. If we exclude the very small, workshop-like events where people gather to focus on a very precise topic, all other events do allow for the contamination from reports of parallel fields of research. The reason is of course that there is a significant cross-fertilization between these fields. 

read more

by Tommaso Dorigo at September 11, 2016 01:44 PM

September 09, 2016

Symmetrybreaking - Fermilab/SLAC

A tale of two black holes

What can the surprisingly huge mass of the black holes detected by LIGO tell us about dark matter and the early universe?

The historic detection of gravitational waves announced earlier this year breathed new life into a theory that’s been around for decades: that black holes created in the first second of the universe might make up dark matter. It also inspired a new idea: that those so-called primordial black holes could be contributing to a diffuse background light.

The connection between these seemingly disparate areas of astronomy was tied together neatly in a theory from Alexander Kashlinsky, an astrophysicist at NASA’s Goddard Space Flight Center. And while it’s an unusual idea, as he says, it could be proven true in only a few years.

Mapping the glow

Kashlinsky’s focus has been on a residual infrared glow in the universe, the accumulated light of the earliest stars. Unfortunately, all the stars, galaxies and other bright objects in the sky—the known sources of light—oversaturate this diffuse glow. That means that Kashlinsky and his colleagues have to subtract them out of infrared images to find the light that’s left behind.

They’ve been doing precisely that since 2005, using data from the Spitzer space telescope to arrive at the residual infrared glow: the cosmic infrared background (CIB).

Other astronomers followed a similar process using Chandra X-ray Observatory data to map the cosmic X-ray background (CXB), the diffuse glow of hotter cosmic material and more energetic sources.

In 2013, Kashlinsky and colleagues compared the CIB and CXB and found correlations between the patchy patterns in the two datasets, indicating that something is contributing to both types of background light. So what might be the culprit for both types of light?

“The only sources that could be coherent across this wide range of wavelengths are black holes,” he says.

To explain the correlation they found, roughly 1 in 5 of the sources had to be black holes that lived in the first few hundred million years of our universe. But that ratio is oddly large.

“For comparison,” Kashlinsky says, “in the present populations, we have 1 in 1000 of the emitting sources that are black holes. At the peak of star formation, it’s 1 in 100.”

He wasn’t sure how the universe could have ever had enough black holes to produce the patterns his team saw in the CIB and CXB. Then the Laser Interferometer Gravitational-wave Observatory (LIGO) discovered a pair of strange beasts: two roughly-30-solar-mass black holes merging and emitting gravitational waves.

A few months later, Kashlinsky saw a study led by Simeon Bird analyzing the possibility that the black holes LIGO had detected were primordial—formed in the universe’s first second. “And it just all came together,” Kashlinsky says.

Gravitational secrets

The crucial ripples in space-time picked up by the LIGO detector on September 14, 2015, came from the last dance of two black holes orbiting each other and colliding. One black hole was 36 times the sun’s mass, the other 29 times. Those black-hole weights aren’t easy to make.

The majority of the universe’s black holes are less than about 15 solar masses and form as massive stars collapse at the end of their lives. A black hole weighing 30 solar masses would have to start from a star closer to 100 times our sun’s mass—and nature seems to have a hard time making stars that enormous. To compound the strangeness of the situation, the LIGO detection is from a pair of those black holes. Scientists weren’t expecting such a system, but the universe has a tendency to surprise us.

Bird and his colleagues from Johns Hopkins University next looked at the possibility that those black holes formed not from massive stars but instead during the universe’s first fractions of a second. Astronomers haven’t yet seen what the cosmos looked like at that time, so they have to rely on theoretical models.

In all of these models, the early universe exists with density variations. If there were regions of very high-contrasting density, those could have collapsed into black holes in the universe’s first second. If those black holes were at least as heavy as mountains when they formed, they’d stick around until today, dark and seemingly invisible and acting through the gravitational force. And because these primordial black holes formed from density perturbations, they wouldn’t be composed of protons and neutrons, the particles that make up you, me, stars and, thus, the material that leads to normal black holes.
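That "at least as heavy as mountains" threshold comes from Hawking evaporation: lighter black holes radiate themselves away before the present day. The survival condition can be sketched with the leading-order lifetime formula t ≈ 5120πG²M³/(ħc⁴); the sample masses below are illustrative, and the formula ignores corrections from the number of emitted particle species, which shift the threshold by a factor of a few.

```python
import math

# Physical constants (SI units)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
HBAR = 1.055e-34     # reduced Planck constant, J s
C = 2.998e8          # speed of light, m/s
YEAR = 3.156e7       # seconds per year
AGE_OF_UNIVERSE = 1.38e10 * YEAR  # ~13.8 billion years, in seconds

def evaporation_time(mass_kg):
    """Leading-order Hawking evaporation time of a black hole:
    t = 5120 * pi * G^2 * M^3 / (hbar * c^4)."""
    return 5120 * math.pi * G**2 * mass_kg**3 / (HBAR * C**4)

# A "mountain mass" is roughly 1e11-1e12 kg; the survival cutoff
# sits in that range, so heavier primordial black holes persist today.
for mass in (1e9, 1e11, 1e12):  # kg
    t = evaporation_time(mass)
    print(f"M = {mass:.0e} kg: t_evap ~ {t / YEAR:.1e} yr, "
          f"survives today: {t > AGE_OF_UNIVERSE}")
```

With this prefactor the crossover mass comes out near 2 × 10¹¹ kg, consistent with the article's statement that mountain-mass holes formed in the first second would still be around.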

All of those characteristics make primordial black holes a tempting candidate for the universe’s mysterious dark matter, which we believe makes up some 25 percent of the universe and reveals itself only through the gravitational force. This possible connection has been around since the 1970s, and astronomers have looked for hints of primordial black holes since. Even though they’ve slowly narrowed down the possibilities, there are a few remaining hiding spots—including the region where the black holes that LIGO detected fall, between about 20 and 1000 solar masses.

Astronomers have been looking for explanations of what dark matter is for decades. The leading theory is that it’s a new type of particle, but searches keep coming up empty. On the other hand, we know black holes exist; they stem naturally from the theory of gravity.

“They’re an aesthetically pleasing candidate because they don’t need any new physics,” Bird says.

A glowing contribution

Kashlinsky’s newest analysis took the idea of primordial black holes the size that LIGO detected and looked at what that population would do to the diffuse infrared light of the universe. He evolved a model of the early universe, looking at how the first black holes would congregate and grow into clumps. These black holes matched the residual glow of the CIB and, he found, “would be just right to explain the patchiness of infrared background by sources that we measured in the first couple hundred million years of the universe.”

This theory fits nicely together, but it’s just one analysis of one possible model that came out of an observation of one astrophysical system. Researchers need several more pieces of evidence to say whether primordial black holes are in fact the dark matter. The good news is LIGO will soon begin another observing run that will be able to see black hole collisions even farther away from Earth and thus further back in time. The European gravitational wave observatory VIRGO will also come online in January, providing more data and working in tandem with LIGO.

More cases of gravitational waves from black holes around this 30-solar-masses range could add evidence that there is a population of primordial black holes. Bird and his colleague Ilias Cholis suggest looking for a more unique signal, though, in future gravitational-wave data. For two primordial black holes to become locked in a binary system and merge, they would likely be gravitationally captured during a glancing interaction, which could result in a signal with multiple frequencies or tones at any one moment.

“This is a rare event, but it would be very characteristic of our scenario,” Cholis says. “In the next 5 to 10 years, we might see one.”

This smoking-gun signature, as they call it, would be a strong piece of evidence that primordial black holes exist. And if such objects are floating around our universe, it might not be such a stretch to connect them to dark matter.

Editor’s note: Theorists Sébastien Clesse and Juan García-Bellido predicted the existence of massive, merging primordial black holes in a paper published on the arXiv on January 29, 2015, more than seven months before the signal of two such giants reached the LIGO detector. In the paper, they claimed that primordial black holes could have been the seeds of galaxies and constitute all of the dark matter in the universe.

by Liz Kruesi at September 09, 2016 04:28 PM

September 08, 2016

Sean Carroll - Preposterous Universe

Consciousness and Downward Causation

For many people, the phenomenon of consciousness is the best evidence we have that there must be something important missing in our basic physical description of the world. According to this worry, a bunch of atoms and particles, mindlessly obeying the laws of physics, can’t actually experience the way a conscious creature does. There’s no such thing as “what it is to be like” a collection of purely physical atoms; it would lack qualia, the irreducibly subjective components of our experience of the world. One argument for this conclusion is that we can conceive of collections of atoms that behave physically in exactly the same way as ordinary humans, but don’t have those inner experiences — philosophical zombies. (If you think about it carefully, I would claim, you would realize that zombies are harder to conceive of than you might originally have guessed — but that’s an argument for another time.)

The folks who find this line of reasoning compelling are not necessarily traditional Cartesian dualists who think that there is an immaterial soul distinct from the body. On the contrary, they often appreciate the arguments against “substance dualism,” and have a high degree of respect for the laws of physics (which don’t seem to need or provide evidence for any non-physical influences on our atoms). But still, they insist, there’s no way to just throw a bunch of mindless physical matter together and expect it to experience true consciousness.

People who want to dance this tricky two-step — respect for the laws of physics, but an insistence that consciousness can’t reduce to the physical — are forced to face up to a certain problem, which we might call the causal box argument. It goes like this. (Feel free to replace “physical particles” with “quantum fields” if you want to be fastidious.)

  1. Consciousness cannot be accounted for by physical particles obeying mindless equations.
  2. Human beings seem to be made up — even if not exclusively — of physical particles.
  3. To the best of our knowledge, those particles obey mindless equations, without exception.
  4. Therefore, consciousness does not exist.

Nobody actually believes this argument, let us hasten to add — they typically just deny one of the premises.

But there is a tiny sliver of wiggle room that might allow us to salvage something special about consciousness without giving up on the laws of physics — the concept of downward causation. Here we’re invoking the idea that there are different levels at which we can describe reality, as I discussed in The Big Picture at great length. We say that “higher” (more coarse-grained) levels are emergent, but that word means different things to different people. So-called “weak” emergence just says the obvious thing, that higher-level notions like the fluidity or solidity of a material substance emerge out of the properties of its microscopic constituents. In principle, if not in practice, the microscopic description is absolutely complete and comprehensive. A “strong” form of emergence would suggest that something truly new comes into being at the higher levels, something that just isn’t there in the microscopic description.

Downward causation is one manifestation of this strong-emergentist attitude. It’s the idea that what happens at lower levels can be directly influenced (causally acted upon) by what is happening at the higher levels. The idea, in other words, that you can’t really understand the microscopic behavior without knowing something about the macroscopic.

There is no reason to think that anything like downward causation really happens in the world, at least not down to the level of particles and forces. While I was writing The Big Picture, I grumbled on Twitter about how people kept talking about it but how I didn’t want to discuss it in the book; naturally, I was hectored into writing something about it.

But you can see why the concept of downward causation might be attractive to someone who doesn’t think that consciousness can be accounted for by the fields and equations of the Core Theory. Sure, the idea would be, maybe electrons and nuclei act according to the laws of physics, but those laws need to include feedback from higher levels onto that microscopic behavior — including whether or not those particles are part of a conscious creature. In that way, consciousness can play a decisive, causal role in the universe, without actually violating any physical laws.

One person who thinks that way is John Searle, the extremely distinguished philosopher from Berkeley (and originator of the Chinese Room argument). I recently received an email from Henrik Røed Sherling, who took a class with Searle and came across this very issue. He sent me this email, which he was kind enough to allow me to reproduce here:

Hi Professor Carroll,

I read your book and was at the same time awestruck and angered, because I thought your entire section on the mind was both well-written and awfully wrong — until I started thinking about it, that is. Now I genuinely don’t know what to think anymore, but I’m trying to work through it by writing a paper on the topic.

I took Philosophy of Mind with John Searle last semester at UC Berkeley. He convinced me of a lot of ideas of which your book has now disabused me. But despite your occasionally effective jabs at Searle, you never explicitly refute his own theory of the mind, Biological Naturalism. I want to do that, using an argument from your book, but I first need to make sure that I properly understand it.

Searle says this of consciousness: it is caused by neuronal processes and realized in neuronal systems, but is not ontologically reducible to these; consciousness is not just a word we have for something else that is more fundamental. He uses the following analogy to visualize his description: consciousness is to the mind like fluidity is to water. It’s a higher-level feature caused by lower-level features and realized in a system of said lower-level features. Of course, for his version of consciousness to escape the charge of epiphenomenalism, he needs the higher-level feature in this analogy to act causally on the lower-level features — he needs downward causation. In typical fashion he says that “no one in their right mind” can say that solidity does not act causally when a hammer strikes a nail, but it appears to me that this is what you are saying.

So to my questions. Is it right to say that your argument against the existence of downward causation boils down to the incompatible vocabularies of lower-level and higher-level theories? I.e. that there is no such thing as a gluon in Fluid Dynamics, nor anything such as a fluid in the Standard Model, so a cause in one theory cannot have an effect in the other simply because causes and effects are different things in the different theories; gluons don’t affect fluidity, temperatures and pressures do; fluids don’t affect gluons, quarks and fields do. If I have understood you right, then there couldn’t be any upward causation either. In which case Searle’s theory is not only epiphenomenal, it’s plain inaccurate from the get-go; he wants consciousness to both be a higher-level feature of neuronal processes and to be caused by them. Did I get this right?

Best regards,
Henrik Røed Sherling

Here was my reply:

Dear Henrik–

Thanks for writing. Genuinely not knowing what to think is always an acceptable stance!

I think your summary of my views is pretty accurate. As I say on p. 375, poetic naturalists tend not to be impressed by downward causation, but not by upward causation either! At least, not if your theory of each individual level is complete and consistent.

Part of the issue is, as often happens, an inconsistent use of a natural-language word, in this case “cause.” The kinds of dynamical, explain-this-occurrence causes that we’re talking about here are a different beast than inter-level implications (that one might be tempted to sloppily refer to as “causes”). Features of a lower level, like conservation of energy, can certainly imply or entail features of higher-level descriptions; and indeed the converse is also possible. But saying that such implications are “causes” is to mean something completely different than when we say “swinging my elbow caused the glass of wine to fall to the floor.”

So, I like to think I’m in my right mind, and I’m happy to admit that solidity acts causally when a hammer strikes a nail. But I don’t describe that nail as a collection of particles obeying the Core Theory *and* additionally as a solid object that a hammer can hit; we should use one language or the other. At the level of elementary particles, there’s no such concept as “solidity,” and it doesn’t act causally.

To be perfectly careful — all this is how we currently see things according to modern physics. An electron responds to the other fields precisely at its location, in quantitatively well-understood ways that make no reference to whether it’s in a nail, in a brain, or in interstellar space. We can of course imagine that this understanding is wrong, and that future investigations will reveal the electron really does care about those things. That would be the greatest discovery in physics since quantum mechanics itself, perhaps of all time; but I’m not holding my breath.

I really do think that enormous confusion is caused in many areas — not just consciousness, but free will and even more purely physical phenomena — by the simple mistake of starting sentences in one language or layer of description (“I thought about summoning up the will power to resist that extra slice of pizza…”) but then ending them in a completely different vocabulary (“… but my atoms obeyed the laws of the Standard Model, so what could I do?”) The dynamical rules of the Core Theory aren’t just vague suggestions; they are absolutely precise statements about how the quantum fields making up you and me behave under any circumstances (within the “everyday life” domain of validity). And those rules say that the behavior of, say, an electron is determined by the local values of other quantum fields at the position of the electron — and by nothing else. (That’s “locality” or “microcausality” in quantum field theory.) In particular, as long as the quantum fields at the precise position of the electron are the same, the larger context in which it is embedded is utterly irrelevant.

It’s possible that the real world is different, and there is such inter-level feedback. That’s an experimentally testable question! As I mentioned to Henrik, it would be the greatest scientific discovery of our lifetimes. And there’s basically no evidence that it’s true. But it’s possible.

So I don’t think downward causation is of any help to attempts to free the phenomenon of consciousness from arising in a completely conventional way from the collective behavior of microscopic physical constituents of matter. We’re allowed to talk about consciousness as a real, causally efficacious phenomenon — as long as we stick to the appropriate human-scale level of description. But electrons get along just fine without it.

by Sean Carroll at September 08, 2016 05:01 PM

September 06, 2016

Symmetrybreaking - Fermilab/SLAC

Turning on the cosmic microphone

A new tool lets astronomers listen to the universe for the first time.

When Galileo first turned the telescope toward the sky in the 1600s, astronomers gained the ability to view parts of the universe that were invisible to the naked eye. This led to centuries of discovery—as telescopes advanced, they exposed new planets, galaxies and even a glimpse of the very early universe.

Last September, scientists gained yet another invaluable tool: the ability to hear the cosmos through gravitational waves.

Illustration by Sandbox Studio, Chicago with Lexi Fodor

Ripples in space-time

Newton described gravity as a force. Thinking about gravity this way can explain most of the phenomena that happen here on Earth. For example, the force of gravity acting on an apple makes it fall from a tree onto an unsuspecting person sitting below it. However, to understand gravity on a cosmic scale, we need to turn to Einstein, who described gravity as the bending of space-time itself.

Some physicists describe this process using a bowling ball and a blanket. Imagine space-time as a blanket. A bowling ball placed at the center of the blanket bends the fabric around it. The heavier an object is, the further it sinks. As you move the ball along the fabric, it produces ripples, much like a boat travelling through water.

“The curvature is what makes the Earth orbit the sun—the sun is a bowling ball in a fabric and it's that bending in the fabric that makes the Earth go around,” explains Gabriela González, the spokesperson for the Laser Interferometer Gravitational-Wave Observatory (LIGO) collaboration. 

Everything with mass—planets, stars and people—pulls on the fabric of space-time and produces gravitational waves as it moves through space. These waves are passing through us all the time, but they are much too weak to detect.

To find these elusive signals, physicists built LIGO, twin observatories in Louisiana and Washington. At each L-shaped detector, a laser beam is split and sent down two four-kilometer arms. The beams reflect off the mirrors at each end and travel back to reunite. A passing gravitational wave slightly alters the relative lengths of the arms, shifting the relative phase of the recombined beams and creating an interference change that physicists can detect.
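The scale of that change is staggeringly small. A back-of-the-envelope sketch, using the standard order-of-magnitude estimate ΔL ~ h·L and a strain of about 1e-21 (roughly the peak of the first detected signal; these are illustrative round numbers, not LIGO's calibrated values):

```python
ARM_LENGTH = 4_000.0       # LIGO arm length in meters
PROTON_DIAMETER = 1.7e-15  # approximate proton diameter in meters

def arm_length_change(strain, arm_length=ARM_LENGTH):
    """Order-of-magnitude arm-length change for a passing wave of the
    given dimensionless strain: delta_L ~ h * L."""
    return strain * arm_length

h = 1e-21  # roughly the peak strain of the September 14, 2015 event
dL = arm_length_change(h)
# A few 1e-18 m: thousands of times smaller than a single proton.
print(f"delta_L ~ {dL:.1e} m, i.e. ~{dL / PROTON_DIAMETER:.4f} proton diameters")
```

That the interferometer resolves a length change far smaller than a proton is what made the detection such a technical feat.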

Unlike telescopes, which are pointed toward very specific parts of the sky, detectors like LIGO scan a much larger area of the universe and hear sources from all directions. “Gravitational waves detectors are like microphones,” says Laura Nuttall, a postdoctoral researcher at Syracuse University. 

Illustration by Sandbox Studio, Chicago with Lexi Fodor

First detections

On the morning of September 14, 2015, a gravitational wave from two black holes that collided 1.3 billion years ago passed through the two LIGO detectors, and an automatic alert system pinged LIGO scientists around the world. “It took us a good part of the day to convince ourselves that this was not a drill,” González says. 

Because LIGO was still preparing for an observing run—researchers were still running tests and diagnostics during the day—they needed to conduct a large number of checks and analyses to make sure the signal was real. 

Months later, once researchers had meticulously checked the data for errors or noise (such as lightning or earthquakes) the LIGO collaboration announced to the world that they had finally reached a long-anticipated goal: Almost 100 years after Einstein first predicted their existence, scientists had detected gravitational waves. 

A few months after the first signal arrived, LIGO detected yet another black hole collision. “Finding a second one proves that there's a population of sources that will produce detectable gravitational waves,” Nuttall says. “We are actually an observatory now.”

Illustration by Sandbox Studio, Chicago with Lexi Fodor

Cosmic microphones

Many have dubbed the detection of gravitational waves the dawn of the age of gravitational wave astronomy. Scientists expect to see hundreds, maybe even thousands, of these binary black holes in the years to come. Gravitational-wave detectors will also allow astronomers to look much more closely at other astronomical phenomena, such as neutron stars, supernovae and even the Big Bang.

One important next step is to detect the optical counterparts—such as light from the surrounding matter or gamma ray bursts—of the sources of gravitational waves. To do this, astronomers need to point their telescopes to the area of the sky where the gravitational waves came from to find any detectable light. 

Currently, this feat is like finding a needle in a haystack. Because the field of view of gravitational wave detectors is much, much larger than telescopes, it is extremely difficult to connect the two. “Connecting gravitational waves with light for the first time will be such an important discovery that it's definitely worth the effort,” says Edo Berger, an astronomy professor at Harvard University.

LIGO is also one of several gravitational wave observatories. Other ground-based observatories, such as Virgo in Italy, KAGRA in Japan and the future LIGO India have similar sensitivities to LIGO. There are also other approaches that scientists are using—and plan to use in the future—to detect gravitational waves at completely different frequencies. 

The evolved Laser Interferometer Space Antenna (eLISA), for example, is a gravitational wave detector that physicists plan to build in space. Once complete, eLISA will be composed of three spacecraft that are over a million kilometers apart, making it sensitive to much lower gravitational wave frequencies, where scientists expect to detect supermassive black holes.

Pulsar timing arrays offer a completely different method of detection. Pulsars are natural timekeepers, regularly emitting beams of electromagnetic radiation. Astronomers carefully measure the arrival times of the pulses to find discrepancies: when a gravitational wave passes by, space-time warps, changing the distance between us and the pulsar and causing the pulses to arrive slightly earlier or later. This method is sensitive to even lower frequencies than eLISA.
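The size of the timing shift can be sketched with the standard order-of-magnitude estimate that a wave of strain h and frequency f produces arrival-time residuals of roughly h/(2πf). The strain and frequency below are illustrative assumptions for the nanohertz band these searches target, not measured values:

```python
import math

def timing_residual_amplitude(strain, gw_frequency_hz):
    """Order-of-magnitude pulse arrival-time residual induced by a
    gravitational wave: delta_t ~ h / (2 * pi * f)."""
    return strain / (2 * math.pi * gw_frequency_hz)

# Illustrative numbers: a strain of 1e-15 at 10 nanohertz, the regime
# where supermassive black hole binaries are expected to radiate.
h = 1e-15
f = 1e-8  # Hz
dt = timing_residual_amplitude(h, f)
print(f"residual ~ {dt * 1e9:.0f} ns")  # tens of nanoseconds
```

Millisecond pulsars are stable enough that residuals of tens of nanoseconds, accumulated across many pulsars over years, are in principle measurable.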

These and many other observatories will reveal a new view of the universe, helping scientists to study phenomena such as merging black holes, to test theories of gravity and possibly even to discover something completely unexpected, says Daniel Holz, a professor of physics and astronomy at the University of Chicago. “Usually in science you're just pushing the boundaries a little bit, but in this case, we're opening up a whole new frontier.”

by Diana Kwon at September 06, 2016 02:04 PM

September 05, 2016

Tommaso Dorigo - Scientificblogging

Farewell, Gino
Gino Bolla was an Italian scientist and the head of the Silicon Detector Facility at Fermilab. And he was a friend and a colleague. He died yesterday in a home accident. Below I remember him by recalling some good times together. Read at your own risk. 

Dear Gino,

   news of your accident reach me as I am about to board a flight in Athens, headed back home after a conference in Greece. Like all unfiltered, free media, Facebook can be quite cruel as a means of delivering this kind of information, goddamnit.

read more

by Tommaso Dorigo at September 05, 2016 07:32 PM

September 04, 2016

Lubos Motl - string vacua and pheno

Serious neutrinoless double beta-decay experiment cools down
Data collection to begin in early 2017

The main topic of my term paper in a 1998 Rutgers Glennys Farrar course was the question "Are neutrinos Majorana or Dirac?". I found the neutrino oscillations more important, which is why I internalized that topic more deeply – although it was supposed to be reserved by a classmate of mine (and for some Canadian and Japanese guys who got a recent Nobel prize for the experiments). At any rate, the question I was assigned may be experimentally answered soon. Or not. (You may also want to see a similarly old term paper on the Milky Way at the galactic center.)

Neutrinos are spin-1/2 fermions. Their masses may arise just like the masses of electrons or positrons. In that case, we need a full Dirac spinor, two 2-component spinors, distinct particles and antiparticles (neutrinos and antineutrinos), and everything about the mass works just like in the case of the electrons and positrons. The Dirac mass terms are schematically
\[
{\mathcal L}_{\rm Dirac} = m\bar\Psi \Psi = m \epsilon^{AB} \eta_A \chi_B + {\rm h.c.}
\]
If neutrinos were Dirac particles in this sense, it would mean that right-handed neutrinos and left-handed antineutrinos do exist, after all – just like the observed left-handed neutrinos and right-handed antineutrinos. They would just be decoupled, i.e. hard to create.

However, the mass of the observed neutrinos may also arise from the Majorana mass terms that don't need to introduce any new 2-component spinors, as I will discuss in a minute. Surprisingly, these two different possibilities look indistinguishable – even if we study the neutrino oscillations.

Note that the conservation of the angular momentum, and therefore the helicity, guarantees that neutrinos or antineutrinos in the real world – which move almost by the speed of light – can't change from left-handed ones to right-handed ones or vice versa. So even in the Majorana case, the "neutrino or antineutrino" identity of the particle is "effectively" preserved because of the angular momentum conservation law, even when oscillations are taken into account.

CUORE is one of the experiments (see the picture at the top) in Gran Sasso, central Italy. This particular experiment tries to find a neutrinoless double beta-decay. For some technical reasons I don't really understand (but I mostly believe that there are good reasons), tellurium – namely the isotope \({}^{130}{\rm Te}\) – is being used as the original nucleus that is supposed to decay.

Oxygen only plays the role of turning the tellurium-130 into an oxide, \({\rm TeO}_2\), which is a crystal. When the tellurium nucleus decays, it heats up some matter around it whose resistance changes proportionally to the absorbed heat, and this change may be measured – a detector of this kind is known as a "bolometer".

If you look at the isotopes of the tellurium, you will learn that 38 isotopes (plus 17 nuclear isomers) of the element are known. Two of them, tellurium-128 and tellurium-130, decay by the double beta-decay. These two isotopes also happen to be the most widespread ones, accounting for 32% and 34% of "tellurium in Nature", respectively. Tellurium-130 decays some 3,000 times more quickly – the half-life is a bit below \(10^{21}\) years – which is why this isotope's decays are easier to see.
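As a hedged back-of-the-envelope check on why the faster decay matters, one can convert half-lives into expected decay counts. The half-lives and the ~200 kg of tellurium-130 below are illustrative round numbers based on the figures quoted here, not CUORE's exact payload:

```python
import math

# With half-life T, each nucleus decays at rate ln(2)/T, so N nuclei
# give about N*ln(2)/T decays per year.

N_A = 6.022e23  # Avogadro's number

def decays_per_year(mass_g, molar_mass_g, half_life_yr):
    n_nuclei = mass_g / molar_mass_g * N_A
    return n_nuclei * math.log(2) / half_life_yr

m = 2.0e5  # grams: ~200 kg of the decaying isotope, illustrative
print(f"Te-130 (T ~ 8e20 yr):       ~{decays_per_year(m, 130, 8e20):.0f} per year")
print(f"Te-128 (T ~ 3000x longer):  ~{decays_per_year(m, 128, 2.4e24):.0f} per year")
```

The factor of ~3,000 between the half-lives translates directly into hundreds of thousands versus hundreds of ordinary double beta-decays per year in the same mass of material.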

Well, what's easily seen is the decay through the "normal" double beta-decay: two electrons and two antineutrinos are being emitted, exactly twice the products in a normal beta-decay. The remaining nucleus is xenon-130.\[

{}^{130}{\rm Te} \to {}^{130}{\rm Xe} + e^- + e^- + \bar\nu_e + \bar\nu_e

\] Because the energy of the xenon is known, the two electrons and two antineutrinos divide the energy difference between the tellurium-130 and the xenon-130 nuclei. It's clear that the neutrinos devour a significant fraction of that energy.

However, if neutrinos are Majorana particles, the other 2-spinor doesn't really exist – or it doesn't exist in the low-energy effective theory. The mass terms for the neutrinos, including those that are responsible for the oscillations, are created from a single 2-component spinor, schematically\[

{\mathcal L}_{\rm mass} \sim m_{\rm Majorana} \epsilon^{AB} \eta_A \eta_B + {\rm h.c.}

\] I used some 2-component antisymmetric epsilon symbol for the 2-spinors. Note that none of the etas in the product has a bar or a star. So this mass term actually "creates a pair of neutrinos" or "creates a pair of antineutrinos" or "destroys a neutrino and creates an antineutrino instead" or vice versa. As a result, it violates the conservation of the lepton number by \(\Delta L=\pm 2\).
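To see why this contraction with the antisymmetric epsilon doesn't vanish, one can play with a toy Grassmann algebra: for commuting components the antisymmetric contraction is zero, while for anticommuting spinor components it survives. This is a sketch for intuition only, not a field-theory computation:

```python
# A minimal anticommuting (Grassmann) algebra for eps^{AB} x_A x_B.

def sort_with_sign(word):
    """Sort generator indices, flipping the sign once per transposition.
    Returns (sign, tuple); sign 0 if a generator repeats (eta*eta = 0)."""
    if len(set(word)) != len(word):
        return 0, ()
    w, sign = list(word), 1
    for i in range(len(w)):
        for j in range(len(w) - 1):
            if w[j] > w[j + 1]:
                w[j], w[j + 1] = w[j + 1], w[j]
                sign = -sign
    return sign, tuple(w)

eps = {(1, 2): 1, (2, 1): -1}          # the antisymmetric epsilon symbol

def contract(commuting):
    """eps^{AB} x_A x_B for commuting or anticommuting components x."""
    total = {}
    for (A, B), e in eps.items():
        sign, word = sort_with_sign((A, B))
        s = 1 if commuting else sign    # commuting numbers never flip sign
        total[word] = total.get(word, 0) + e * s
    return {k: v for k, v in total.items() if v}

print(contract(commuting=True))    # {} : the contraction vanishes
print(contract(commuting=False))   # {(1, 2): 2} : survives for Grassmann
```

With ordinary numbers, eps^{12}η₁η₂ + eps^{21}η₂η₁ cancels; with anticommuting η's the two terms add up to 2η₁η₂, which is exactly why the Majorana mass term is nonzero.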

See a question and an answer about the two types of the mass terms.

Because the lepton number is no longer conserved, the neutrinos may in principle be created and destroyed in pairs – without any "anti". This process may occur in the beta-decay as well. So if the neutrino masses arise from the Majorana term, a simplified reaction – the neutrinoless double beta-decay – must also be allowed:\[

{}^{130}{\rm Te} \to {}^{130}{\rm Xe} + e^- + e^-

\] You may obtain the Feynman diagrams simply by connecting the two lines of the antineutrinos, which used to be external lines, and turning these lines into an internal propagator of the Feynman diagram. This linking should be possible; you may always use the Majorana mass term as a "vertex" with two external legs. I think that if you know the mass of the neutrino and assume it's a Majorana mass, it should also be possible to calculate the branching ratio (or partial decay rate) of this neutrinoless double beta-decay.

And it's this process that CUORE will try to find. If the neutrinos are Majorana particles, the experiment should ultimately see this rare process. When they draw the graph of the total energies of 2 electrons in "all" kinds of the double beta-decay, there should be a small peak near the maximum value of the two electrons' energy – which is some \(2.5275\MeV\), if you want to know.
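For intuition, here is a toy simulation of what that graph could look like. The continuum shape, the resolution, and the relative rates are completely invented; only the 2.5275 MeV endpoint is taken from the text:

```python
import random

# Toy sketch of the summed two-electron energy spectrum near the endpoint:
# ordinary two-neutrino double beta-decay gives a broad continuum below Q
# (crudely modeled with a beta distribution), while a neutrinoless decay
# would add a narrow peak at Q itself.

Q = 2.5275  # MeV, the Te-130 -> Xe-130 energy release
random.seed(1)

# 10000 ordinary decays: the neutrinos steal a random share of the energy
events = [Q * random.betavariate(4, 3) for _ in range(10000)]
# plus a hypothetical handful of neutrinoless decays, smeared by resolution
events += [random.gauss(Q, 0.005) for _ in range(50)]

bins = [0] * 12                       # bins of width Q/10, two overflow bins
for e in events:
    bins[int(e / Q * 10)] += 1
for i, n in enumerate(bins):
    print(f"{i * Q / 10:.2f}-{(i + 1) * Q / 10:.2f} MeV: {n}")
```

The bin just at Q collects the hypothetical neutrinoless events sitting on top of the tail of the continuum – the small peak the experiment would hunt for.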

See the Symmetry Magazine for a rather fresh story. The cooling has begun. In 2 months, the temperature will be dragged below 10 millikelvins. The experiment will become the coldest cubic meter in the known Universe. In early 2017, CUORE will begin to take data.

As a high-energy formal top-down theorist, I think it's more likely that the known neutrino masses are indeed Majorana masses because nothing prohibits these masses. Neutrinos don't carry any conserved charges under known forces (analogous to the electric charge) – and probably any (including so far unknown) long-range forces. In grand unified theories etc., the seesaw mechanism (or other mechanisms producing interactions in effective theories) naturally produce Majorana masses for the known 2-component neutrino spinors only. The other two-spinors, the right-handed neutrinos, may exist but they may have huge, GUT-scale masses, so these particles "effectively" don't exist in doable experiments.

There can also be Dirac masses on top of these Majorana masses. But I think that the Majorana masses of the magnitude comparable to the known neutrino masses (well, we only know the differences between squared masses of neutrinos, from the oscillations) should exist, too. And that's why I expect the neutrinoless double beta-decay to exist, too.

However, I am not particularly certain about it. The experiment may very well show that the neutrino masses aren't Majorana – or most of them are Dirac masses. It doesn't really contradict any "universal law of physics" I am aware of. I can imagine that some unknown symmetries may basically ban the neutrino masses and make the Dirac masses involving the unknown components of the neutrinos mandatory.

Because a planned upgrade of CUORE is called CUPID (another one is ABSuRD), just like the Roman god of love, I embedded a Czech cover version, "Amor Magor" ("Moronic Amor"), of Stupid Cupid, the song recorded by Connie Francis in 1958 and Neil Sedaka in 1959 (see also an English cover from the 1990s). Ewa Farna sings about the stupid cupid who isn't able to hit her sweetheart's (or any other male) heart because he's as blind as a bullet. This Czech version was first recorded by the top singer Lucie Bílá in 1998.

Her experience and emotions are superior but I chose teenage Ewa Farna because she was cute and relatively innocent at that time which seems more appropriate to me for the lyrics. ;-) If you want to take this argument to the limit, you may prefer this excellent 10-year-old Kiki Petráková's cover. (It seems to be a favorite song of Czech girls. Decreasingly perfect covers by Nelly Řehořová, Kamila Nývltová, Vendula Příhodová, Tereza Drábková, Michaela Kulhavá, Karolína Repetná, Tereza Kollerová.)

P.S.: While searching for my 1998 term paper on the Internet, I could only find a 2013 paper by Alvarez-Gaume and Vazquez-Mozo that thanks me in a footnote for some insight I can no longer remember, at least not clearly. Maybe I remember things from 1998 and older more than I remember those from 2013. ;-)

Completely off-topic: TV Prima is now broadcasting a Czech edition of Fort Boyard, with Czech participants and Czech actors. It feels like a cute sign of prosperity if a bunch of Czechs rents a real French fortress. But I just learned that already 31 nations have filmed their Fort Boyard over there... At least, the Czech version is the only one called "Something Boyard" where "something" isn't the French word "fort". We call it a pevnost. This word for a fortress is derived from the adjective "pevný", i.e. firm/solid/robust/hard/tough/resilient/steady.

by Luboš Motl at September 04, 2016 06:52 AM

September 03, 2016

Jester - Resonaances

Plot for Weekend: new limits on neutrino masses
This weekend's plot shows the new limits on neutrino masses from the KamLAND-Zen experiment:

KamLAND-Zen is a group of buddhist monks studying a balloon filled with the xenon isotope Xe136. That isotope has a very long lifetime, of order 10^21 years, and undergoes the lepton-number-conserving double beta decay Xe136 → Ba136 + 2e- + 2νbar. What the monks hope to observe is the lepton-number-violating neutrinoless double beta decay Xe136 → Ba136 + 2e-, which would show up as a peak in the invariant mass distribution of the electron pairs near 2.5 MeV. No such signal has been observed, which sets the limit on the half-life for this decay at T > 1.1*10^26 years.

The neutrinoless decay is predicted to occur if neutrino masses are of Majorana type, and the rate can be characterized by the effective Majorana mass mββ (y-axis in the plot). That parameter is a function of the masses and mixing angles of the neutrinos. In particular, it depends on the mass of the lightest neutrino (x-axis in the plot), which is currently unknown. Neutrino oscillation experiments have precisely measured the mass^2 differences between neutrinos, which are roughly (0.05 eV)^2 and (0.01 eV)^2. But oscillations are not sensitive to the absolute mass scale; in particular, the lightest neutrino may well be massless for all we know. If the heaviest neutrino has a small electron flavor component, then we expect that the mββ parameter is below 0.01 eV. This so-called normal hierarchy case is shown as the red region in the plot, and is clearly out of experimental reach at the moment. On the other hand, in the inverted hierarchy scenario (green region in the plot), it is the two heaviest neutrinos that have a significant electron component. In this case, the effective Majorana mass mββ is around 0.05 eV. Finally, there is also the degenerate scenario (funnel region in the plot) where all 3 neutrinos have very similar masses with small splittings; however, this scenario is now strongly disfavored by cosmological limits on the sum of the neutrino masses (e.g. the Planck limit Σmν < 0.16 eV).
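A minimal sketch of how mββ = |Σᵢ U²ₑᵢ mᵢ| follows from the oscillation parameters, under the assumption of a massless lightest neutrino. The mixing angles and splittings are rounded global-fit values, and the unknown Majorana phases are scanned numerically:

```python
import cmath
import itertools
import math

# Effective Majorana mass for a massless lightest neutrino, scanning the
# two unknown Majorana phases on a grid. Rounded parameter values, used
# only to reproduce the ballpark numbers quoted in the text.

s12sq, s13sq = 0.31, 0.022           # sin^2 theta_12, sin^2 theta_13
dm21sq, dm31sq = 7.5e-5, 2.5e-3      # eV^2

def mbb_range(hierarchy):
    if hierarchy == "normal":        # m1 = 0
        m = (0.0, math.sqrt(dm21sq), math.sqrt(dm31sq))
    else:                            # inverted: m3 = 0
        m = (math.sqrt(dm31sq), math.sqrt(dm31sq + dm21sq), 0.0)
    weights = ((1 - s12sq) * (1 - s13sq), s12sq * (1 - s13sq), s13sq)
    phases = [x * 0.01 * math.pi for x in range(200)]   # 0 .. 2*pi
    vals = []
    for a, b in itertools.product(phases, repeat=2):
        z = (weights[0] * m[0]
             + weights[1] * m[1] * cmath.exp(1j * a)
             + weights[2] * m[2] * cmath.exp(1j * b))
        vals.append(abs(z))
    return min(vals), max(vals)

for h in ("normal", "inverted"):
    lo, hi = mbb_range(h)
    print(f"{h} hierarchy: m_bb in [{lo:.4f}, {hi:.4f}] eV")
```

The scan lands below 0.01 eV for the normal hierarchy and in the few × 0.01 eV range for the inverted one, matching the red and green bands described above.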

As can be seen in the plot, the results from KamLAND-Zen, when translated into limits on the effective Majorana mass, almost touch the inverted hierarchy region. The strength of this limit depends on some poorly known nuclear matrix elements (hence the width of the blue band). But even in the least favorable scenario, future, more sensitive experiments should be able to probe that region. Thus, there is hope that within the next few years we may prove the Majorana nature of neutrinos, or at least disfavor the inverted hierarchy scenario.

by Jester at September 03, 2016 12:20 PM

Jester - Resonaances

After the hangover
The loss of the 750 GeV diphoton resonance is a big blow to the particle physics community. We are currently going through the 5 stages of grief, everyone at their own pace, as can be seen e.g. in this comments section. Nevertheless, it may already be a good moment to revisit the story one last time, so as  to understand what went wrong.

In recent years, physics beyond the Standard Model has seen 2 other flops of comparable impact: the faster-than-light neutrinos in OPERA, and the CMB tensor fluctuations in BICEP. Much like the diphoton signal, both of the above triggered a binge of theoretical explanations, followed by a massive hangover. There was one big difference, however: the OPERA and BICEP signals were due to embarrassing errors on the experiments' side. This doesn't seem to be the case for the diphoton bump at the LHC. Some may wonder whether the Standard Model background may have been slightly underestimated, or whether one experiment may have been biased by the result of the other... But, most likely, the 750 GeV bump was just due to a random fluctuation of the background at this particular energy. Regrettably, the resulting mess cannot be blamed on experimentalists, who were in fact downplaying the anomaly in their official communications. This time it's the theorists who have some explaining to do.

Why did theorists write 500 papers about a statistical fluctuation? One reason is that it didn't look like one at first sight. Back in December 2015, the local significance of the diphoton bump in ATLAS run-2 data was 3.9 sigma, which means the probability of such a fluctuation was 1 in 10000. Combining available run-1 and run-2 diphoton data in ATLAS and CMS, the local significance was increased to 4.4 sigma. All in all, it was a very unusual excess, a 1-in-100000 occurrence! Of course, this number should be interpreted with care. The point is that the LHC experiments perform a gazillion different measurements, thus they are bound to observe seemingly unlikely outcomes in a small fraction of them. This can be partly taken into account by calculating the global significance, which is the probability of finding a background fluctuation of the observed size anywhere in the diphoton spectrum. The global significance of the 750 GeV bump quoted by ATLAS was only about two sigma, a fact strongly emphasized by the collaboration. However, that number can be misleading too. One problem with the global significance is that, unlike the local one, it cannot be easily combined in the presence of separate measurements of the same observable. For the diphoton final state we have ATLAS and CMS measurements in run-1 and run-2, thus 4 independent datasets, and their robust concordance was crucial in creating the excitement. Note also that what is really relevant here is the probability of a fluctuation of a given size in any of the LHC measurements, and that is not captured by the global significance. For these reasons, I find it more transparent to work with the local significance, remembering that it should not be interpreted as the probability that the Standard Model is incorrect. By these standards, a 4.4 sigma fluctuation in a combined ATLAS and CMS dataset is still a very significant effect which deserves special attention.
What we learned the hard way is that such large fluctuations do happen at the LHC... This lesson will certainly be taken into account next time we encounter a significant anomaly.
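The deflation from a ~4 sigma local to a ~2 sigma global significance can be sketched with a toy look-elsewhere calculation. The number of effective independent mass bins below is an invented round number, chosen only to show the mechanism, not ATLAS's actual trials factor:

```python
import math

# Look-elsewhere toy: a 3.9 sigma fluctuation is unlikely in one place,
# but becomes ordinary once it is allowed to appear anywhere among many
# independent bins of the spectrum.

def p_one_sided(z):
    """One-sided Gaussian tail probability P(Z > z)."""
    return 0.5 * math.erfc(z / math.sqrt(2))

def z_from_p(p):
    """Crude bisection inverse of p_one_sided."""
    lo, hi = 0.0, 10.0
    for _ in range(60):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if p_one_sided(mid) > p else (lo, mid)
    return (lo + hi) / 2

p_local = p_one_sided(3.9)
n_bins = 400                       # assumed effective independent bins
p_global = 1 - (1 - p_local) ** n_bins
print(f"local p = {p_local:.1e}  -> global p = {p_global:.3f} "
      f"(~{z_from_p(p_global):.1f} sigma)")
```

With a few hundred places where a bump could have appeared, the same excess that looks like a 1-in-10000 accident locally corresponds to only about a 1-in-50 accident globally.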

Another reason why the 750 GeV bump was exciting is that the measurement is rather straightforward.  Indeed, at the LHC we often see anomalies in complicated final states or poorly controlled differential distributions, and we treat those with much skepticism.  But a resonance in the diphoton spectrum is almost the simplest and cleanest observable that one can imagine (only a dilepton or 4-lepton resonance would be cleaner). We already successfully discovered one particle this way - that's how the Higgs boson first showed up in 2011. Thus, we have good reasons to believe that the collaborations control this measurement very well.

Finally, the diphoton bump was so attractive because theoretical explanations were plausible. It was trivial to write down a model fitting the data, there was no need to stretch or fine-tune the parameters, and it was quite natural that the particle first showed up as a diphoton resonance and not in other final states. This is in stark contrast to other recent anomalies which typically require a great deal of gymnastics to fit into a consistent picture. The only thing to give you pause was the tension with the LHC run-1 diphoton data, but even that became mild after the Moriond update this year.

So we got a huge signal of a new particle in a clean channel with plausible theoretical models to explain it... that was really bad luck. My conclusion may not be shared by everyone, but I don't think that the theory community committed major missteps in this case. Given that for 30 years we have been looking for a clue about the fundamental theory beyond the Standard Model, our reaction was not disproportionate once a seemingly reliable one had arrived. Excitement is an inherent part of physics research. And so is disappointment, apparently.

There remains a question whether we really needed 500 papers... Well, of course not: few of them fill an important gap. Yet many are an interesting read, and I personally learned a lot of exciting physics from them. Actually, I suspect that the fraction of useless papers among the 500 is lower than for regular daily topics. On a more sociological side, these papers exacerbate the problem with our citation culture (mass-grave references), which undermines the citation count as a means to evaluate research impact. But that is a wider issue which I don't know how to address at the moment.

Time to move on. The ICHEP conference is coming next week, with loads of brand new results based on up to 16 inverse femtobarns of 13 TeV LHC data.  Although the rumor is that there is no new exciting  anomaly at this point, it will be interesting to see how much room is left for new physics. The hope lingers on, at least until the end of this year.

In the comments section you're welcome to lash out on the entire BSM community - we made a wrong call so we deserve it. Please, however, avoid personal attacks (unless on me). Alternatively, you can also give us a hug :) 

by Jester at September 03, 2016 10:44 AM

Jester - Resonaances

Black hole dark matter
The idea that dark matter is made of primordial black holes is very old but has always been in the backwater of particle physics. The WIMP or asymmetric dark matter paradigms are preferred for several reasons such as calculability, observational opportunities, and a more direct connection to cherished theories beyond the Standard Model. But in the recent months there has been more interest, triggered in part by the LIGO observations of black hole binary mergers. In the first observed event, the mass of each of the black holes was estimated at around 30 solar masses. While such a system may well be of boring astrophysical origin, it is somewhat unexpected because typical black holes we come across in everyday life are either a bit smaller (around one solar mass) or much larger (supermassive black hole in the galactic center). On the other hand, if the dark matter halo were made of black holes, scattering processes would sometimes create short-lived binary systems. Assuming a significant fraction of dark matter in the universe is made of primordial black holes, this paper estimated that the rate of merger processes is in the right ballpark to explain the LIGO events.

Primordial black holes can form from large density fluctuations in the early universe. On the largest observable scales the universe is incredibly homogeneous, as witnessed by the uniform temperature of the Cosmic Microwave Background over the entire sky. However, on smaller scales the primordial inhomogeneities could be much larger without contradicting observations. From the fundamental point of view, large density fluctuations may be generated by several distinct mechanisms, for example during the final stages of inflation, in the waterfall phase of the hybrid inflation scenario. While it is rather generic that this or a similar process may seed black hole formation in the radiation-dominated era, severe fine-tuning is required to produce the right amount of black holes and ensure that the resulting universe resembles the one we know.

All in all, it's fair to say that the scenario where all or a significant fraction of dark matter is made of primordial black holes is not completely absurd. Moreover, one typically expects the masses to span a fairly narrow range. Could it be that the LIGO events are the first indirect detection of dark matter made of O(10)-solar-mass black holes? One problem with this scenario is that it is excluded, as can be seen in the plot. Black holes sloshing through the early dense universe accrete the surrounding matter and produce X-rays which could ionize atoms and disrupt the Cosmic Microwave Background. In the 10-100 solar mass range relevant for LIGO this effect currently gives the strongest constraint on primordial black holes: according to this paper they are allowed to constitute no more than 0.01% of the total dark matter abundance. In astrophysics, however, not only signals but also constraints should be taken with a grain of salt. In this particular case, the word in town is that the derivation contains a numerical error and that the corrected limit is 2 orders of magnitude less severe than what's shown in the plot. Moreover, this limit strongly depends on the model of accretion, and more favorable assumptions may buy another order of magnitude or two. All in all, the possibility of dark matter made of primordial black holes in the 10-100 solar mass range should not be completely discarded yet. Another possibility is that black holes make up only a small fraction of dark matter, but the merger rate is faster, closer to the estimate of this paper.

Assuming this is the true scenario, how will we know? Direct detection of black holes is discouraged, while the usual cosmic ray signals are absent. Instead, in most of the mass range, the best probes of primordial black holes are various lensing observations. For LIGO black holes, progress may be made via observations of fast radio bursts. These are strong radio signals of (probably) extragalactic origin and millisecond duration. The radio signal passing near an O(10)-solar-mass black hole could be strongly lensed, leading to repeated signals detected on Earth with an observable time delay. In the near future we should observe hundreds of such repeated bursts, or obtain new strong constraints on primordial black holes in the interesting mass ballpark. Gravitational wave astronomy may offer another way. When more statistics is accumulated, we will be able to say something about the spatial distribution of the merger events. Primordial black holes should be distributed like dark matter halos, whereas astrophysical black holes should be correlated with luminous galaxies. Also, the typical eccentricity of the astrophysical black hole binaries should be different. With some luck, the primordial black hole dark matter scenario may be vindicated or robustly excluded in the near future.
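The millisecond scale of the expected delay can be checked with the point-lens time scale 4GM/c³; the O(1) geometric factor that depends on the source alignment is ignored in this sketch:

```python
# Back-of-the-envelope scale for the lensing time delay between the two
# images produced by a point-mass lens: of order 4GM/c^3 (times an O(1)
# alignment-dependent factor, dropped here).

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_sun = 1.989e30     # kg

def delay_scale_seconds(mass_solar):
    return 4 * G * mass_solar * M_sun / c**3

for m in (1, 30, 100):
    print(f"{m:>3} M_sun: ~{delay_scale_seconds(m) * 1e3:.2f} ms")
```

For ~30 solar masses the delay lands right at the millisecond durations typical of fast radio bursts, which is what makes repeated-burst searches a natural probe of this mass range.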

See also these slides for more details. 

by Jester at September 03, 2016 10:44 AM

September 02, 2016

Symmetrybreaking - Fermilab/SLAC

CUORE almost ready for first cool-down

The refrigerator that will become the coldest cubic meter in the universe is fully loaded and ready to go.

Deep within a mountain in Italy, scientists have finished the assembly of an experiment more than one decade in the making. The detector of CUORE, short for Cryogenic Underground Observatory for Rare Events, is ready to be cooled down to its operating temperature for the first time.

Ettore Fiorini, the founder of the collaboration, proposed the use of low temperature detectors to search for rare events in 1984 and started creating the first prototypes with his group in Milano. What began as a personal project involving a tiny crystal and a small commercial cooler has grown to a collaboration of 165 scientists loading almost one ton of crystals and several tons of refrigerator and shields.

The CUORE experiment is looking for a rare process that would be evidence that almost massless particles called neutrinos are their own antiparticles, something that would give scientists a clue as to how our universe came to be.

Oliviero Cremonesi, current spokesperson of the CUORE collaboration, joined the quest in 1988 and helped write the first proposal for the experiment. At first, funding agencies in Italy and the United States approved a smaller version: Cuoricino.

“We had five exciting years of measurements from 2003 to 2008 on this machine, but we knew that we wanted to go bigger. So we kept working on CUORE,” Cremonesi says.

In 2005 the collaboration got approval for the big detector, which they called CUORE. That started them on a whole new journey involving growing crystals in China, bringing them to Italy by boat, and negotiating with archeologists for the right to use 2000-year-old Roman lead as shielding material. 

“I imagine climbing Mount Everest is a little bit like this,” says Lindley Winslow, a professor at the Massachusetts Institute of Technology and group leader of the MIT activities on CUORE. “We can already see the top, but this last part is the hardest. The excitement is high, but also the fear that something goes wrong.”

The CUORE detector, assembled between 2012 and 2014, consists of 19 fragile copper towers that each host 52 tellurium oxide crystals connected by wires and sensors to measure their temperature.

For this final stage, scientists built a custom refrigerator from extremely pure materials. They shielded and housed it inside of a mountain at Gran Sasso, Italy. At the end of July, scientists began moving the detector to its new home. After a brief pause to ensure the site had not been affected by the 6.2-magnitude earthquake that hit central Italy on August 24, they finished the job on August 26.

The towers now reside in the largest refrigerator used for a scientific purpose. By the end of October, they will be cooled below 10 millikelvin (negative 460 Fahrenheit), colder than outer space.

Everything has to be this cold because the scientists are searching for minuscule temperature changes caused by an ultra-rare process called neutrinoless double beta decay.

During a normal beta decay, an atom changes from one chemical element into its daughter element and sends out one electron and one antineutrino. The neutrinoless double beta decay would be different: The element would change into its granddaughter. Instead of two electrons and two antineutrinos sharing the energy of the decay, only two electrons would leave, and an observer would see no neutrinos at all.

This would only happen if neutrinos were their own antiparticles. In that case, the two neutrinos would cancel each other out, and it would seem like they never existed in the first place.

If scientists measure this decay, it would change the current scientific thinking about the neutrino and give scientists clues about why there is so much more matter than anti-matter in the universe.  

“We are excited to start the cool-down, and if everything works according to plan, we can start measuring at the beginning of next year,” Winslow says.

by Ricarda Laasch at September 02, 2016 07:41 PM

September 01, 2016

Symmetrybreaking - Fermilab/SLAC

Universe steps on the gas

A puzzling mismatch is forcing astronomers to re-think how well they understand the expansion of the universe.

Astronomers think the universe might be expanding faster than expected.

If true, it could reveal an extra wrinkle in our understanding of the universe, says Nobel Laureate Adam Riess of the Space Telescope Science Institute and Johns Hopkins University. That wrinkle might point toward new particles or suggest that the strength of dark energy, the mysterious force accelerating the expansion of the universe, actually changes over time.

The result appears in a study published in The Astrophysical Journal this July, in which Riess’s team measured the current expansion rate of the universe, also known as the Hubble constant, better than ever before.

In theory, determining this expansion is relatively simple, as long as you know the distance to a galaxy and the rate at which it is moving away from us. But distance measurements are tricky in practice and require using objects of known brightness, so-called standard candles, to gauge their distances.

The use of Type Ia supernovae—exploding stars that shine with the same intrinsic luminosity—as standard candles led to the discovery that the universe was accelerating in the first place and earned Riess, as well as Saul Perlmutter and Brian Schmidt, a Nobel Prize in 2011.

The latest measurement builds on that work and indicates that the universe is expanding by 73.2 kilometers per second per megaparsec (a unit that equals 3.3 million light-years). Think about dividing the universe into grids that are each a megaparsec long. Every time you reach a new grid, the universe is expanding 73.2 kilometers per second faster than the grid before.
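The quoted numbers are easy to sanity-check: recession speed is v = H0 × d, and 1/H0 gives the naive "Hubble time", roughly the age scale of a universe that had always expanded at today's rate:

```python
# Quick arithmetic on the Hubble constant quoted in the text.

H0 = 73.2            # km/s per Mpc, the value from the measurement
Mpc_km = 3.086e19    # kilometers in one megaparsec
sec_per_yr = 3.156e7

for d_Mpc in (1, 10, 100):
    print(f"galaxy at {d_Mpc:>3} Mpc recedes at ~{H0 * d_Mpc:.0f} km/s")

hubble_time_yr = Mpc_km / H0 / sec_per_yr   # 1/H0 converted to years
print(f"1/H0 ~ {hubble_time_yr / 1e9:.1f} billion years")
```

Reassuringly, 1/H0 comes out near 13-14 billion years, the same ballpark as the universe's measured age.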

Although the analysis pegs the Hubble constant to within experimental errors of just 2.4 percent, the latest result doesn’t match the expansion rate predicted from the universe’s trajectory. Here, astronomers measure the expansion rate from the radiation released 380,000 years after the Big Bang and then run that expansion forward in order to calculate what today’s expansion rate should be.

It’s similar to throwing a ball in the air, Riess says. If you understand the state of the ball (how fast it's traveling and where it is) and the physics (gravity and drag), then you should be able to precisely predict how fast that ball is traveling later on.

“So in this case, instead of a ball, it's the whole universe, and we think we should be able to predict how fast it's expanding today,” Riess says. “But the caveat, I would say, is that most of the universe is in a dark form that we don't understand.”

The rates predicted from measurements made on the early universe with the Planck satellite are 9 percent smaller than the rates measured by Riess’ team—a puzzling mismatch that suggests the universe could be expanding faster than physicists think it should.

David Kaplan, a theorist at Johns Hopkins University who was not involved with the study, is intrigued by the discrepancy because it could be easily explained with the addition of a new theory, or even a slight tweak to a current theory.

“Sometimes there's a weird discrepancy or signal and you think 'holy cow, how am I ever going to explain that?'” Kaplan says. “You try to come up with some cockamamie theory. This, on the other hand, is something that lives in a regime where it's really easy to explain it with new degrees of freedom.”

Kaplan’s favorite explanation is that there’s an undiscovered particle, which would affect the expansion rate in the early universe. “If there are super light particles that haven't been taken into account yet and they make up some smallish fraction of the universe, it seems that can explain the discrepancy relatively comfortably,” he says.

But others disagree. “We understand so little about dark energy that it's tempting to point to something there,” says David Spergel, an astronomer from Princeton University who was also not involved in the study. One explanation is that dark energy, the cause of the universe’s accelerating expansion, is growing stronger with time.

“The idea is that if dark energy is constant, clusters of galaxies are moving apart from each other but the clusters of galaxies themselves will remain forever bound,” says Alex Filippenko, an astronomer at the University of California, Berkeley and a co-author on Riess’ paper. But if dark energy is growing in strength over time, then one day—far in the future—even clusters of galaxies will get ripped apart. And the trend doesn’t stop there, he says. Galaxies, clusters of stars, stars, planetary systems, planets, and then even atoms will be torn to shreds one by one.

The implications could—literally—be Earth-shattering. But it’s also possible that one of the two measurements is wrong, so both teams are currently working toward even more precise measurements. The latest discrepancy is also relatively minor compared to past disagreements.

“I'm old enough to remember when I was first a student and went to conferences and people argued over whether the Hubble constant was 50 or 100,” says Spergel. “We're now in a situation where the low camp is arguing for 67 and the high camp is arguing for 73. So we've made progress! And that's not to belittle this discrepancy. I think it's really interesting. It could be the signature of new physics.”

by Shannon Hall at September 01, 2016 03:48 PM




Last updated:
October 01, 2016 03:06 AM
All times are UTC.
