# Particle Physics Planet

## July 01, 2016

### Emily Lakdawalla - The Planetary Society Blog

How to watch Juno's orbit insertion
The big day is almost here. Juno begins firing its main engine at 20:18 PT / 23:18 ET / 03:18 UT on July 4/5, and the maneuver should be complete 35 minutes later at 20:53 / 23:53 / 03:53. Here's how you can follow the mission through its most hazardous event since launch.

### Christian P. Robert - xi'an's og

This year, a new stand at the farmers’ market offers local and unusual varieties of fruits and vegetables, including radishes of at least five different colours. When I bought a bunch yesterday morning, the seller gave me two additional bunches, which means I will be eating radishes the whole week (or until they get too peppery!).

Filed under: Kids Tagged: farmers' market, food, France, varieties of radishes

### Emily Lakdawalla - The Planetary Society Blog

What's up in the solar system, July 2016 edition: Juno to enter orbit, NASA missions all extended
Highlights this month include the impending arrival of Juno at Jupiter, the approval of extended missions for all of NASA's solar system spacecraft, and public data releases from Rosetta, New Horizons, and Cassini.

### astrobites - astro-ph reader's digest

Water worlds – self-arrests, thermostats and long-term climate stability
Authors: Cowan, Nicolas B.

First Author’s Institution: McGill University, Montreal, Canada

Status: To appear in Proceedings of the Comparative Climates of Terrestrial Planets II conference

Earth features a unique surface among Solar System planets: it is composed of both liquid water oceans and large continents. The continents float on top of the mantle rock, while the water is continuously recycled by subducting slabs in the dynamic system of plate tectonics. The stability of the average sea level and the permanent flux of water through the tectonic system are far from self-explanatory. That the continents are exposed is quite a fortunate circumstance for us – they could very well sit below the ocean surface, which would prohibit any land-based life as we know it. For a deeper understanding of this conundrum we must not only study our own planet, but also see its formation and evolution in comparison with other terrestrial planets and moons in the Solar System and even the galaxy.

### Long-term climate stability — self-regulation or runaway?

Earth’s climate stability seems to be regulated by the carbonate-silicate cycle, during which rocks are transformed by weathering, sedimentation and magmatism. This process dictates the pace of carbon release from the interior to the atmosphere and is crucial in regulating the global temperature (which is why excess release of, among other gases, carbon dioxide is generally considered a bad idea…). A good indication that the process works (on million-year timescales!) is that we did have liquid water on the surface ~4.5 billion years ago, even though the Sun was much fainter than today and Earth should otherwise have been in a global snowball state at that time.
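The feedback loop described above can be sketched numerically. The following is a toy model with made-up numbers (not from the post or the paper): volcanic outgassing supplies CO₂ at a constant rate, while the weathering sink strengthens with temperature, so the planet relaxes to the same stable climate from either a warm or a cold start.

```python
# Toy sketch of the carbonate-silicate thermostat (illustrative numbers only):
# a warm planet weathers faster, draws down CO2 and cools, so the system
# self-regulates toward a stable equilibrium.
def simulate_thermostat(co2, outgassing=1.0, steps=5000, dt=0.01):
    temperature = 280.0 + 10.0 * co2                   # crude greenhouse relation (K)
    for _ in range(steps):
        weathering = 0.5 * (temperature / 288.0) ** 8  # strongly T-dependent sink
        co2 += (outgassing - weathering * co2) * dt    # source minus sink
        temperature = 280.0 + 10.0 * co2
    return co2, temperature

# Start the same planet warm or cold: both converge to the same equilibrium.
warm = simulate_thermostat(co2=3.0)
cold = simulate_thermostat(co2=0.2)
print(warm, cold)
```

The strongly temperature-dependent sink is what makes the equilibrium stable; remove it (as on a land-free water world) and the regulation is gone.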

Such a self-regulation effect is a neat mechanism, as it might allow extrasolar planets to harbour liquid water across a relatively broad range of separations from their host star – the so-called habitable zone. However, the thermostat behind Earth’s climate stability depends not only on plate tectonics: the exposure of land and the resulting weathering of rocks are crucial to the process. Therefore, on pure water worlds (which might be frequent), where the whole surface is covered by oceans, we might run into problems…


### How to get rid of too much water?

Hypothetically, waterworlds that lack the thermostat discussed above can enter a state of “self-arrest”: their host star continually brightens, so the planet heats up over time. Eventually it reaches a stage where water vapour accumulates in the atmosphere, where it is subsequently broken apart by UV light from the star and lost to space. This process continues until continents are exposed to the atmosphere, at which point the planet enters a climate cycle like Earth’s. Hypothetically.

Obviously, we do not have any evidence that such a process can act, and we do not understand the complicated interactions between the atmosphere and stellar irradiation well enough to make detailed predictions. However, if losing water through the atmosphere does not work, maybe terrestrial planets can incorporate more water into their mantles. As the author of today’s paper suggested in earlier work, the exact amount of water incorporated into the planetary interior may depend crucially on the pressure at the seafloor. Earth’s mantle harbours ~3–11 oceans’ worth of water, so there is much more water in the interior than exposed at the surface, and in general it could absorb even more – it just does not. So maybe we face self-regulation here again.

### Columbus for exo-worlds

This enigma will probably not be resolved until we are able to actually *see* land on other worlds; then we could make a census of planetary properties and compare the results with the predictions of various theories. This might sound like wild science fiction, and so far it is. However, the author lists recent ideas for how observations of continents on other terrestrial planets could be made in the not-so-distant future. We’re talking about 10-30 years with new space facilities, which will give us high enough spectral resolution to decipher changes in the brightness and colour of the observed planets. These estimates can be converted into very rough ideas of what the surface of a planet looks like.

Another option is to analyse the pieces of shattered worlds that fall onto the surfaces of white dwarfs, the dead remnants of stars at the end of their life cycles. This is actually being done already and gives us clues about the elemental abundances in the broken-up pieces of former terrestrial planets around these stars. With further technical and theoretical development we might one day be able to understand the distribution of water in the universe – the number-one molecule for complex chemistry and thus life.

### Lubos Motl - string vacua and pheno

Stanley Mandelstam: 1928-2016
A string theorist whose surname is known to every particle physicist, Stanley Mandelstam, Berkeley's emeritus professor, died on the Thursday of the Brexit referendum at the age of 87.

He was born in Johannesburg, South Africa. His mother had lived there since the wealthy gold-boom years; his father Boris moved there from Latvia. Throughout his childhood they lived in a small town, but the family later moved back to Johannesburg. He was forced to earn a practical – chemical – degree, which he never used in his life. But he returned to theoretical physics and got his B.A. in 1954 at Cambridge and his PhD in 1956 at Birmingham before joining the Berkeley faculty in 1963.

Mandelstam was one of the key forefathers of string theory and he has also gotten very far in some of the most up-to-date fancy projects in string theory.

Every particle physicist knows him for the Mandelstam invariants of $$2\to 2$$ scattering, $$s = (p_1+p_2)^2,\quad t = (p_1-p_3)^2,\quad u = (p_1-p_4)^2,$$ assuming $$\sum_{j=1}^4 p_j^\mu = 0$$ and the "mostly minus" metric convention. They obey $$s+t+u = \sum_{j=1}^4 m_j^2$$ and are extremely useful to describe all the Lorentz-invariant data about the scattering energies and momenta (in Mandelstam's original language, they are variables to describe the "double dispersion relations"). Mandelstam introduced these symbols in 1958 and they're of course an important yet trivial thing. The bulk of his work was much less trivial.
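As a quick numerical sanity check (a toy example, not from the post), one can verify the identity $$s+t+u=\sum_j m_j^2$$ for a hypothetical elastic collision in the centre-of-mass frame:

```python
import numpy as np

def msq(p):
    """Invariant square p^2 in the 'mostly minus' metric (+,-,-,-)."""
    return p[0]**2 - np.dot(p[1:], p[1:])

# Hypothetical elastic 2 -> 2 event in the centre-of-mass frame:
# masses (1, 2) collide with momentum magnitude 3 and scatter by 90 degrees.
m1, m2 = 1.0, 2.0
E1, E2 = np.hypot(3.0, m1), np.hypot(3.0, m2)    # on-shell energies
p1 = np.array([E1, 0.0, 0.0,  3.0])              # incoming
p2 = np.array([E2, 0.0, 0.0, -3.0])              # incoming
p3 = np.array([E1, 0.0,  3.0, 0.0])              # outgoing, mass m1
p4 = np.array([E2, 0.0, -3.0, 0.0])              # outgoing, mass m2

s = msq(p1 + p2)
t = msq(p1 - p3)
u = msq(p1 - p4)

# s + t + u equals the sum of the four squared masses (here 1 + 4 + 1 + 4 = 10):
print(s + t + u)
```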

Mandelstam was interested in quantization of gravity at least since 1962 and he was one of the fathers of the bootstrap program, a moral foundation for string theory that was born later, and Regge theory (with Tullio Regge), the ancestor of the first explicit stringy formulae. So he did spend a lot of time with the scattering of hadrons (and also the high-energy, special-angle behavior of the scattering amplitudes) in the 1960s.

A paper on vortices and confinement has surpassed 1,000 citations. Some other papers on solitons were close to this threshold. Independently of Sidney Coleman (building on ideas of Tony Skyrme), Mandelstam showed the kink-fermion equivalence of the sine-Gordon and Thirring models in 2D (a co-father of bosonization/fermionization).

In the mid 1970s, he was one in the extremely small group of physicists who worked on string theory (almost) in the same sense that we know it today. So he reviewed the "dual resonance models", as string theory was called at the beginning, wrote down the "three-string vertex" (which was important for me when I was checking that the string field theory interactions follow from matrix string theory, and in our BMN interaction research), and gave the right stringy "geometric" interpretation to the previously "just algebraic" Virasoro symmetry. Mandelstam clarified the origin of factorization and helped to turn the path integrals into routine work of a string theorist. He summarized his contributions to the field in the 1970s in his 2008 memoirs.

He worked with the (world sheet) supersymmetric versions of string theory as soon as supersymmetry was discovered. So for example, Mandelstam was the first physicist who wrote the NSR generalizations of the Veneziano amplitude. Mandelstam was the main guy who began to use the conformal symmetry (that he extracted from the Virasoro algebra) to calculate the scattering amplitude integrand on the world sheet in many domains. He loved supersymmetry and did lots for Her, too. In 1983 or 1987, he proved the finiteness of the $$\mathcal{N}=4$$ gauge theory and its scale invariance to all orders of perturbation theory. He was also deeply interested in the Seiberg-Witten theory.

Stanley Mandelstam was also the man who constructed the first proof of the ultraviolet finiteness of the $$n$$-loop perturbative string theory amplitudes – he proved the finiteness of perturbative string theory (PDF). The main obstacle he overcame was the proof that all "corners" of the moduli space of the Riemann surfaces that have the potential to produce divergences may be interpreted as regions corresponding to infrared divergences as interpreted in a low-energy field theory limit. Almost all later proofs of finiteness of string theory are simply Mandelstam's cornerstone trick enriched by some more or less tedious technicalities that depend on the precise formalism in which the fermions are treated etc.

Mandelstam was a keen teacher. He patiently taught lots of undergraduate courses and, as a grateful student put it, was a key person in building our current knowledge base. The list of Mandelstam's Berkeley students includes some highly familiar names such as Joseph Polchinski, Michio Kaku, Charles Thorn, and (my once co-author) Nathan Berkovits.

RIP, Prof Mandelstam.

### Peter Coles - In the Dark

The Flowers in the Field: The Somme Remembered

I’ve posted this at 7.20am on 1st July 2016. Precisely one hundred years ago, following a heavy artillery bombardment that had been going on for a week, an enormous mine was exploded  under a fortified position at Hawthorn Ridge near Beaumont Hamel on the River Somme in France. Here is footage of the actual explosion:

Ten minutes later, the first French and British troops went “over the top” on the first day of the Battle of the Somme. It was to be the bloodiest day in the history of the British Army.

Here is an edited version of a piece I wrote some time ago about this battle and its aftermath.

–0–

Twelve summers ago, in 2004, I spent an enjoyable day walking in the beautiful Peak District of Derbyshire followed by an evening at the opera in the pleasant spa town of Buxton, where there is an annual music festival. The opera I saw was The Turn of the Screw, by Benjamin Britten: a little incongruous for Buxton’s fine little Opera House, which is decorated with chintzy Edwardiana and which was probably intended for performances of Gilbert & Sullivan light comic operettas rather than stark tales of psychological terror set to unsettling atonal music.

When Buxton’s theatre was built, in 1903, the town was a fashionable resort at which the well-to-do could take the waters and relax in the comfort of one of the many smart hotels.

Arriving over an hour before the opera started, I took a walk around the place and ended up on a small hill overlooking the town centre where I found the local war memorial. This is typical of the sort of thing one can see in small towns the length and breadth of Britain. It lists the names and dates of those killed during the “Great War” (1914-1918). Actually, it lists the names but mostly there is only one date, 1916.

The 1st Battalion of the Nottingham and Derbyshire Regiment (known as the Sherwood Foresters) took part in the Battle of the Somme that started on 1st July 1916. For many of them it ended that day too; some of their names are listed on Buxton’s memorial.

On the first day of this offensive, the British Army suffered 58,000 casualties as, all along the western front, troops walked slowly and defencelessly into concentrated fire from heavy machine guns that were supposed to have been knocked out by the artillery barrage that preceded the attack. The bombardment had been almost entirely ineffective, and it finished well before the British advance started, so the Germans had plenty of time to return to their positions and wait for the advancing British. It had also been believed that the artillery shells would have cut the barbed wire protecting the German positions. They hadn’t. British and French troops who got entangled were sitting ducks. Carnage ensued.

Rather than calling off the attack in the face of the horrific slaughter, the powers that be carried on sending troops over the top to their doom for months on end. By the end of the battle (in November that year) the British losses were a staggering 420,000, while those on the German side were estimated at half a million. The territory gained at such a heavy price was negligible.

These numbers are beyond comprehension, but their impact on places like Buxton was measurably real. Buxton became a town of widows. The loss of manpower made it impossible for many businesses to continue when peace returned in 1918, and a steep economic decline followed. The town never fully recovered from the devastation of 1916 and its pre-war prosperity never returned.

And the carnage didn’t end on the Somme. As the “Great War” stumbled on, battle after battle degenerated into bloody fiasco. Just a year later the Third Battle of Ypres saw another 310,000 dead on the British side as another major assault on the German defences faltered in the mud of Passchendaele. By the end of the War on 11th November 1918, losses on both sides were counted in millions.

–0–

I decided to end this piece with the following video featuring music by George Butterworth (A Shropshire Lad: Rhapsody for Orchestra, inspired by the poetry of A.E. Housman, and one of the few surviving complete works of this composer). Images of present-day Shropshire are interspersed with photographs taken on the Somme in 1916. I chose this because George Butterworth too lost his life in the Battle of the Somme (on 5th August 1916). Lest we forget.

## June 30, 2016

### Emily Lakdawalla - The Planetary Society Blog

Juno's first taste of science from Jupiter
Jupiter is growing in Juno's forward view as the spacecraft approaches for its orbit insertion July 5 (July 4 in the Americas). The mission has released images from JunoCam and sonifications of data from the plasma waves instrument as Juno begins to sense Jupiter.

### astrobites - astro-ph reader's digest

Guide to Empirical Velocity Laws

Key words: Faber-Jackson relation, Tully-Fisher relation, $M \propto \sigma$ relation, stellar velocities, empirical relations

Lost in the forest of articles

If you are reading this, chances are pretty high that you are interested in astrophysics. In case you are an astronomy student you might know this dilemma: you want to learn about astrophysics and dive into the literature, but you have no clue where to start and which articles are important to read. There are just so many articles published every day that there is obviously no chance to read all of them. What can you do about it? First of all: relax. You are already on the right track by reading the paper summaries on astrobites.org to get an idea of what is currently going on in research. However, judging the impact and importance of a paper usually takes time. Generally, the impact of articles decays over time, because the issues they raised were either falsified or deemed irrelevant. Only a tiny fraction of publications defies this trend and remains as milestones in the field. These are the classic articles – or, more precisely, the important results in them – that you want to be aware of.

The magic key: Stellar velocities

Looking at important papers from the last ~50 years, the four milestones presented in this astrobite share a striking similarity. Besides their large number of citations (all have more than 1000), they have in common that all four present a relation based on the same physical quantity: velocity. The four papers relate the velocity of stars at the center of galaxies either to the luminosity of the host galaxy (Faber & Jackson, Tully & Fisher) or to the mass of the supermassive black hole at the galaxy’s center (Ferrarese & Merritt, Gebhardt et al.). While the Faber-Jackson relation applies to elliptical galaxies, the Tully-Fisher relation applies to spiral galaxies, and the $M \propto \sigma$ relation (Ferrarese & Merritt and Gebhardt et al.) to galactic bulges. Let’s have a look at what makes the three relations so important.

Faber-Jackson relation

Faber and Jackson measured the velocities of stars at the centers of elliptical galaxies. Based on these measurements they calculated how much the velocity of each star differs from the average velocity of all the stars at the galactic center. The spread of these differences yields the so-called velocity dispersion, a measure of the scatter. Faber & Jackson found that the luminosity of elliptical galaxies scales with the central stellar velocity dispersion as $L \propto \sigma^{\gamma}$, where $\gamma$ is often about 4 (see Figure 1).

The Faber-Jackson, Tully-Fisher and $M \propto \sigma$ relations all show the same dependency, $y \propto x^4$. For Faber-Jackson, $y$ is the luminosity of an elliptical galaxy and $x$ is its central stellar velocity dispersion; for Tully-Fisher, $y$ is the luminosity of a spiral galaxy and $x$ is the maximum rotational velocity of its stars; and for the $M \propto \sigma$ relation, $y$ is the mass of the supermassive black hole and $x$ is the stellar velocity dispersion at the center of the bulge.

Tully-Fisher relation

Similar to the work by Faber & Jackson, Tully & Fisher found that the luminosity of spiral galaxies scales with the maximum rotational velocity of their stars as $L \propto v_{\rm max}^{4}$. Note that in both relations the luminosity scales with a velocity-dependent quantity to the fourth power. This is not a coincidence, but rather the consequence of an approximately constant ratio of stellar mass to stellar light and of the roughly uniform brightness of galactic centers, as can be seen in this brief theoretical explanation. (To remember which relation applies to which type of galaxy, use this alphabetical crib – DEF: Dispersion in Elliptical galaxies yields Faber-Jackson; RSD: Rotational velocity in Spiral galaxies yields Tully-Fisher.)

$M \propto \sigma$ relation

A more recent discovery is the $M \propto \sigma$ relation, which was revealed in the two remaining papers in 2000. The two groups measured the velocity dispersion of stars around a supermassive black hole and compared it to the black hole’s mass. In the same vein as Faber & Jackson, they found a scaling with the velocity dispersion to the fourth power. No surprise, then, that people also refer to the result as the “Faber-Jackson relation for black holes”.

Measuring the luminosity of a galaxy or the mass of a supermassive black hole is an observationally difficult task. However, measuring stellar velocities is much easier, and thanks to these relations you can retrieve information about the luminosity or the black hole from velocity measurements. If it is the luminosity you have determined, you know the amount of light produced by the galaxy. The galaxy emits in all directions, and the farther you are from a light source the darker it appears. Therefore you can compare the computed luminosity with how bright the galaxy appears when observed from Earth (its apparent magnitude) and voilà: you have estimated how far away the galaxy is!
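The recipe above can be sketched in a few lines. This is a toy calculation with hypothetical numbers: the Tully-Fisher-like calibration (`L0`, `v0`) and the measured flux are made up for illustration, not taken from the papers.

```python
import math

L_SUN = 3.828e26          # W, solar luminosity
PC = 3.0857e16            # m per parsec

def tully_fisher_luminosity(v_max_kms, L0=4e9, v0=200.0):
    """L ∝ v^4, with a hypothetical calibration: v0 km/s maps to L0 solar luminosities."""
    return L0 * (v_max_kms / v0) ** 4          # in solar luminosities

def luminosity_distance(L_solar, flux_w_m2):
    """Invert the inverse-square law F = L / (4 pi d^2) for the distance d."""
    d_m = math.sqrt(L_solar * L_SUN / (4.0 * math.pi * flux_w_m2))
    return d_m / (PC * 1e6)                    # in megaparsecs

# Hypothetical spiral galaxy: rotation velocity 220 km/s, measured flux 3e-13 W/m^2.
L = tully_fisher_luminosity(220.0)
d = luminosity_distance(L, flux_w_m2=3.0e-13)
print(f"L = {L:.2e} L_sun, d = {d:.1f} Mpc")
```

The same two steps, with the Faber-Jackson calibration in place of Tully-Fisher, give distances to elliptical galaxies from their velocity dispersions.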

The importance of the relation

It is especially the possibility of estimating distances to galaxies from the Faber-Jackson and Tully-Fisher relations that makes them useful. The $M \propto \sigma$ relation instead might suggest that every galaxy contains a supermassive black hole, but the debate is still ongoing. Finally, be careful! The relation might be less general than commonly assumed and can be biased (for a critical discussion, please read this astrobite). Nevertheless, the relations are important and frequently cited. Isn’t it great that you now know what they are about? Don’t fear them anymore when you stumble over them on your way through the forest of astrophysical articles.

### Peter Coles - In the Dark

The Day Sussex Died

In advance of tomorrow’s sombre commemorations of the centenary of the start of the Battle of the Somme, I thought I’d write a short preliminary piece about a related centenary to be marked today, 30th June 2016.

On this day in 1916 there took place the Battle of the Boar’s Head, named after a German salient near Richebourg-l’Avoué in the Pas-de-Calais region. This battle is remembered locally here in Brighton as The Day Sussex Died. The following brief account is based on the Wikipedia article.

The attack on the Boar’s Head was launched on 30 June 1916, as an attempt to divert German attention from the Battle of the Somme which was to begin on the following day, 1 July. The attack was conducted by the 11th, 12th and 13th (Southdowns) Battalions of the Royal Sussex Regiment, part of the 116th Southdowns Brigade of the 39th Division.

A preliminary bombardment and wire-cutting by the artillery commenced on the afternoon of 29 June and was reported to be very effective. The final bombardment commenced shortly before 3:00 a.m. and the 12th and 13th battalions went over the top (most for the first time) shortly afterwards, the 11th Battalion providing carrying parties. The guns lifted their fire off the German front trench and put down an intense barrage in support. The infantry reached the German trenches, bombing and bayoneting their way into the German front line trench and held it for about four hours.

The second trench was captured and held for only half an hour, during which several counter-attacks were repulsed and then the raiders withdrew, because of a shortage of ammunition and mounting casualties. The German support position was not reached by the infantry, because the German defensive tactics included shelling trenches where the British had gained a foothold.

In fewer than five hours the three Southdowns Battalions of the Royal Sussex lost 17 officers and 349 men killed, including 12 sets of brothers, three from one family. A further 1,000 men were wounded or taken prisoner.

The corps commander looked upon the attack as a raid and considered it to be successful.

Here is a photograph of A Company of the 13th Battalion of the Royal Sussex Regiment, taken shortly before the battle; by its end, over 80% of these men were dead.

The losses at the Boar’s Head were shortly to be dwarfed by the horrendous scale of the slaughter on the Somme, but their impact on families and indeed whole villages in rural Sussex was devastating. Those who lost their lives in armed conflict should never be forgotten, nor should their sacrifices be appropriated for political ends.

Lest we forget.

### Peter Coles - In the Dark

The Habitability of the Universe

It’s important not to get carried away by the post-referendum doom and gloom. Abraham Loeb’s recent paper on the arXiv suggests the Universe will only be habitable for the next 10,000,000,000,000 years or so. This means that the current state of political chaos won’t last for ever, though the paper doesn’t make it clear whether Article 50 will have been triggered by the time the last star goes out.

Is life most likely to emerge at the present cosmic time near a star like the Sun? We consider the habitability of the Universe throughout cosmic history, and conservatively restrict our attention to the context of “life as we know it” and the standard cosmological model, LCDM. The habitable cosmic epoch started shortly after the first stars formed, about 30 Myr after the Big Bang, and will end about 10 Tyr from now, when all stars will die. We review the formation history of habitable planets and find that unless habitability around low mass stars is suppressed, life is most likely to exist near 0.1 solar mass stars ten trillion years from now. Spectroscopic searches for biosignatures in the atmospheres of transiting Earth-mass planets around low mass stars will determine whether present-day life is indeed premature or typical from a cosmic perspective.

### Christian P. Robert - xi'an's og

ABC random forests for Bayesian parameter inference [version 2.0]

Just mentioning that a second version of our paper has been arXived and submitted to JMLR, the main addition being a reference to the abcrf package. And just repeating our best-selling arguments: (a) forests do not require a preliminary selection of the summary statistics, since an arbitrary number of summaries can be used as input for the random forest, even when a large number of useless white-noise variables are included; (b) there is no longer a tolerance level involved in the process, since the many trees in the random forest define a natural, if rudimentary, distance that corresponds to being or not being in the same leaf as the observed vector of summary statistics η(y); (c) the size of the reference table simulated from the prior (predictive) distribution does not need to be as large as in usual ABC settings, hence this approach leads to significant gains in computing time, since producing the reference table is usually the costly part! To the point that deriving a different forest for each univariate transform of interest is truly a minor drag on the overall computing cost of the approach.
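A minimal sketch of the idea, in the spirit of argument (a) above (this is an illustrative toy with scikit-learn, not the abcrf implementation): simulate a reference table from the prior predictive, regress the parameter on many summaries – including useless white-noise ones – with a random forest, and predict at the observed summaries.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_ref = 2000

# Toy model: data y ~ Normal(theta, 1) with 50 observations, theta ~ Normal(0, 3).
theta = rng.normal(0.0, 3.0, n_ref)
y = theta[:, None] + rng.normal(size=(n_ref, 50))

# Summaries: mean and median (informative) plus 20 pure white-noise summaries
# that the forest is free to ignore -- no preliminary selection needed.
summaries = np.column_stack([
    y.mean(axis=1),
    np.median(y, axis=1),
    rng.normal(size=(n_ref, 20)),
])

forest = RandomForestRegressor(n_estimators=200, random_state=0)
forest.fit(summaries, theta)

# "Observed" data generated with true theta = 1.5.
y_obs = 1.5 + rng.normal(size=50)
s_obs = np.concatenate([[y_obs.mean()], [np.median(y_obs)], rng.normal(size=20)])
est = forest.predict([s_obs])[0]
print(est)   # point estimate of theta, close to 1.5
```

Note there is no tolerance threshold anywhere: each tree simply routes the observed summaries to a leaf, which is the "rudimentary distance" of argument (b).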

Filed under: Books, Kids, pictures, Statistics, Travel, University life, Wines Tagged: abcrf, arXiv, Bayesian inference, JMLR, R, random forests, sunrise

## June 29, 2016

### Clifford V. Johnson - Asymptotia

Gauge Theories are Cool

That is all.

('fraid you'll have to wait for the finished book to learn why those shapes are relevant to the title...)

The post Gauge Theories are Cool appeared first on Asymptotia.

### Christian P. Robert - xi'an's og

A very interesting issue of Nature, which I read this morning over breakfast. A post-Brexit read of a pre-Brexit issue. Apart from the several articles arguing against Brexit and its dire consequences for British science [but preaching to the converted: what percentage of Brexit voters reads Nature?!], there is a short vignette on the differences between fields in the average time spent refereeing a paper (maths takes twice as long as the social sciences, and academics older than 65 take half the time of researchers under 36!). A letter calling for action against predatory publishers. And the first maths paper published since I started reading Nature on an almost-regular basis: it studies mean first-passage times for non-Markov random walks, specified as having time-homogeneous increments. It is a somewhat odd maths paper in that I do not see where the mathematical novelty lies, or why the paper contains only half a dozen formulas… Maybe not a maths paper after all.

Filed under: Books, Kids, University life Tagged: Brexit, Nature, predatory publishing, random walk, refereeing, stochastic processes

### Symmetrybreaking - Fermilab/SLAC

LHCb discovers family of tetraquarks

Researchers found four new particles made of the same four building blocks.

It’s quadruplets! Syracuse University researchers on the LHCb experiment confirmed the existence of a new four-quark particle and serendipitously discovered three of its siblings.

Quarks are the solid scaffolding inside composite particles like protons and neutrons. Normally quarks come in groups of two or three, but in 2014 LHCb researchers confirmed the existence of four-quark particles and, one year later, five-quark particles.

The particles in this new family were named based on their respective masses, denoted in mega-electronvolts: X(4140), X(4274), X(4500) and X(4700). Each particle contains two charm quarks and two strange quarks arranged in a unique way, making them the first four-quark particles composed entirely of heavy quarks. Researchers also measured each particle’s quantum numbers, which describe their subatomic properties. Theorists will use these new measurements to enhance their understanding of the formation of particles and the fundamental structures of matter.

“What we have discovered is a unique system,” says Tomasz Skwarnicki, a physics professor at Syracuse University. “We have four exotic particles of the same type; it’s the first time we have seen this and this discovery is already helping us distinguish between the theoretical models.”

Evidence of the lightest particle in this family of four and a hint of another were first seen by the CDF experiment at the US Department of Energy’s Fermi National Accelerator Lab in 2009. However, other experiments were unable to confirm this observation until 2012, when the CMS experiment at CERN reported seeing the same particle-like bumps with a much greater statistical certainty. Later, the D0 collaboration at Fermilab also reported another observation of this particle.

“It was a long road to get here,” says University of Iowa physicist Kai Yi, who works on both the CDF and CMS experiments. “This has been a collective effort by many complementary experiments. I’m very happy that LHCb has now reconfirmed this particle’s existence and measured its quantum numbers.”

The US contribution to the LHCb experiment is funded by the National Science Foundation.

LHCb researcher Thomas Britton performed this analysis as his PhD thesis at Syracuse University.

“When I first saw the structures jumping out of the data, little did I know this analysis would be such an aporetic saga,” Britton says. “We looked at every known particle and process to make sure these four structures couldn’t be explained by any pre-existing physics. It was like baking a six-dimensional cake with 98 ingredients and no recipe—just a picture of a cake.”

Even though the four new particles all contain the same quark composition, they each have a unique internal structure, mass and their own sets of quantum numbers. These characteristics are determined by the internal spatial configurations of the quarks.

“The quarks inside these particles behave like electrons inside atoms,” Skwarnicki says. “They can be ‘excited’ and jump into higher energy orbitals. The energy configuration of the quarks gives each particle its unique mass and identity.”

According to theoretical predictions, the quarks inside could be tightly bound (like three quarks packed inside a single proton) or loosely bound (like two atoms forming a molecule). By closely examining each particle’s quantum numbers, scientists were able to narrow down the possible structures.

“The molecular explanation does not fit with the data,” Skwarnicki says. “But I personally would not conclude that these are definitely tightly bound states of four quarks. It could be possible that these are not even particles. The result could show the complex interplays of known particle pairs flippantly changing their identities.”

Theorists are currently working on models to explain these new results—be it a family of four new particles or bizarre ripple effects from known particles. Either way, this study will help shape our understanding of the subatomic universe.

“The huge amount of data generated by the LHC is enabling a resurgence in searches for exotic particles and rare physical phenomena,” Britton says. “There’s so many possible things for us to find and I’m happy to be a part of it.”

### Peter Coles - In the Dark

The BrExit Threat to British Science

After a couple of days away dealing with some personal business I’ve now time to make a few comments about the ongoing repercussions following last week’s referendum vote to Leave the European Union.

First of all on the general situation. Legally speaking the referendum decision by itself changes nothing at all. Referendums have no constitutional status in the United Kingdom and are not legally binding. The Prime Minister David Cameron has declined to activate (the now famous) Article 50 of the Lisbon Treaty which would initiate a two-year negotiated withdrawal, preferring to leave this to whoever succeeds him following his resignation. None of the likely contenders for the unenviable position of next Prime Minister seems keen to pull the trigger very quickly either. The United Kingdom therefore remains a member of the European Union and there is no clear picture of when that might change.

The rest of the European Union obviously wants the UK to leave as soon as possible, not just because we’ve indicated that we want to, but because we have never been very committed or reliable partners. In the words of Jean-Claude Juncker: ‘It is not an amicable divorce, but it was not an intimate love affair anyway.’

I don’t blame the 27 remaining members for wanting us to get on with getting out, because uncertainty is bad for business. Two years is more than enough time for big European businesses to write British producers out of their supply chains and for international companies now based in the United Kingdom to relocate to continental Europe. The current gridlock at Westminster merely defers this inevitable exodus. In the meantime inward investment is falling as companies defer decisions on future plans, casting a planning blight over the UK economy.

My own view, however, is that the longer the UK waits before invoking Article 50 the greater the probability that it will never be invoked at all.  This is because the next PM – probably Boris Johnson – surely knows that he will simply not be able to deliver on any of the promises he has made.

For example, there will be no access to the single market post-BrExit without free movement of people. There won’t be £350 million per week extra for the NHS either, because our GDP is falling and we never sent £350 million anyway. All the possible deals will be so obviously far worse than the status quo that I don’t think Parliament will ever pass legislation to accept a situation that is so clearly against the national interest. I may be wrong, of course, but I think the likeliest scenario is that the referendum decision is kicked into the long grass for at least the duration of the current Parliament.

That doesn’t solve the issue of BrExit blight, however. Which brings me to British science in a possible post-BrExit era. It’s all very uncertain, of course, but it seems to me that as things stand, any deal that involves free movement within Europe would be unacceptable to the powerful UK anti-immigration lobby. This rules out a “Norway” type deal, among others, and almost certainly means there will be no access to any EU science funding schemes post-2020. Free movement is essential to the way most of these schemes operate anyway.

It has been guaranteed that funding commitments will be honoured until the end of Horizon 2020, but that assumes that holders of such grants don’t leave the UK taking the grants with them. I know of four cases of this happening already. They won’t come back even if we’re still in the European Union then.

Other probable outcomes are that:

1. the shrinking economy will cause the UK government to abandon its ring-fence on science funding, which will lead to cuts in domestic provision as well;
2. a steep decline in EU students (and associated income) will halt the expansion of UK science departments, and may cause some to shrink or even close;
3. non-UK EU scientists working in the UK will decide to leave anyway because the atmosphere of this country has already been poisoned by xenophobic rhetoric.

British science may “endure” after BrExit but it definitely won’t prosper. What is the least bad solution, if we cannot remain?

### CERN Bulletin

CERN Bulletin Issue No. 26-27/2016
Link to e-Bulletin Issue No. 26-27/2016. Link to all articles in this issue.

### Christian P. Robert - xi'an's og

Statistics & Computing [toc]

The latest [June] issue of Statistics & Computing is full of interesting Bayesian and Monte Carlo entries, some of which are even open access!

Filed under: Books, Statistics Tagged: academic journals, Bayesian computation, Monte Carlo Statistical Methods, Springer-Verlag, Statistics and Computing

### astrobites - astro-ph reader's digest

Probing the Galaxy with Dead Stars

Authors: P.-E. Tremblay, J. Cummings, J. S. Kalirai, B. T. Gaensicke, N. Gentile-Fusillo, R. Raddi

First author’s institution: Department of Physics, University of Warwick, UK

Status: accepted for publication in MNRAS

Our Sun is a main sequence star. This is the evolutionary phase where stars spend most of their lives, burning hydrogen into helium in their cores. When this hydrogen is exhausted, they evolve to the next evolutionary phase, in which they fuse hydrogen in a shell outside the core. Meanwhile, the core contracts until helium-burning temperatures are reached. When helium runs out in the core, hydrogen and helium fusion continue in shells around a hot core of carbon and oxygen. Most stars will never reach temperatures high enough to burn these two elements, ending their lives with this composition. The characteristics of these phases depend strongly on the initial mass and metallicity of the star. However, one thing is common to over 95% of them: they will eject their outer layers as a planetary nebula and end their lives as white dwarf stars.

The structure of a white dwarf is quite simple: a core usually composed of carbon and oxygen (which are the heaviest elements most stars can synthesize), a thin layer of helium, and a thin outer layer of hydrogen. Moreover, white dwarfs have a peculiar but well-defined mass-radius relation, due to the fact that matter is mostly degenerate in its core. All that makes them quite easy to model, allowing us to obtain physical parameters from observations, such as mass, temperature, and age, which are all correlated.

Combining the facts that most stars become white dwarfs and that they are easy to model, one could get the idea to study our Galaxy solely by modeling the characteristics of the white dwarf population. That’s exactly what the authors of today’s paper did, as others have done before. The authors selected two well-determined white dwarf mass distributions, one limited by distance and the other by magnitude, and compared them to simulated distributions with ingredients reflecting our knowledge of our Galaxy’s formation, evolution, and current structure.

The distance-limited sample contains white dwarfs up to 20 pc away, and it’s about 90% complete, meaning we have detected about 9 out of every 10 white dwarfs in this region. That’s an advantage because there should be no strong selection bias. However, the sample has only a little over a hundred objects, so it’s statistically poor. To compare their simulations with a larger sample as well, they also selected bright white dwarfs detected with the Sloan Digital Sky Survey (SDSS). This sample is much larger, over a thousand objects, but it suffers from selection biases from the survey criteria and sky coverage.

The main ingredients of their simulations are a star formation history (SFH), an initial mass function (IMF), an initial-to-final mass relation (IFMR), and a description of the vertical scale height of the Galactic disk. With only these four ingredients, one can cook up a nice mass distribution to compare with observations. The SFH describes how many stars were formed and when, the IMF gives the fraction of stars in each mass interval when formation happens, and the IFMR determines the masses and ages of the resulting white dwarfs. The structure of the Galactic disk, set by its scale height, describes how the formed white dwarfs are distributed around us, allowing us to estimate how likely we are to detect them.

The authors first simulated standard distributions with popular parameters from the literature: a constant SFH throughout 10 Gyr, which is the assumed age of the disk, a Salpeter IMF, a quadratic IFMR, and a variable scale height that increases with the star’s age, i.e. allowing old stars to be further from the plane of the disk than young stars. To compare with the 20 pc sample, they created stars in their simulation until they reached a significant number of detected stars within 20 pc. To compare with the SDSS sample, this was done until a fair number was reached in the SDSS-covered region and within the upper and lower magnitude limits of the observational sample. They then built the mass distributions from their simulations and overplotted them, without any fit, on the observed distributions (Figs. 1 and 2).
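To see how these ingredients combine, here is a minimal Monte Carlo sketch (my own illustration, not the authors’ code) that draws progenitor masses from a Salpeter IMF and maps them to white dwarf masses through a quadratic IFMR; the coefficients are placeholders, and the SFH and scale-height ingredients are left out for brevity.

```python
import random

def sample_salpeter(m_min=0.8, m_max=8.0, alpha=2.35):
    """Draw an initial mass from a Salpeter IMF, dN/dm ~ m^-alpha,
    by inverse transform sampling between m_min and m_max."""
    u = random.random()
    a, b = m_min ** (1 - alpha), m_max ** (1 - alpha)
    return (a + u * (b - a)) ** (1 / (1 - alpha))

def ifmr(m_init):
    """Illustrative quadratic initial-to-final mass relation
    (placeholder coefficients, masses in solar units)."""
    return 0.43 + 0.06 * m_init + 0.003 * m_init ** 2

random.seed(1)
masses = [ifmr(sample_salpeter()) for _ in range(10_000)]
mean = sum(masses) / len(masses)
# The steep IMF weights the population toward low-mass progenitors,
# so the resulting white dwarf masses cluster near the low end.
```

A steeper IMF (larger `alpha`) suppresses high-mass progenitors further, which is exactly the knob the authors turn to reduce the simulated fraction of higher-mass white dwarfs.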

Figure 1: Comparison between the observed (black) and simulated (filled blue) mass distributions for the sample limited to 20 pc. Objects with masses below 0.45 solar masses (shown in red) are neglected for the computation of the mean mass and dispersion because they are the result of binary evolution, which is not taken into account on the simulations. The overall shape of the distributions, and the mean mass and dispersion labeled on the panel agree remarkably well, considering no fit was done.

Figure 2: Observed (black) and simulated (filled blue) mass distributions for the white dwarfs in the SDSS sample. They are of types DA (hydrogen-dominated atmosphere) and DB (helium-dominated atmosphere). Binaries and magnetic white dwarfs were removed from the sample. Low-mass objects were neglected as mentioned on Fig. 1. A similar shape and agreeing mean mass and dispersion are obtained between simulation and observation, even without a fit.

The result is quite impressive: the overall shape, mean mass, and dispersion of their simulations agree remarkably well with the observations. This reflects the fact that, given the uncertainties in our observations, our models are good enough to describe them. However, not all is perfect: their simulations predicted a fraction of higher-mass white dwarfs larger than observed by a factor of about 1.5, making the simulated mean mass higher than the observed one in both cases.

The authors then went one step further: they tweaked their ingredients to see how that would affect the obtained distribution, and whether it could result in a better agreement with the observations. Their main result is that a steeper IMF could bring the fraction of higher-mass stars closer to the observed values. This would mean that the ever so popular Salpeter IMF may need a revision. The authors caution that, given the current uncertainties, we cannot rule out that the Salpeter function is correct and that what actually needs improvement is the IFMR. Changes to the SFH and to the description of the disk scale height have less prominent effects.

The roles of each of these ingredients and their influence on the white dwarf mass distribution should become much clearer in the near future, when Gaia will have obtained parallax measurements for most of these white dwarfs, allowing us to constrain their physical parameters much better, as the authors point out. In the meantime, the authors give us an idea of where we should put our attention with regards to modeling: mostly the IMF, closely followed by the IFMR. We have only a few more years to work on our models and improve our descriptions to try to explain what Gaia will reveal to us. Better get to work!

### John Baez - Azimuth

Large Countable Ordinals (Part 1)

I love the infinite.

It may not exist in the physical world, but we can set up rules to think about it in consistent ways, and then it’s a helpful concept. The reason is that infinity is often easier to think about than very large finite numbers.

Finding rules to work with the infinite is one of the great triumphs of mathematics. Cantor’s realization that there are different sizes of infinity is truly wondrous—and by now, it’s part of the everyday bread and butter of mathematics.

Trying to create a notation for these different infinities is very challenging. It’s not a fair challenge, because there are more infinities than expressions we can write down in any given alphabet! But if we seek a notation for countable ordinals, the challenge becomes more fair.

It’s still incredibly frustrating. No matter what notation we use it fizzles out too soon… making us wish we’d invented a more general notation. But this process of ‘fizzling out’ is fascinating to me. There’s something profound about it. So, I would like to tell you about this.

Today I’ll start with a warmup. Cantor invented a notation for ordinals that works great for ordinals less than a certain ordinal called ε0. Next time I’ll go further, and bring in the Veblen hierarchy! The single-variable Veblen functions let us describe all ordinals below a big guy called the ‘Feferman–Schütte ordinal’.

If I’m still feeling energetic after that, I will write another post that introduces the multi-variable Veblen functions. These get us all the ordinals below the ‘small Veblen ordinal’. And if I’m feeling ultra-energetic, I’ll talk about Veblen functions with infinitely many variables. These let us describe ordinals below the ‘large Veblen ordinal’.

But all this is really just the beginning of a longer story. That’s how infinity works: the story never ends!

To describe countable ordinals beyond the large Veblen ordinal, most people switch to an entirely different set of ideas, called ‘ordinal collapsing functions’. I doubt I’ll have the energy to tackle these. Maybe I will after a year-long break. My interest in the infinite doesn’t seem to be waning. It’s a decadent hobby, but I figure: some middle-aged men buy fancy red sports cars and drive them really fast; studying notions of infinity is even more intense, but it’s environmentally friendly.

I can even imagine writing a book about the infinite. Maybe these posts will become part of that book. But one step at a time…

### Cardinals versus ordinals

Cantor invented two different kinds of infinities: cardinals and ordinals. Cardinals say how big sets are. Two sets can be put into 1-1 correspondence iff they have the same number of elements—where this kind of ‘number’ is a cardinal. You may have heard about cardinals like aleph-nought (the number of integers), 2 to the power aleph-nought (the number of real numbers), and so on. You may have even heard rumors of much bigger cardinals, like ‘inaccessible cardinals’ or ‘super-huge cardinals’. All this is tremendously fun, and I recommend starting here:

• Frank R. Drake, Set Theory, an Introduction to Large Cardinals, North-Holland, 1974.

There are other books that go much further, but as a beginner, I found this to be the most fun.

But I don’t want to talk about cardinals! I want to talk about ordinals.

Ordinals say how big ‘well-ordered’ sets are. A set is well-ordered if it comes with a relation ≤ obeying the usual rules:

Transitivity: if x ≤ y and y ≤ z then x ≤ z

Reflexivity: x ≤ x

Antisymmetry: if x ≤ y and y ≤ x then x = y

and one more rule: every nonempty subset has a smallest element!

For example, the empty set

$\{\}$

is well-ordered in a trivial sort of way, and the corresponding ordinal is called

$0$

Similarly, any set with just one element, like this:

$\{0\}$

is well-ordered in a trivial sort of way, and the corresponding ordinal is called

$1$

Similarly, any set with two elements, like this:

$\{0,1\}$

becomes well-ordered as soon as we decree which element is bigger; the obvious choice is to say 0 < 1. The corresponding ordinal is called

$2$

Similarly, any set with three elements, like this:

$\{0,1,2\}$

becomes well-ordered as soon as we linearly order it; the obvious choice here is to say 0 < 1 < 2. The corresponding ordinal is called

$3$

Perhaps you’re getting the pattern — you’ve probably seen these particular ordinals before, maybe sometime in grade school. They’re called finite ordinals, or "natural numbers".

But there’s a cute trick they probably didn’t teach you then: we can define each ordinal to be the set of all ordinals less than it:

$0 = \{\}$ (since no ordinal is less than 0)
$1 = \{0\}$ (since only 0 is less than 1)
$2 = \{0,1\}$ (since 0 and 1 are less than 2)
$3 = \{0,1,2\}$ (since 0, 1 and 2 are less than 3)

and so on. It’s nice because now each ordinal is a well-ordered set of the size that ordinal stands for. And, we can define one ordinal to be "less than or equal" to another precisely when it’s a subset of the other.
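This ‘cute trick’ is concrete enough to compute with. Here is a tiny sketch (my own illustration, not from the post) of the finite von Neumann ordinals in Python, with "less than or equal" as subset inclusion:

```python
def ordinal(n):
    """The von Neumann ordinal n = {0, 1, ..., n-1} as a frozenset."""
    o = frozenset()
    for _ in range(n):
        o = o | frozenset([o])  # successor step: k + 1 = k ∪ {k}
    return o

# Each ordinal is exactly the set of all smaller ordinals...
assert ordinal(3) == frozenset([ordinal(0), ordinal(1), ordinal(2)])
# ...and "less than or equal" is just subset inclusion of frozensets.
assert ordinal(2) <= ordinal(3)
# Each ordinal is a well-ordered set of the size it stands for.
assert len(ordinal(5)) == 5
```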

### Infinite ordinals

What comes after all the finite ordinals? Well, the set of all finite ordinals is itself well-ordered:

$\{0,1,2,3,\dots \}$

So, there’s an ordinal corresponding to this — and it’s the first infinite ordinal. It’s usually called $\omega,$ pronounced ‘omega’. Using the cute trick I mentioned, we can actually define

$\omega = \{0,1,2,3,\dots\}$

What comes after this? Well, it turns out there’s a well-ordered set

$\{0,1,2,3,\dots,\omega\}$

containing the finite ordinals together with $\omega,$ with the obvious notion of "less than": $\omega$ is bigger than the rest. Corresponding to this set there’s an ordinal called

$\omega+1$

As usual, we can simply define

$\omega+1 = \{0,1,2,3,\dots,\omega\}$

At this point you could be confused if you know about cardinals, so let me throw in a word of reassurance. The sets $\omega$ and $\omega+1$ have the same cardinality: they are both countable. In other words, you can find a 1-1 and onto function between these sets. But $\omega$ and $\omega+1$ are different as ordinals, since you can’t find a 1-1 and onto function between them that preserves the ordering. This is easy to see, since $\omega+1$ has a biggest element while $\omega$ does not.
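To make that reassurance concrete, here is a toy sketch (mine, not from the post) of an explicit bijection between $\omega+1$ and $\omega$. It must send the top element $\omega$ to some finite number, so it cannot preserve the ordering:

```python
OMEGA = "omega"  # a stand-in symbol for the biggest element of omega+1

def f(x):
    """A bijection from omega+1 = {0, 1, 2, ..., omega} to omega:
    send omega to 0 and shift every natural number up by one."""
    return 0 if x == OMEGA else x + 1

# f is one-to-one and onto the natural numbers...
assert sorted(f(x) for x in [OMEGA, 0, 1, 2, 3]) == [0, 1, 2, 3, 4]
# ...but it reverses order somewhere: omega > 0, yet f(omega) < f(0).
assert f(OMEGA) < f(0)
```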

Indeed, all the ordinals in this series of posts will be countable! So for the infinite ones, you can imagine that all I’m doing is taking your favorite countable set and well-ordering it in ever more sneaky ways.

Okay, so we got to $\omega + 1.$ What comes next? Well, not surprisingly, it’s

$\omega+2 = \{0,1,2,3,\dots,\omega,\omega+1\}$

Then comes

$\omega+3, \omega+4, \omega+5,\dots$

and so on. You get the idea.

I haven’t really defined ordinal addition in general. I’m trying to keep things fun, not like a textbook. But you can read about it here:

• Wikipedia, Ordinal arithmetic: addition.

The main surprise is that ordinal addition is not commutative. We’ve seen that $\omega + 1 \ne \omega,$ since

$\omega + 1 = \{0, 1, 2, 3, \dots, \omega \}$

is an infinite list of things… and then one more thing that comes after all those! But $1 + \omega = \omega,$ because one thing followed by a list of infinitely many more is just a list of infinitely many things.

With ordinals, it’s not just about quantity: the order matters!

### ω+ω and beyond

Okay, so we’ve seen these ordinals:

$1, 2, 3, \dots, \omega, \omega + 1, \omega + 2, \omega+3, \dots$

What next?

Well, the ordinal after all these is called $\omega+\omega.$ People often call it "omega times 2" or $\omega 2$ for short. So,

$\omega 2 = \{0,1,2,3,\dots,\omega,\omega+1,\omega+2,\omega+3,\dots\}$

It would be fun to have a book with $\omega$ pages, each page half as thick as the previous page. You can tell a nice long story with an $\omega$-sized book. I think you can imagine this. And if you put one such book next to another, that’s a nice picture of $\omega 2.$

It’s worth noting that $\omega 2$ is not the same as $2 \omega.$ We have

$\omega 2 = \omega + \omega$

while

$2 \omega = 2 + 2 + 2 + \cdots$

where we add $\omega$ of these terms. But

$2 + 2 + 2 + \cdots = (1 + 1) + (1 + 1) + (1 + 1) + \cdots = \omega$

so

$2 \omega = \omega$

This is not a proof, because I haven’t given you the official definition of how to add ordinals. You can find the definition here:

• Wikipedia, Ordinal arithmetic: multiplication.

Using this definition you can prove that what I’m saying is true. Nonetheless, I hope you see why what I’m saying might make sense. Like ordinal addition, ordinal multiplication is not commutative! If you don’t like this, you should study cardinals instead.

What next? Well, then comes

$\omega 2 + 1, \omega 2 + 2,\dots$

and so on. But you probably have the hang of this already, so we can skip right ahead to $\omega 3.$

In fact, you’re probably ready to skip right ahead to $\omega 4,$ and $\omega 5,$ and so on.

In fact, I bet now you’re ready to skip all the way to "omega times omega", or $\omega^2$ for short:

$\omega^2 = \{0,1,2,\dots,\omega,\omega+1,\omega+2,\dots,\omega 2,\omega 2+1,\omega 2+2,\dots\}$

Suppose you had an encyclopedia with $\omega$ volumes, each one being a book with $\omega$ pages. If each book is half as thick as the one before, you’ll have $\omega^2$ pages — and it can still fit in one bookshelf! Here’s the idea:

What comes next? Well, we have

$\omega^2+1, \omega^2+2, \dots$

and so on, and after all these come

$\omega^2+\omega, \omega^2+\omega+1, \omega^2+\omega+2, \dots$

and so on — and eventually

$\omega^2 + \omega^2 = \omega^2 2$

and then a bunch more, and then

$\omega^2 3$

and then a bunch more, and then

$\omega^2 4$

and then a bunch more, and more, and eventually

$\omega^2 \omega = \omega^3$

You can probably imagine a bookcase containing $\omega$ encyclopedias, each with $\omega$ volumes, each with $\omega$ pages, for a total of $\omega^3$ pages. That’s $\omega^3.$

### ωω

I’ve been skipping more and more steps to keep you from getting bored. I know you have plenty to do and can’t spend an infinite amount of time reading this, even if the subject is infinity.

So if you don’t mind me just mentioning some of the high points, there are guys like $\omega^4$ and $\omega^5$ and so on, and after all these comes

$\omega^\omega$

Let’s try to imagine this! First, imagine a book with $\omega$ pages. Then imagine an encyclopedia of books like this, with $\omega$ volumes. Then imagine a bookcase containing $\omega$ encyclopedias like this. Then imagine a room containing $\omega$ bookcases like this. Then imagine a library floor with $\omega$ rooms like this. Then imagine a library with $\omega$ floors like this. Then imagine a city with $\omega$ libraries like this. And so on, ad infinitum.

You have to be a bit careful here, or you’ll be imagining an uncountable number of pages. To name a particular page in this universe, you have to say something like this:

the 23rd page of the 107th book of the 20th encyclopedia in the 7th bookcase in the 0th room on the 1000th floor of the 973rd library in the 6th city on the 0th continent on the 0th planet in the 0th solar system in the…

But it’s crucial that after some finite point you keep saying “the 0th”. Without that restriction, there would be uncountably many pages! This is just one of the rules for how ordinal exponentiation works. For the details, read:

• Wikipedia, Ordinal arithmetic: exponentiation.

As they say,

But for infinite exponents, the definition may not be obvious.
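One way to see why the finite-support rule matters is to code the page addresses directly. Here is a small sketch (my own illustration, with made-up address values): an address lists only its finitely many nonzero coordinates, and comparison runs from the most significant coordinate downward.

```python
def lt(a, b):
    """Is address a before address b?  Addresses are dicts
    {position: index} listing only nonzero coordinates (finite support)."""
    for pos in sorted(set(a) | set(b), reverse=True):  # most significant first
        x, y = a.get(pos, 0), b.get(pos, 0)
        if x != y:
            return x < y
    return False

# position 0 = page, 1 = book, 2 = encyclopedia, 3 = bookcase, ...
page       = {0: 23, 1: 107, 2: 20, 3: 7}
same_book  = {0: 99, 1: 107, 2: 20, 3: 7}
next_level = {4: 1}   # the first address using a higher coordinate
assert lt(page, same_book)        # an earlier page in the same book
assert lt(same_book, next_level)  # everything below a new coordinate comes first
```

Every such address is a finite object, so there are only countably many of them; dropping the finite-support rule would allow arbitrary infinite sequences, of which there are uncountably many.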

Here’s a picture of $\omega^\omega,$ taken from David Madore’s wonderful interactive website:

On his website, if you click on any of the labels for an initial portion of an ordinal, like $\omega, \omega^2, \omega^3$ or $\omega^4$ here, the picture will expand to show that portion!

### Ordinals up to ε0

Okay, so we’ve reached $\omega^\omega.$ Now what?

Well, then comes $\omega^\omega + 1,$ and so on, but I’m sure that’s boring by now. And then come ordinals like

$\omega^\omega 2,\dots, \omega^\omega 3, \dots, \omega^\omega 4, \dots$

and eventually

$\omega^\omega \omega = \omega^{\omega + 1}$

Then eventually come ordinals like

$\omega^\omega \omega^2 , \dots, \omega^\omega \omega^3, \dots, \omega^\omega \omega^4, \dots$

and so on, leading up to

$\omega^\omega \omega^\omega = \omega^{\omega + \omega} = \omega^{\omega 2}$

This actually reminds me of something that happened driving across South Dakota one summer with a friend of mine. We were in college, so we had the summer off, so we drove across the country. We drove across South Dakota all the way from the eastern border to the west on Interstate 90.

This state is huge — about 600 kilometers across, and most of it is really flat, so the drive was really boring. We kept seeing signs for a bunch of tourist attractions on the western edge of the state, like the Badlands and Mt. Rushmore — a mountain that they carved to look like faces of presidents, just to give people some reason to keep driving.

Anyway, I’ll tell you the rest of the story later — I see some more ordinals coming up:

$\omega^{\omega 3},\dots \omega^{\omega 4},\dots \omega^{\omega 5},\dots$

We’re really whizzing along now just to keep from getting bored — just like my friend and I did in South Dakota. You might fondly imagine that we had fun trading stories and jokes, like they do in road movies. But we were driving all the way from Princeton to my friend Chip’s cabin in California. By the time we got to South Dakota, we were all out of stories and jokes.

Hey, look! It’s

$\omega^{\omega \omega}= \omega^{\omega^2}$

That was cool. Then comes

$\omega^{\omega^3}, \dots \omega^{\omega^4}, \dots \omega^{\omega^5}, \dots$

and so on.

Anyway, back to my story. For the first half of our trip across the state, we kept seeing signs for something called the South Dakota Tractor Museum.

Oh, wait, here’s an interesting ordinal:

$\omega^{\omega^\omega}$

Let’s stop and take a look:

That was cool. Okay, let’s keep driving. Here comes

$\omega^{\omega^\omega} + 1, \omega^{\omega^\omega} + 2, \dots$

and then

$\omega^{\omega^\omega} + \omega, \dots, \omega^{\omega^\omega} + \omega 2, \dots, \omega^{\omega^\omega} + \omega 3, \dots$

and then

$\omega^{\omega^\omega} + \omega^2, \dots, \omega^{\omega^\omega} + \omega^3, \dots$

and eventually

$\omega^{\omega^\omega} + \omega^\omega$

and eventually

$\omega^{\omega^\omega} + \omega^{\omega^\omega} = \omega^{\omega^\omega} 2$

and then

$\omega^{\omega^\omega} 3, \dots, \omega^{\omega^\omega} 4, \dots, \omega^{\omega^\omega} 5, \dots$

and eventually

$\omega^{\omega^\omega} \omega = \omega^{\omega^\omega + 1}$

After a while we reach

$\omega^{\omega^\omega + 2}, \dots, \omega^{\omega^\omega + 3}, \dots, \omega^{\omega^\omega + 4}, \dots$

and then

$\omega^{\omega^\omega + \omega}, \dots, \omega^{\omega^\omega + \omega 2}, \dots, \omega^{\omega^\omega + \omega 3}, \dots$

and then

$\omega^{\omega^\omega + \omega^2}, \dots, \omega^{\omega^\omega + \omega^3}, \dots, \omega^{\omega^\omega + \omega^4}, \dots$

and then

$\omega^{\omega^\omega + \omega^\omega} = \omega^{\omega^\omega 2}$

and then

$\omega^{\omega^\omega 3}, \dots, \omega^{\omega^\omega 4} , \dots$

and then

$\omega^{\omega^\omega \omega} = \omega^{\omega^{\omega + 1}}$

and eventually

$\omega^{\omega^{\omega + 2}}, \dots, \omega^{\omega^{\omega + 3}}, \dots, \omega^{\omega^{\omega + 4}}, \dots$

This is pretty boring; we’re already going infinitely fast, but we’re still just picking up speed, and it’ll take a while before we reach something interesting.

Anyway, we started getting really curious about this South Dakota Tractor Museum — it sounded sort of funny. It took 250 kilometers of driving before we passed it. We wouldn’t normally care about a tractor museum, but there was really nothing else to think about while we were driving. The only things to see were fields of grain, and these signs, which kept building up the suspense, saying things like

ONLY 100 MILES TO THE SOUTH DAKOTA TRACTOR MUSEUM!

We’re zipping along really fast now:

$\omega^{\omega^{\omega^\omega}}, \dots, \omega^{\omega^{\omega^{\omega^\omega}}},\dots , \omega^{\omega^{\omega^{\omega^{\omega^{\omega}}}}},\dots$

What comes after all these?

At this point we need to stop for gas. Our notation for ordinals just ran out!

The ordinals don’t stop; it’s just our notation that fizzled out. The set of all ordinals listed up to now — including all the ones we zipped past — is a well-ordered set called

$\epsilon_0$

or "epsilon-nought". This has the amazing property that

$\epsilon_0 = \omega^{\epsilon_0}$

And it’s the smallest ordinal with this property! It looks like this:

It’s an amazing fact that every countable ordinal is isomorphic, as a well-ordered set, to some subset of the real line. David Madore took advantage of this to make his pictures.

### Cantor normal form

I’ll tell you the rest of my road story later. For now let me conclude with a bit of math.

There’s a nice notation for all ordinals less than $\epsilon_0,$ called ‘Cantor normal form’. We’ve been seeing lots of examples. Here is a typical ordinal in Cantor normal form:

$\omega^{\omega^{\omega^{\omega+\omega+1}}} \; + \; \omega^{\omega^\omega+\omega^\omega} \; + \; \omega^\omega \;+\; \omega + \omega + 1 + 1 + 1$

The idea is that you write it out using just + and exponentials and 1 and $\omega.$

Here is the theorem that justifies Cantor normal form:

Theorem. Every ordinal $\alpha$ can be uniquely written as

$\alpha = \omega^{\beta_1} c_1 + \omega^{\beta_2}c_2 + \cdots + \omega^{\beta_k}c_k$

where $k$ is a natural number, $c_1, c_2, \ldots, c_k$ are positive integers, and $\beta_1 > \beta_2 > \cdots > \beta_k \geq 0$ are ordinals.

It’s like writing ordinals in base $\omega.$

Note that every ordinal can be written this way! So why did I say that Cantor normal form is nice notation for ordinals less than $\epsilon_0$? Here’s the problem: the Cantor normal form of $\epsilon_0$ is

$\epsilon_0 = \omega^{\epsilon_0}$

So, when we hit $\epsilon_0,$ the exponents $\beta_1 ,\beta_2, \dots, \beta_k$ can be as big as the ordinal $\alpha$ we’re trying to describe! So, while the Cantor normal form still exists for ordinals $\geq \epsilon_0,$ it doesn’t give a good notation for them unless we already have some notation for ordinals this big!

This is what I mean by a notation ‘fizzling out’. We’ll keep seeing this problem in the posts to come.

But for an ordinal $\alpha$ less than $\epsilon_0,$ something nice happens. In this case, when we write

$\alpha = \omega^{\beta_1} c_1 + \omega^{\beta_2}c_2 + \cdots + \omega^{\beta_k}c_k$

all the exponents $\beta_1, \beta_2, \dots, \beta_k$ are less than $\alpha.$ So we can go ahead and write them in Cantor normal form, and so on… and because ordinals are well-ordered, this process ends after finitely many steps.

So, Cantor normal form gives a nice way to write any ordinal less than $\epsilon_0$ using finitely many symbols! If we abbreviate $\omega^0$ as $1,$ and write multiplication by positive integers in terms of addition, we get expressions like this:

$\omega^{\omega^{\omega^{\omega^{1 + 1} +\omega+1}}} \; + \; \omega^{\omega^\omega+\omega^\omega} \; + \; \omega^{\omega+1+1} \;+\; \omega + 1$

They look like trees. Even better, you can write a computer program that does ordinal arithmetic for ordinals of this form: you can add, multiply, and exponentiate them, and tell when one is less than another.
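Here is a minimal sketch of such a program (my own, and covering only comparison and addition; multiplication and exponentiation are left out). An ordinal below $\epsilon_0$ is a tuple of (exponent, coefficient) pairs with strictly decreasing exponents, and each exponent is itself an ordinal of the same form, so every value is a finite tree:

```python
# Cantor normal form below epsilon_0: a tuple of (exponent, coefficient)
# pairs with strictly decreasing exponents, where each exponent is itself
# such a tuple.  The empty tuple () is the ordinal 0.

def cnf_cmp(a, b):
    """Compare two ordinals in Cantor normal form: -1, 0, or 1."""
    for (ea, ca), (eb, cb) in zip(a, b):
        c = cnf_cmp(ea, eb)       # compare leading exponents first
        if c != 0:
            return c
        if ca != cb:              # then their coefficients
            return -1 if ca < cb else 1
    return (len(a) > len(b)) - (len(a) < len(b))

def cnf_add(a, b):
    """Ordinal addition: terms of a with exponent below b's leading
    exponent are absorbed, just as 1 + omega = omega."""
    if not b:
        return a
    e, c = b[0]
    keep = [t for t in a if cnf_cmp(t[0], e) >= 0]
    if keep and cnf_cmp(keep[-1][0], e) == 0:
        # equal leading exponents: coefficients add
        return tuple(keep[:-1]) + ((e, keep[-1][1] + c),) + b[1:]
    return tuple(keep) + b

ZERO  = ()
ONE   = ((ZERO, 1),)   # omega^0 * 1
OMEGA = ((ONE, 1),)    # omega^1 * 1

# Addition is not commutative: 1 + omega = omega, but omega + 1 > omega.
assert cnf_add(ONE, OMEGA) == OMEGA
assert cnf_cmp(cnf_add(OMEGA, ONE), OMEGA) == 1
```

Since comparison recurses into strictly smaller trees, it always terminates — the same well-orderedness argument that makes Cantor normal form work in the first place.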

So, there’s really no reason to be scared of $\epsilon_0.$ Remember, each ordinal is just the set of all smaller ordinals. So you can think of $\epsilon_0$ as the set of tree-shaped expressions like the one above, with a particular rule for saying when one is less than another. It’s a perfectly reasonable entity. For some real excitement, we’ll need to move on to larger ordinals. We’ll do that next time.

For more, see:

• Wikipedia, Cantor normal form.

## June 28, 2016

### Emily Lakdawalla - The Planetary Society Blog

Big booster blasts Utah hillside, and NASA discusses Journey to Mars
NASA and Orbital ATK successfully completed a qualification motor firing of a five-segment solid rocket booster that will fly on the Space Launch System in 2018.

### Symmetrybreaking - Fermilab/SLAC

Preparing for their magnetic moment

Scientists are using a plastic robot and hair-thin pieces of metal to ready a magnet that will hunt for new physics.

Three summers ago, a team of scientists and engineers on the Muon g-2 experiment moved a 52-foot-wide circular magnet 3200 miles over land and sea. It traveled in one piece without twisting more than a couple of millimeters, lest the fragile coils inside irreparably break. It was an astonishing feat that took years to plan and immense skill to execute.

As it turns out, that was the easy part.

The hard part—creating a magnetic field so precise that even subatomic particles see it as perfectly smooth—has been under way for seven months. It’s a labor-intensive process that has inspired scientists to create clever, often low-tech solutions to unique problems, working from a road map written 30 years ago as they drive forward into the unknown.

The goal of Muon g-2 is to follow up on a similar experiment conducted at the US Department of Energy’s Brookhaven National Laboratory in New York in the 1990s. Scientists there built an extraordinary machine that generated a near-perfect magnetic field into which they fired a beam of particles called muons. The magnetic ring serves as a racetrack for the muons, and they zoom around it for as long as they exist—usually about 64 millionths of a second.

That’s a blink of an eye, but it’s enough time to measure a particular property: the precession frequency of the muons as they hustle around the magnetic field. And when Brookhaven scientists took those measurements, they found something different from what the Standard Model, our picture of the universe, predicted. They didn’t quite capture enough data to claim a definitive discovery, but the hints were tantalizing.

Now, 20 years later, some of those same scientists—and dozens of others, from 34 institutions around the world—are conducting a similar experiment with the same magnet, but fed by a more powerful beam of muons at the US Department of Energy’s Fermi National Accelerator Laboratory in Illinois. Moving that magnet from New York caused quite a stir among the science-interested public, but that’s nothing compared with what a discovery from the Muon g-2 experiment would cause.

“We’re trying to determine if the muon really is behaving differently than expected,” says Dave Hertzog of the University of Washington, one of the spokespeople of the Muon g-2 experiment. “And, if so, that would suggest either new particles popping in and out of the vacuum, or new subatomic forces at work.  More likely, it might just be something no one has thought of yet.  In any case, it’s all  very exciting.”

#### Shimming to reduce shimmy

To start making these measurements, the magnetic field needs to be the same all the way around the ring so that, wherever the muons are in the circle, they will see the same pathway. That’s where Brendan Kiburg of Fermilab and a group of a dozen scientists, post-docs and students come in. For the past six months, they have been “shimming” the magnetic ring, shaping it to an almost inconceivably exact level.

“The primary goal of shimming is to make the magnetic field as uniform as possible,” Kiburg says. “The muons act like spinning tops, precessing at a rate proportional to the magnetic field. If a section of the field is a little higher or a little lower, the muon sees that, and will go faster or slower.”

Since the idea is to measure the precession rate to an extremely precise degree, the team needs to shape the magnetic field to a similar degree of uniformity. They want it to vary by no more than ten parts in a billion per centimeter. To put that in perspective, that’s like wanting a variation of no more than one second in nearly 32 years, or one sheet in a roll of toilet paper stretching from New York to London.

How do they do this? First, they need to measure the field they have. With a powerful electromagnet that will affect any metal object inside it, that’s pretty tricky. The solution is a marriage of high-tech and low-tech: a cart made of sturdy plastic and quartz, moved by a pulley attached to a motor and continuously tracked by a laser. On this cart are probes filled with petroleum jelly, with sensors measuring the rate at which the jelly’s protons spin in the magnetic field.

The laser can record the position of the cart to 25 microns, half the width of a human hair. Other sensors measure, to the micron, how far the top and bottom of the cart are from the magnet.

“The cart moves through the field as it is pulled around the ring,” Kiburg says. “It takes between two and two-and-a-half hours to go around the ring. There are more than 1500 locations around the path, and it stops every three centimeters for a short moment while the field is precisely measured in each location. We then stitch those measurements into a full map of the magnetic field.”

Erik Swanson of the University of Washington is the run coordinator for this effort, meaning he directs the team as they measure the field and perform the manually intensive shimming. He also designed the new magnetic resonance probes that measure the field, upgrading them from the technology used at Brookhaven.

“They’re functionally the same,” he says of the probes, “but the Brookhaven experiment started in the 1990s, and the old probes were designed before that. Any electronics that old, there’s the potential that they will stop working.”

Swanson says that the accuracy to which the team has had to position the magnet’s iron pieces to achieve the desired magnetic field surprised even him. When scientists first turned the magnet on in October, the field, measured at different places around the ring, varied by as much as 1400 parts per million. That may seem smooth, but to a tiny muon it looks like a mountain range of obstacles. In order to even it out, the Muon g-2 team makes hundreds of minuscule adjustments by hand.


#### Physical physics

Stationed around the ring are about 1000 knobs that control the ways the field could become non-uniform. But when that isn’t enough, the field can be shaped by taking parts of the magnet apart and inserting extremely small pieces of steel called shims, changing the field by thousandths of an inch.

There are 12 sections of the magnet, and it takes an entire day to adjust just one of those sections.

This process relies on simulations, calibrations and iterations, and with each cycle the team inches forward toward their goal, guided by mathematical predictions. Once they’re done with the process of carefully inserting these shims, some as thin as 12.5 microns, they reassemble the magnet and measure the field again, starting the process over, refining and learning as they go.

“It’s fascinating to me how hard such a simple-seeming problem can be,” says Matthias Smith of the University of Washington, one of the students who helped design the plastic measuring robot. “We’re making very minute adjustments because this is a puzzle that can go together in multiple ways. It’s very complex.”

His colleague Rachel Osofsky, also of the University of Washington, agrees. Osofsky has helped put in more than 800 shims around the magnet, and says she enjoys the hands-on and collaborative nature of the work.

“When I first came aboard, I knew I’d be spending time working on the magnet, but I didn’t know what that meant,” she says. “You get your hands dirty, really dirty, and then measure the field to see what you did. Students later will read the reports we’re writing now and refer to them. It’s exciting.”

Similarly, the Muon g-2 team is constantly consulting the work of their predecessors who conducted the Brookhaven experiment, making improvements where they can. (One upgrade that may not be obvious is the very building that the experiment is housed in, which keeps the temperature steadier than the one used at Brookhaven and reduces shape changes in the magnet itself.)

Kiburg says the Muon g-2 team should be comfortable with the shape of the magnetic field sometime this summer. With the experiment’s beamline under construction and the detectors to be installed, the collaboration should be ready to start measuring particles by next summer. Swanson says that while the effort has been intense, it has also been inspiring.

“It’s a big challenge to figure out how to do all this right,” he says. “But if you know scientists, when a challenge seems almost impossible, that’s the one we all go for.”

### astrobites - astro-ph reader's digest

Why you should be interested in dust
Authors: Nicholas P. Ballering, Kate Y. L. Su, George H. Rieke, András Gáspár

First Author’s Institution: Steward Observatory, University of Arizona, USA

Paper Status: Accepted to ApJ.

Before you roll your eyes – I don’t mean any old dust, I mean space dust. In particular, we’re interested today in the giant debris disks that reside around nearby stars – huge clouds of gas, dust, pebbles and even asteroids, a little like our own asteroid and Kuiper belts. As you may know, when stars first form they are surrounded by protoplanetary disks. As they age, the dust normally disappears – it is blown out of the system by stellar winds, or spirals onto the star’s surface via the Poynting-Robertson effect. But occasionally, a debris disk is maintained well into the ‘teenage’ period of the star’s life (by which I mean a few tens of megayears).

These debris disks are interesting. The dust is generally assumed to be continuously regenerated by planetesimals, suggesting that the building blocks of planet formation are present in the system – a tantalising hint that there may be planets hidden within these disks. We can learn about these planets by looking at the dust – be it by observing the complex warps, gaps and other morphologies in an attempt to dynamically infer the mass of a planet, or by comparing the composition of the dust and the planet to understand planet formation.

But how much do we actually know about these debris disks? Since the disk around nearby star β-Pictoris was first imaged back in 1984, you might assume that astrophysicists would have it pretty much figured out by now . . . sadly, you’d be wrong. In fact, models of various nearby debris disks have consistently struggled to fit both the thermal emission (the dust acting as black body emitters of far-infrared and radio wavelength radiation) and the scattered light emission (the dust acting a bit like a mirror and reflecting visible and near-infrared radiation from the star) at the same time.

One of the dust images used in this study, taken by the Hubble Space Telescope. The main dust emission is highlighted by the dotted line running top to bottom in this image. The color bar represents the intensity of emission at each point in the sky. (fig 2, bottom panel, in the paper)

And it’s a pretty tricky game to come up with the right model, since it involves understanding dust that covers not-very-many pixels of your telescope CCD, with a model that can be incredibly complex. A perfect model would consider the dust distribution, dust scale height, composition, grain size, grain shape, grain porosity, dust/gas ratio, etc. – there’s a lot going on! It is, of course, difficult to understand all of this diverse physics with the limited data we have.

Today’s paper focuses on understanding the dust composition of the β-Pictoris debris disk. β-Pictoris is an interesting star, as it hosts a simply huge debris disk, one of the first ones to be discovered. It’s also really nearby – just over 60 light years away – and the system even boasts one of the first ever directly imaged planets, namely β-Pictoris b. So there’s a fair bit of β-Pictoris data floating around. In fact, the authors today use five separate images spanning the range from optical to radio wavelengths, and including two images taken by the Hubble Space Telescope and one by ALMA. Some of this data is shown in figure 1.

In an attempt to reduce the complexity of the problem, the authors fit the spatial distribution of the debris disk first, and then the composition entirely separately. This works because at shorter wavelengths the resolution is fairly good. After lots of careful modelling, the authors come up with the first self-consistent model that fits both the thermal and the scattered light emission, which is a promising sign for their model. Turning their focus to composition, they conclude that the dust is probably a mixture of silicates and organic materials, with not much in the way of water ice – this modelling is shown in figure 2. This seems to fit well with previous studies showing that the β-Pictoris system hosts crystalline material in its debris disk. The composition of the debris disk appears to agree with work by other groups on a similar debris-disk-hosting star, namely HR4796A. This is another hint that the authors are probably on the right track.

The chi-squared values as the fractions of (left-to-right) silicate, ice, organic compounds and vacuum in the dust mixture are varied in the model. The lower the chi-squared, the better the model. The faint lines show the variation of chi-squared as each other parameter is held fixed – showing that this result does not depend strongly on these other parameters, and that the dust is probably mostly silicate & organic materials (fig 2, top panel, in the paper)

The authors point out that their method can be used to characterise other nearby debris disk systems. Over the next few years, with the advent of SPHERE and ALMA, more discoveries of and insights into debris disks are to be expected. The more we understand about debris disks and the space dust they contain, the more we can learn about planetary systems and how they form, and this paper certainly helps towards achieving that goal!

## June 27, 2016

### ZapperZ - Physics and Physicists

Landau's Nobel Prize in Physics
This is a fascinating article. It describes, using the Nobel prize archives, the process that led to Lev Landau's nomination and winning the Nobel Prize in physics. But more than that, it also describes the behind-the-scenes nominating process for the Nobel Prize.

I'm not sure if the process has changed significantly since then, but I would imagine that many of the mechanisms leading up to a nomination and winning the prize are similar.

Zz.

### CERN Bulletin

LHC Report: highs and wet lows

Summertime, and the livin’ is easy… not so for the LHC, which is just entering four weeks of full-on luminosity production.

In the two weeks that followed the first technical stop (7-9 June), the LHC once again demonstrated outstanding performance. Thanks to the excellent availability of all systems, peaking at 93% in week 24, it was possible to chain physics fill after physics fill, with 60% of the time spent in collisions.

We have now surpassed the total integrated luminosity delivered in 2015 (4.2 fb⁻¹). The integrated luminosity for 2016 now exceeds 6 fb⁻¹ for each of the two high-luminosity experiments, ATLAS and CMS. Long fills, exceeding 20 hours, are now part of regular operation, with some producing more than 0.5 fb⁻¹. With the summer conferences approaching, this certainly provides a good dataset for the LHC experiments to analyse and present.

Several records were broken again, namely the highest instantaneous luminosity – over 9 × 10³³ cm⁻² s⁻¹ on 14 June – and the largest integrated luminosity in one fill, around 550 pb⁻¹, delivered in 29 hours between 20 and 21 June. The LHC is now very close to the design luminosity value of 10³⁴ cm⁻² s⁻¹.

This luminosity production period was briefly interrupted for the commissioning of the high beta-star beam cycle. Contrary to what is done in the normal physics cycle, in the high beta-star cycle the beam size at the interaction points is increased before the beams are put into collision. For this high beta-star cycle, for example, the beta functions at interaction points 1 (IP1) and 5 (IP5) were increased to 2.5 km, while in the normal cycle they are squeezed to 40 cm. This results in beams with "large" transverse size - about 1 mm - and very small angular divergence - about 0.4 microradian - at the interaction point. This allows precise small-angle scattering studies by the forward physics experiments AFP, ALFA and TOTEM. By comparison, the transverse beam sizes in IP1 and IP5 during normal physics cycles are of the order of 13 micrometres and the divergence around 33 microradians. The commissioning of this high beta-star cycle was successful and was completed in two fills, spanning a period of around 18 hours. Furthermore, the optics parameters have been measured and corrected with a remaining error of only a few percent. Some validation steps are still required prior to the dedicated physics run scheduled for September.
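As a quick plausibility check (mine, not from the Bulletin), the quoted numbers follow from the standard optics relations: beam size sigma = sqrt(eps × beta) and angular divergence sigma' = sqrt(eps / beta). The geometric emittance eps used below is inferred from the normal-cycle figures (13 micrometres at beta* = 40 cm) and is an assumption, not a quoted value.

```python
import math

# Geometric emittance inferred from the normal physics cycle:
# sigma = 13 um at beta* = 0.40 m  =>  eps = sigma^2 / beta
eps = (13e-6) ** 2 / 0.40   # ~4.2e-10 m

def beam_size(beta_m):
    """Transverse beam size in metres for a beta function in metres."""
    return math.sqrt(eps * beta_m)

def divergence(beta_m):
    """Angular divergence in radians for a beta function in metres."""
    return math.sqrt(eps / beta_m)

# High beta-star cycle: beta* = 2.5 km at IP1/IP5
print(f"{beam_size(2500):.2e} m")     # about 1 mm, as quoted
print(f"{divergence(2500):.1e} rad")  # about 0.4 microradian, as quoted
```

Running the same functions at beta* = 0.40 m recovers the ~33 microradian divergence of the normal cycle, so the two sets of quoted numbers are mutually consistent.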

Unfortunately, the heavy rain of the last weeks has taken a toll not only on our spirits, but also on the LHC. On Tuesday, 14 June in the morning, dedicated sensors alerted the Technical Infrastructure (TI) operators to the presence of water in the LHC at point 3. Here, the LHC tunnel crosses an underground stream descending from the Jura, and in periods of heavy rain, water infiltration inside the LHC can occur. Interventions by several teams were necessary to repair the damage caused by the water, the most severe being water infiltration inside electrical and electronic equipment of the collimation system, grounding the LHC for almost 48 hours.

### Lubos Motl - string vacua and pheno

ATLAS: a 2.3-sigma stop excess
This will be an extremely short blog post because one month ago, I discussed the search for gluinos by ATLAS based on the very same final states with 1 lepton, jets, and MET. See the relevant May 2016 paper.

A more recent paper released by ATLAS wants to search for top squarks (stops) using the same final state:
Search for top squarks in final states with one isolated lepton, jets, and missing transverse momentum in $$\sqrt{s}=13\TeV$$ $$pp$$ collisions with the ATLAS detector
Unsurprisingly, the same 8 muon (plus 4 electron) events appeared in this search as well. They are just interpreted differently now.

In a highly inclusive SR1 (signal region one), the excess was quantified as 2.3 sigma. No discovery but a cute deviation not to completely forget about.

On the right side of the following chart, you may see that the expected exclusion curve (dashed) was actually more ambitious than the observed one.

With some optimism, you could suggest that the yellow exclusion region was repelled from the "truth" which may include a top squark of the mass between $$700\GeV$$ and $$800\GeV$$.

Julia Gonski, a Harvard PhD student (and a Rutgers alumnus with a thesis about SUSY at CMS!), wrote a short intriguing report about this ATLAS excess. The title, "Can't stop, won't stop: the continuing search for SUSY" is nice. Note that the word "stop" may refer to the top squark.

If the excess were due to new physics, the 3.2 inverse femtobarns used in the paper above have already grown to 10+ inverse femtobarns with the 7+ inverse femtobarns of 2016 data. By now, the 2.3 sigma excess could have grown to a $$2.3 \times \sqrt{\frac{10}{3.2}}=4.1$$ sigma excess. That would be terrifying and we would hear about it in August. Note that the gluino interpretation of the channel suggested a $$1250\GeV$$ gluino. Both a $$1250\GeV$$ gluino and a $$750\GeV$$ stop (probably no relationship to the diphoton bump is possible) would be a dream. ;-)
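The extrapolation above is just the statistics-dominated rule that a fixed physics excess grows in significance as the square root of the integrated luminosity; as a one-liner (mine, for illustration):

```python
import math

# Statistics-dominated scaling: significance grows as sqrt(L_new / L_old)
# for a fixed underlying excess as more luminosity is collected.
def scaled_significance(sigma_old, lumi_old, lumi_new):
    return sigma_old * math.sqrt(lumi_new / lumi_old)

# 2.3 sigma at 3.2/fb extrapolated to 10/fb
print(round(scaled_significance(2.3, 3.2, 10.0), 1))  # 4.1
```

This assumes the excess is real and that systematics stay subdominant; a statistical fluctuation would instead be expected to wash out.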

Related: The Lindau meeting of Nobel prize winners has begun. Check e.g. a Twitter tag. David Gross said some interesting things. Supersymmetry is more compelling than any other physics beyond the Standard Model. Gross has made some large bets on it, too. Also, the $$100\TeV$$ collider is the next level. He also said that rumors can't be stopped but physicists should behave correctly. And he pointed out that Chu was wrong to say that no physics beyond the Standard Model has been observed: neutrino masses have to come from BSM physics because the bare Majorana mass terms aren't gauge-invariant.

### CERN Bulletin

Statement about UK referendum on the EU

Dear Colleagues,

Many people have expressed their concerns about the consequences of the 23 June vote in the UK for CERN, and for the UK’s relationship with CERN. CERN is an intergovernmental organisation subject to its own treaty. We are not part of the European Union, and several of our Member States, including Switzerland, in which we are headquartered, are not EU Members. Britain’s membership of CERN is not affected by the UK electorate’s vote to leave the European Union. We look forward to continuing the very constructive relationship we have shared with the UK, one of our founding members, long into the future.

CERN was founded on the principle of international collaboration, and our success over the years is built on that. We will continue to work proactively to encourage ever-greater international collaboration in particle physics, and to help ensure that the UK continues to play a very active role.

UK nationals remain eligible for all categories of employment at CERN, and UK businesses are eligible to bid for all contracts at CERN. The referendum result does not change anything for CERN employees, but employees of UK companies working under contract to CERN may have more administrative procedures to follow than their EU counterparts before they can work at CERN.

CERN has its own agreements with its host states allowing CERN personnel to reside in either country. These agreements also allow spouses of CERN personnel to work in France and Switzerland under certain conditions. CERN personnel also have the right to retire in France and Switzerland under certain conditions. For non-EU countries, these conditions are more stringent than they are for EU countries.

CERN’s core research programme is funded by our Member States, but we also benefit from many EU grants in areas ranging from IT to accelerator and detector development. The referendum result does not affect CERN’s relationship with the EU, so we continue to be eligible to apply for Framework programme funding. CERN Member States that are not EU Members, and do not have special arrangements with the EU, can participate in CERN-EU projects, but cannot lead them or receive EU funding.

As long as CERN is in receipt of EU funding to support Marie Skłodowska-Curie Fellows, UK nationals will be able to apply for such Fellowships at CERN.

Kind regards,

Fabiola Gianotti

### CERN Bulletin

Ombud’s Corner: a world without lies?

Can a world without lies exist? Are there different types of lies, some more acceptable than others, or is that just an excuse that we use to justify ourselves? What consequences do lies have in the working environment?

If we look in the dictionary for the definition of “lie”, we find: “A lie is a false statement made with deliberate intent to deceive”. This simple definition turns out to be very useful when we feel stuck in intricate conflict situations where we suspect lies to have played a role. Examples may include supervisors presenting a situation in different ways to different colleagues; colleagues withholding information that could be useful to others; reports given in an inaccurate way; and rumours that spread but cannot be verified.

Peter was very keen to lead a particular project. He spoke to his supervisor, Philippe, who told him that he had in fact already proposed Peter to the board. When he did not get the job, Peter shared his disappointment with Charles, one of the board members, and was very surprised to learn that his name had never been put forward for consideration. Who should he believe?

Sometimes, a lie is implicit and only comes to the surface when the consequences of an action are revealed, leaving one with the realisation that they had been deliberately misled:

Carlo needs to appoint a project leader to replace a colleague who has recently retired. He sees this as an opportunity to reorganise his team and asks all the members to express their interest. Both François and Jane put themselves forward and he thanks them for their commitment. The same afternoon, the whole group gets an e-mail announcing that Mats, a young engineer from another department, has been appointed to the position. When questioned, Mats tells them that this had been agreed weeks ago.

An act of omission, where the full truth of a situation is not shared, can also be perceived as a lie with damaging consequences:

Marc tells Anna that they have been asked to publish a status report on their joint project. He writes the report and sends it to her on Thursday for comment. Anna realises that the work of her team has been misrepresented and spends her weekend revising it. However, when she sends it to Marc on Monday morning, he tells her that the deadline has passed and that the report has already been published.

Finally, there are those insidious lies, based on rumours and obscure origins that are self-perpetuating and have long-lasting effects:

Helene is disappointed not to have been proposed for a promotion and asks Susan, her Group Leader, for an explanation. Susan reminds her that it is a collegial decision and says, “I didn’t propose you because I knew that the others would block it” and adds, “they all have a negative impression of you”.

Whatever the lie, regardless of whether the statement is totally false or only partially so, what really matters is the impact on the people concerned and what action can be taken to address it.

Challenging a lie with a view to understanding its source can prove to be constructive in that it may give us an insight into how we can correct misconceptions and maybe even restore the relationship and start rebuilding trust.

If, however, as the dictionary says, the deliberate intent is to deceive, we face a challenging situation, as it is extremely difficult to set the story straight without good faith on both sides. We always have the option of discussing the issue with higher management or turning to mediation or more formal processes to re-establish the truth, but this may turn out to be a rough journey, as proving a statement to be false is no easy task.

A much easier route, and one that is well within the grasp of each of us, would be to make sure that we do not give in to the temptation of perpetuating a lie. One way of doing this would be to refrain from circulating false information or even just sharing rumours. Perhaps a world without lies is an impossible construct, but surely we can refuse to be part of the game and choose to follow the path of integrity instead?

All previous Ombud's Corners can be accessed in the Ombud's blog.

### Emily Lakdawalla - The Planetary Society Blog

Watch a test of the world's largest solid rocket booster tomorrow on NASA TV
Tomorrow morning at 10:05 a.m. EDT (14:05 UTC), NASA and Orbital ATK are test-firing the world's largest solid rocket booster in northern Utah. You can follow along live on NASA TV.

### astrobites - astro-ph reader's digest

New rings detected for old protoplanetary disk

Observing the unknown

A lot of research in physics involves looking for something specific. Researchers at CERN were looking for the Higgs boson, members of the LIGO team have been looking for signs of gravitational waves and astronomers involved in exoplanetary observations have been and will be successfully looking for – guess what – exoplanets. However, sometimes it happens that researchers are looking without knowing what they’re looking for. The observation of the protoplanetary disk around HL Tau (also known as HL Tauri) is a great example of such a case.

Figure 1: Image of HL Tau from 2014 press release showing ring structures in the protoplanetary disk around the star.

Rings occur in protoplanetary disks

It is now one and a half years since the observation of gaps and rings in the disk of HL Tau astonished the world. The image was taken with the Atacama Large Millimeter/submillimeter Array (ALMA) telescope in Chile. Astronomers certainly expected to see a protoplanetary disk around the host star, but nobody expected to see ring structures in the disk (Figure 1). Astronomers were and still are amazed by the image because the gaps and rings may be a sign of ongoing planet formation. Hence, astronomers are excited to see whether disks around other stars show gaps and rings too. Fortunately, thanks to the latest ALMA observations presented in today’s paper, we can now answer this question with ‘Yes, but…’.

Figure 2: Image of the disk around TW Hya at 870 micrometer wavelength. The small figure in the upper right shows a zoom-in on the innermost part of the disk. The white circles represent the beam size. [Figure 1 in the article.]

Rings occur at different stages of disk evolution

The authors observed the disk of TW Hya (also known as TW Hydrae) at sub-mm wavelength (870 micrometers), which allows the emission from the dust to be seen. Light emission from the dust occurs when a photon collides with a dust grain. The dust grain absorbs the photon, but later emits a new photon of longer wavelength. The disk around TW Hya is the closest protoplanetary disk to the solar system that we have observed, which allows us to view it in record-breaking spatial resolution (as small as 1 AU, the distance between the Earth and the Sun). Similar to HL Tau, the disk shows clear signs of rings, revealing that dust is trapped in concentric annuli inside the disk (Figure 2; for a video of the observation see here). What is particularly interesting about the rings in this disk is the fact that we can see a ring at very small radius, only 1 AU from the host star. Despite the fact that both disks show clear evidence for rings, the structures themselves are very different from each other. But such distinctions are surprising only at first sight. We believe that TW Hya is about ten times older than HL Tau, so the differences in their disks are to be expected (TW Hya is around 10 million years old, compared to less than 1 million years for HL Tau).

The observed rings in TW Hya are much weaker than they are for HL Tau, we can only see them clearly due to TW Hya’s proximity. To emphasize this, the authors point out that if TW Hya was located as far away as HL Tau (~140 parsecs instead of ~54 parsecs), the observations would only show weak signs of rings at a radius of 22 AU. Additionally, the authors show a plot of the observed brightness temperature as a function of radius (Figure 3). The red dashed line shows the expected temperature at the mid-plane of the disk. If no dust was present, we would measure this temperature profile. However, dust in the disk absorbs these photons and re-emits photons of lower temperature. You can see a change in the slope at 20 AU, which indicates that there is less dust beyond 20 AU. The authors suggest that inside of 20 AU – except for the ring at 1 AU – on average a photon is absorbed by the dust grains in the disk (in physical jargon: the inner disk is optically thick).

Figure 3: Average radial surface brightness around the star (black line). The red dashed curve shows the mid-plane temperature, which the authors calculate from an underlying model. [Bottom panel of Figure 2 in the article.]

Finally, the authors discuss several previous models and ideas that may explain the cause of the observed ring-structures. However, the reason for the gaps still remains an open question, and explaining them will remain a hot topic among protoplanetary disk theorists.

## June 26, 2016

### CERN Bulletin

News from Council

With this message I would like to share with you some highlights of this week’s Council meetings.

A major topic was the approval of CERN’s Medium Term Plan (MTP) 2017-2021, along with the budget for 2017. In approving the document, Council expressed its very strong support for the research programme the MTP outlines for the coming years.

Another important topic this week was the formal approval of the High Luminosity LHC project, HL-LHC. This comes as extremely good news not only for CERN, but also for particle physics globally. HL-LHC is the top priority of the European Strategy for Particle Physics in its 2013 update, and is part of the 2016 roadmap of the European Strategy Forum on Research Infrastructures, ESFRI. It was also identified as a priority in the US P5 strategy process, and in Japan’s strategic vision for the field. It secures CERN’s future until 2035, and ensures that we will achieve the maximum scientific return on the investment in the LHC. The Finance Committee passed some HL-LHC contract adjudications this week, allowing construction work to begin without delay.

Council also noted that the measures put in place in 2010 to rebalance the Pension Fund are having the desired effect, with a funding ratio that has improved over the years.

Enlargement was also high on the agenda, with confirmation that Romania is depositing its documents of accession to Membership with UNESCO, and progress being made with many other countries. Following a report from the fact-finding task force to India, Council gave the go-ahead to submit the model agreement for Associate Membership to the Indian government. A task force will be going to Lithuania next week.

Finally, Council and its committees applauded the superb performance of the LHC and the excellent accomplishments of the CERN scientific programme in general.

The Directors and I would like to emphasize that these results were only possible thanks to the dedication and competence of the CERN personnel.

In conclusion, this has been a constructive week with many positives to take forward.

Fabiola Gianotti

## June 25, 2016

### Tommaso Dorigo - Scientificblogging

The top quark is the heaviest known subatomic particle we may call "elementary", i.e. one we describe as a point-like object; it weighs almost 40% more than the Higgs boson itself! The top was discovered in 1995 by the CDF and DZERO collaborations at the Fermilab Tevatron collider, which produced collisions between protons and antiprotons at an energy 7 times smaller than that of the proton-proton collisions now provided by the Large Hadron Collider at CERN.

### John Baez - Azimuth

Exponential Zero

guest post by David A. Tanzer

Here is a mathematical riddle.  Consider the function below, which is undefined for negative values, sends zero to one, and sends positive values to zero.   Can you come up with a nice compact formula for this function, which uses only the basic arithmetic operations, such as addition, division and powers?  You can’t use any special functions, including things like sign and step functions, which are by definition discontinuous.

In college, I ran around showing people the graph, asking them to guess the formula.  I even tried it out on some professors at U. Penn.  My algebra prof, who was kind of intimidating, looked at it, got puzzled, and then got irritated. When I showed him the answer, he barked out: Is this exam over??!  Then I tried it out during office hours on E. Calabi, who was teaching undergraduate differential geometry.  With a twinkle in his eye, he said, why that’s zero to the x!

The graph of $0^x$ is not without controversy.   It is reasonable that for positive x, we have that $0^x$ is zero.  Then $0^{-x} = 1/0^x = 1/0$, so the function is undefined for negative values.  But what about $0^0$?  This question is bound to come up in the course of one’s general mathematical education, and has been the source of long, ruminative arguments.

There are three contenders for $0^0$:  undefined, 0, and 1.  Let’s try to define it in a way that is most consistent with the general laws of exponents — in particular, that for all a, x and y, $a^{x+y} = a^x a^y$, and $a^{-x} = 1/a^x$. Let’s stick to these rules, even when a, x and y are all zero.

Then $0^0$ equals its own square, because $0^0 = 0^{0+0} = 0^0 \, 0^0$. And it equals its reciprocal, because $0^0 = 0^{-0} = 1/0^0$. By these criteria, $0^0$ equals 1.

That is the justification for the above graph — and for the striking discontinuity that it contains.

Here is an intuition for the discontinuity. Consider the family of exponential curves $b^x$, with b as the parameter.  When b = 1, you get the constant function 1.  When b is more than 1, you get an increasing exponential, and when it is between 0 and 1, you get a decreasing exponential.  The intersection of all of these graphs is the “pivot” point x = 0, y = 1.  That is the “dot” of discontinuity.

What happens to $b^x$, as b decreases to zero?  To the right of the origin, the curve progressively flattens down to zero.  To the left it rises up towards infinity more and more steeply.  But it always crosses through the point x = 0, y = 1, which remains in the limiting curve.  In heuristic terms, the value y = 1 is the discontinuous transit from infinitesimal values to infinite values.
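The pivot behavior can be checked numerically; a minimal sketch (the specific bases are arbitrary choices):

```python
# Every curve b**x passes through (0, 1), while for shrinking b the
# curve collapses toward 0 on the right and blows up on the left.
for b in [0.5, 0.1, 0.001]:
    at_zero = b ** 0    # always exactly 1.0 -- the pivot point
    right = b ** 2      # shrinks toward 0 as b -> 0
    left = b ** -2      # grows without bound as b -> 0
    print(b, at_zero, right, left)

# Python itself takes a side in the riddle:
print(0 ** 0)           # 1
```

So the language's arithmetic agrees with the exponent-law argument above, while the limits from the left and right keep the discontinuity visible.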

There are reasons, however, why $0^0$ could be treated as indeterminate, and left undefined.  These were indicated by the good professor.

Dr. Calabi had a truly inspiring teaching style, back in the day. He spoke of Italian paintings, and showed a kind of geometric laser vision.  In the classroom, he showed us the idea of torsion using his arms to fly around the room like an airplane.  There’s even a manifold named after him, the Calabi-Yau manifold.

He went on to talk about the underpinnings of this quirky function.  First he drew attention to the function $f(x,y) = x^y$, over the complex domain, and attempted to sketch its level sets.  He focused on the behavior of the function when x and y are close to zero.   Then he stated that every one of the level sets $L(z) = \{(x,y) \mid x^y = z\}$ comes arbitrarily close to (0,0).

This means that $x^y$ has a wild singularity at the origin: every complex number z is the limit of $x^y$ along some path to zero.  Indeed, to reach z, just take a path in L(z) that approaches (0,0).

To see why the level sets all approach the origin, take logs, to get $\ln(x^y) = y \ln(x) = \ln(z)$.  That gives $y = \ln(z) / \ln(x)$, which is a parametric formula for L(z).  As x goes to zero, $\ln(x)$ goes to negative infinity, so y goes to zero.  These are paths $(x, \ln(z)/\ln(x))$, lying completely within L(z), which approach the origin.
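This path can be checked numerically using the principal branch of the complex logarithm (a sketch; the target value z is an arbitrary choice):

```python
import cmath

z = 2 + 1j                            # any nonzero target value
for x in [1e-2, 1e-6, 1e-12]:
    y = cmath.log(z) / cmath.log(x)   # the parametric formula for L(z)
    # (x, y) really lies on the level set: x**y recovers z ...
    assert abs(x ** y - z) < 1e-9
    # ... while |y| shrinks toward 0 as x -> 0
    print(x, abs(y))
```

As x marches toward zero, the points stay on L(z) while y collapses to the origin, exactly as the log computation predicts.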

In making these statements, we need to keep in mind that $x^y$ is multi-valued.  That’s because $x^y = e^{y \ln(x)}$, and $\ln(x)$ is multi-valued. That is because $\ln(x)$ is the inverse of the complex exponential, which is many-to-one: adding any integer multiple of $2 \pi i$ to z leaves $e^z$ unchanged.  And that follows from the definition of the exponential, which sends $a + bi$ to the complex number with magnitude $e^a$ and phase b.

Footnote:  to visualize these operations, represent the complex numbers by the real plane.  Addition is given by vector addition.  Multiplication gives the vector with magnitude equal to the product of the magnitudes, and phase equal to the sum of the phases.   The positive real numbers have phase zero, and the positive imaginary numbers are at 90 degrees vertical, with phase $\pi / 2$.

For a specific (x,y), how many values does $x^y$ have?  Well, $\ln(x)$ has a countable number of values, all differing by integer multiples of $2 \pi i$.  This generally induces a countable number of values for $x^y$.  But if y is rational, they collapse down to a finite set.  When y = 1/n, for example, the values of $y \ln(x)$ are spaced apart by $2 \pi i / n$, and when these get pumped back through the exponential function, we find only n distinct values for $x^{1/n}$ — they are the nth roots of x.
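The collapse to n values can be made concrete: feeding the branches $\ln(x) + 2\pi i k$ through the exponential cycles with period n. A sketch:

```python
import cmath

def power_values(x, y, k_range):
    """Values of x**y from the log branches ln(x) + 2*pi*i*k."""
    return [cmath.exp(y * (cmath.log(x) + 2j * cmath.pi * k))
            for k in k_range]

# y = 1/3: infinitely many branches, but only 3 distinct values --
# the cube roots of x.
vals = power_values(8, 1 / 3, range(6))
distinct = []
for v in vals:
    if all(abs(v - w) > 1e-9 for w in distinct):
        distinct.append(v)

print(len(distinct))          # 3
for r in distinct:
    assert abs(r ** 3 - 8) < 1e-9
```

Six branches of the logarithm, but the exponential identifies them in groups of three, leaving exactly the three cube roots of 8.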

So, to speak of the limit of $x^y$ along a path, and of the partition of $\mathbb{C}^2$ into level sets, we need to work within a branch of $x^y$.   Each branch induces a different partition of $\mathbb{C}^2$.  But for every one of these partitions, it holds true that all of the level sets approach the origin.  That follows from the formula for the level set L(z), which is $y = \ln(z) / \ln(x)$.  As x goes to zero, every branch of $\ln(x)$ goes to negative infinity.  (Exercise:  why?)  So y also goes to zero.  The branch affects the shape of the paths to the origin, but not their existence.

Here is a qualitative description of how the level sets fit together:  they are like spokes around the origin, where each spoke is a curve in one complex dimension.  These curves are 1-D complex manifolds, which are equivalent to two-dimensional surfaces in $\mathbb{R}^4$.  The partition comprises a two-parameter family of these surfaces, indexed by the complex value of $x^y$.

What can be said about the geometry and topology of this “wheel of manifolds”?  We know they don’t intersect.  But are they “nicely” layered, or twisted and entangled?  As we zoom in on the origin, does the picture look smooth, or does it have a chaotic appearance, with infinite fine detail?  Suggestive of chaos is the fact that the gradient

$\nabla x^y = (y x^{y-1}, \ln(x)\, x^y) = (y/x, \ln(x))\, x^y$

is also “wildly singular” at the origin.
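The two partial derivatives (differentiating $e^{y \ln x}$ gives $y x^{y-1}$ in x and $\ln(x)\, x^y$ in y) can be sanity-checked by finite differences away from the origin; a quick sketch with arbitrarily chosen sample points:

```python
import cmath

def f(x, y):
    # principal branch of x**y
    return cmath.exp(y * cmath.log(x))

x, y, h = 0.3 + 0.2j, 1.5 - 0.4j, 1e-6   # arbitrary test point, small step

# analytic partial derivatives
df_dx = y * f(x, y - 1)            # y * x**(y-1)
df_dy = cmath.log(x) * f(x, y)     # ln(x) * x**y

# central finite-difference estimates
num_dx = (f(x + h, y) - f(x - h, y)) / (2 * h)
num_dy = (f(x, y + h) - f(x, y - h)) / (2 * h)

assert abs(df_dx - num_dx) < 1e-6
assert abs(df_dy - num_dy) < 1e-6
```

Away from the origin everything is tame; it is only the $y/x$ and $\ln(x)$ factors blowing up as $(x,y) \to (0,0)$ that produce the wild behavior.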

These questions can be explored with plotting software.  Here, the artist would have the challenge of having only two dimensions to work with, when the “wheel” is really a structure in four-dimensional space.  So some interesting cross-sections would have to be chosen.

Exercises:

• Speak about the function bx, where b is negative, and x is real.
• What is $0^\pi$, and why?
• What is $0^i$?

Moral: something that seems odd, or like a joke that might annoy your algebra prof, could be more significant than you think.  So tell these riddles to your professors, while they are still around.

## June 24, 2016

### Andrew Jaffe - Leaves on the Line

The Sick Rose

O Rose thou art sick.
The invisible worm,
That flies in the night
In the howling storm:

Has found out thy bed
Of crimson joy:
And his dark secret love
Does thy life destroy.

—William Blake, Songs of Experience

### Clifford V. Johnson - Asymptotia

Historic Hysteria

So, *that* happened... (Click for larger view.)

The post Historic Hysteria appeared first on Asymptotia.

### Clifford V. Johnson - Asymptotia

Concern…

Anyone else finding this terrifying? A snapshot (click for larger view) from the Guardian's live results tracker* as of 19:45 PST - see here.

-cvj

*BTW, I've been using their trackers a lot during the presidential primaries; they're very good. Click to continue reading this post

The post Concern… appeared first on Asymptotia.

## June 23, 2016

### Symmetrybreaking - Fermilab/SLAC

The Higgs-shaped elephant in the room

Higgs bosons should mass-produce bottom quarks. So why is it so hard to see it happening?

Higgs bosons are born in a blob of pure concentrated energy and live only one-septillionth of a second before decaying into a cascade of other particles. In 2012, these subatomic offspring were the key to the discovery of the Higgs boson.

So-called daughter particles stick around long enough to show up in the CMS and ATLAS detectors at the Large Hadron Collider. Scientists can follow their tracks and trace the family trees back to the Higgs boson they came from.

But the particles that led to the Higgs discovery were actually some of the boson’s less common progeny. After recording several million collisions, scientists identified a handful of Z bosons and photons with a Higgs-like origin. The Standard Model of particle physics predicts that Higgs bosons produce those particles 2.5 and 0.2 percent of the time. Physicists later identified Higgs bosons decaying into W bosons, which happens about 21 percent of the time.

According to the Standard Model, the most common decay of the Higgs boson should be a transformation into a pair of bottom quarks. This should happen about 60 percent of the time.
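The branching fractions quoted in this article can be collected in one place; a small sketch ranking the channels (the numbers are the approximate Standard Model values given above, not a precise reference):

```python
# Approximate SM Higgs branching ratios as quoted in the article
branching = {
    "bottom quarks": 0.60,
    "W bosons":      0.21,
    "Z bosons":      0.025,
    "photons":       0.002,
}

# bb-bar dominates by far -- yet it is the channel still awaiting discovery
ranked = sorted(branching.items(), key=lambda kv: -kv[1])
for channel, fraction in ranked:
    print(f"{channel:>14}: {fraction:6.1%}")
```

The irony the article points out is visible at a glance: the discovery channels (photons, Z bosons) sit at the bottom of this table.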

The strange thing is, scientists have yet to discover it happening (though they have seen evidence).

According to Harvard researcher John Huth, a member of the ATLAS experiment, seeing the Higgs turning into bottom quarks is priority No. 1 for Higgs boson research.

“It would behoove us to find the Higgs decaying to bottom quarks because this is the largest interaction,” Huth says, “and it darn well better be there.”

If the Higgs to bottom quarks decay were not there, scientists would be left completely dumbfounded.

“I would be shocked if this particle does not couple to bottom quarks,” says Jim Olsen, a Princeton researcher and Physics Coordinator for the CMS experiment. “The absence of this decay would have a very large and direct impact on the relative decay rates of the Higgs boson to all of the other known particles, and the recent ATLAS and CMS combined measurements are in excellent agreement with expectations.”

To be fair, the decay of a Higgs to two bottom quarks is difficult to spot.

When a dying Higgs boson produces twin Z or W bosons, they each decay into a pair of muons or electrons. These particles leave crystal clear signals in the detectors, making it easy for scientists to spot them and track their lineage. And because photons are essentially immortal beams of light, scientists can immediately spot them and record their trajectory and energy with electromagnetic detectors.

But when a Higgs births a pair of bottom quarks, they impulsively marry other quarks, generating huge unstable families which burgeon, break and reform. This chaotic cascade leaves a messy ancestry.

Scientists are developing special tools to disentangle the Higgs from this multi-generational subatomic soap opera. Unfortunately, there are no cheek swabs or Maury Povich to announce, Higgs, you are the father! Instead, scientists are working on algorithms that look for patterns in the energy these jets of particles deposit in the detectors.

“The decay of Higgs bosons to bottom quarks should have different kinematics from the more common processes and leave unique signatures in our detector,” Huth says. “But we need to deeply understand all the variables involved if we want to squeeze the small number of Higgs events from everything else.”

Physicist Usha Mallik and her ATLAS team of researchers at the University of Iowa have been mapping the complex bottom quark genealogies since shortly after the Higgs discovery in 2012.

“Bottom quarks produce jets of particles with all kinds and colors and flavors,” Mallik says. “There are fat jets, narrow jets, distinct jets and overlapping jets. Just to find the original bottom quarks, we need to look at all of the jet’s characteristics. This is a complex problem with a lot of people working on it.”

This year the LHC will produce five times more data than it did last year and will generate Higgs bosons 25 percent faster. Scientists expect that by August they will be able to identify this prominent decay of the Higgs and find out what it can tell them about the properties of this unique particle.

### Lubos Motl - string vacua and pheno

Negative rumors haven't passed the TRF threshold
...yet?...

Several blogs and Twitter accounts have worked hard to distribute the opinion that the 2015 excess of diphoton events resembling a new 750 GeV particle at the LHC wasn't repeated in the 2016 data. No details are given but it is implicitly assumed that this result was shared with the members of ATLAS at a meeting on June 16 at 1pm and those of CMS on June 20 at 5pm.

In the past five years, my sources have informed me about all similar news rather quickly, and all such "rumors about imminent announcements" you could have read here were always accurate. And I became confident whenever I had at least two sources that looked "probably more than 50% independent of one another".

Well, let me say that the number of such sources that are telling me about the disappearance of the cernette is zero as of today. It doesn't mean that those negative reports must be unsubstantiated or even that the particle exists – it is totally plausible that it doesn't exist – but there is a reason to think that the reports are unsubstantiated. The channels that I am seeing seem untrustworthy from my viewpoint.

One can have some doubts about the "very existence of the new results" in the LHC collaborations. The processing needed to get the answers could be fast (especially because the collisions in 2016 take place at the same energy as a year earlier, so the old methods just work) – but it was often slow, too. Only in the last week or two has the amount of 2016 collision data exceeded that of 2015.

It seems reasonable to me that the experimenters were waiting for at least as large an amount of collisions as the 2015 dataset, and the processing of the information didn't really start until recently. Freya Blekman of CMS (plus Caterina Doglioni of ATLAS) argues that it takes much more than two weeks to perform the difficult analyses and calibrate the data. She claims that the rumors are not spread by those who are involved in this process.

Andre David of CMS seriously or jokingly indicates that he likes how the false rumors have been successfully propagated. I actually find this "deliberate fog" a plausible scenario, too. ATLAS and CMS could have become more experienced and self-controlling when it comes to this rumor incontinence.

ATLAS and CMS could have launched a social experiment with false rumors according to the template pioneered by Dr Sheldon Cooper and Dr Amy Farrah-Fowler

So you know, before these reports, I estimated the probability of new physics near the invariant mass of 750 GeV to be 50%. It may have decreased to something like 45% for me now. So far, the reasons for substantial changes haven't arrived. That may change within minutes after I post this blog post. Alternatively, it may refuse to change until mid-July. Blekman wants people to be even more patient: the results will be known at ICHEP 2016, between August 3rd and 10th. That conference should incorporate the data up to mid-July.

Update Thursday: One of the two main traditional gossip channels of mine has confirmed the negative rumors so my subjective probability that the rumor comes from well-informed sources has increased to 75%.

### Clifford V. Johnson - Asymptotia

QED and so forth…

(Spoiler!! :) )

Talking about gauge invariance took a couple more pages than I planned...

The post QED and so forth… appeared first on Asymptotia.

## June 22, 2016

### Jester - Resonaances

Game of Thrones: 750 GeV edition
The 750 GeV diphoton resonance has made a big impact on theoretical particle physics. The number of papers on the topic is already legendary, and they keep coming at a rate of order 10 per week. Given that the Backović model is falsified, there's no longer a theoretical upper limit.  Does this mean we are not dealing with the classical ambulance-chasing scenario? The answer may be known in the coming days.

So who's winning this race?  What kind of question is that, you may shout, of course it's Strumia! And you would be wrong, independently of the metric. The contest is much fiercer than one might expect:  it takes 8 papers on the topic to win, and 7 papers even to get on the podium.  Among the 3 authors with 7 papers, the final classification is decided by ~~trial by combat~~ the citation count.  The result is (drums):

Citations, tja...   The social dynamics of our community encourage referencing all previous work on the topic, rather than just the relevant ones, which in this particular case triggered a period of inflation. One day soon citation numbers will mean as much as authorship in experimental particle physics. But for now the size of the h-factor is still an important measure of virility for theorists. If the citation count rather than the number of papers is the main criterion, the iron throne is taken by a Targaryen contender (trumpets):

This explains why the resonance is usually denoted by the letter S.

Congratulations to all the winners.  For all the rest, wish you more luck and persistence in the next edition,  provided it will take place.

My scripts are not perfect (in previous versions I missed crucial contenders, as pointed out in the comments), so let me know in case I missed your papers or miscalculated citations.

### Jester - Resonaances

Off we go
The LHC is back in action since last weekend, again colliding protons with 13 TeV energy. The weasels' conspiracy was foiled, and the perpetrators were exemplarily electrocuted. PhD students have been deployed around the LHC perimeter to counter any further sabotage attempts (stoats are known to have been in league with weasels in the past). The period that begins now may prove to be the most exciting time for particle physics in this century.  Or the most disappointing.

The beam intensity is still a factor of 10 below the nominal one, so the harvest of last weekend is a meager 40 inverse picobarns. But the number of proton bunches in the beam is quickly increasing, and once it reaches O(2000), the data will stream in at a rate of a femtobarn per week or more. For the near future, the plan is to have a few inverse femtobarns on tape by mid-July, which would roughly double the current 13 TeV dataset. The first analyses of this chunk of data should be presented around the time of the ICHEP conference in early August. At that point we will know whether the 750 GeV particle is real. Celebrations will begin if the significance of the diphoton peak increases after adding the new data, even if the statistics are not enough to officially announce a discovery. In the best of all worlds, we may also get a hint of a matching 750 GeV peak in another decay channel (ZZ, Z-photon, dilepton, t-tbar, ...), which would help focus our model building. On the other hand, if the significance of the diphoton peak drops in August, there will be a massive hangover...

By the end of October, when the 2016 proton collisions are scheduled to end, the LHC hopes to collect some 20 inverse femtobarns of data. This should already give us a rough feeling for the new physics within reach of the LHC. If a hint of another resonance is seen at that point, one will surely be able to confirm or refute it with the data collected in the following years. If nothing is seen... then you should start telling yourself that condensed matter physics is also sort of fundamental, or that systematic uncertainties in astrophysics are not so bad after all... In any scenario, by December, when the first analyses of the full 2016 dataset are released, we will know infinitely more than we do today.
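The bookkeeping behind these luminosity figures is simple unit arithmetic: the expected number of events is the cross section times the integrated luminosity. A sketch using the figures quoted above (the signal cross section below is a purely hypothetical, illustrative value, not a prediction):

```python
# N = sigma * L_int, with 1 fb^-1 = 1000 pb^-1
def expected_events(sigma_fb, lumi_fb_inv):
    return sigma_fb * lumi_fb_inv

first_weekend = 40 / 1000    # the 40 pb^-1 harvest, in fb^-1
by_october = 20.0            # the ~20 fb^-1 hoped for by end of 2016

sigma = 5.0                  # hypothetical 5 fb signal (illustrative only)
print(expected_events(sigma, first_weekend))   # 0.2 events
print(expected_events(sigma, by_october))      # 100.0 events
```

The same arithmetic explains the mood of the post: a weekend of data is invisible for a rare process, while the full 2016 run would make or break it.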

So fasten your seat belts and get ready for a (hopefully) bumpy ride. Serious rumors should start showing up on blogs and twitter starting from July.

### Jester - Resonaances

Weekend Plot: The king is dead (long live the king)
The new diphoton king has been discussed at length in the blogosphere, but the late diboson king also deserves a word or two. Recall that last summer ATLAS announced a 3 sigma excess in the dijet invariant mass distribution where each jet resembles a fast-moving W or Z boson decaying to a pair of quarks. This excess can be interpreted as a 2 TeV resonance decaying to a pair of W or Z bosons. For example, it could be a heavy cousin of the W boson, W' in short, decaying to a W and a Z boson. Merely a month ago this paper argued that the excess remains statistically significant after combining several different CMS and ATLAS diboson resonance run-1 analyses in hadronic and leptonic channels of W and Z decay. However, the hammer came down seconds before the diphoton excess was announced: diboson resonance searches based on the LHC 13 TeV collision data do not show anything interesting around 2 TeV. This is a serious problem for any new physics interpretation of the excess since, for this mass scale, the statistical power of the run-2 and run-1 data is comparable. The tension is summarized in this plot:
The green bars show the 1 and 2 sigma best fit cross section to the diboson excess. The one on the left takes into account only the hadronic channel in ATLAS, where the excess is most significant; the one on the right is based on the combined run-1 data. The red lines are the limits from run-2 searches in ATLAS and CMS, scaled to 8 TeV cross sections assuming W' is produced in quark-antiquark collisions. Clearly, the best fit region for the 8 TeV data is excluded by the new 13 TeV data. I display results for the W' hypothesis; however, conclusions are similar (or more pessimistic) for other hypotheses leading to WW and/or ZZ final states. All in all, the ATLAS diboson excess is not formally buried yet, but at this point any reversal of fortune would be a miracle.

### Jon Butterworth - Life and Physics

Don’t let’s quit

This doesn’t belong on the Guardian Science pages, because even though universities and science will suffer if Britain leaves the EU, that’s not my main reason for voting ‘remain’. But lots of friends have been writing or talking about their choice, and the difficulties of making it, and I feel the need to write my own reasons down even if everyone is saturated by now. It’s nearly over, after all.

Even though the EU is obviously imperfect, a pragmatic compromise, I will vote to stay in with hope and enthusiasm. In fact, I’ll do so partly because it’s an imperfect, pragmatic compromise.

I realise there are a number of possible reasons for voting to leave the EU, some better than others, but please don’t.

### Democracy

Maybe you’re bothered because EU democracy isn’t perfect. Also we can get outvoted on some things (these are two different points. Being outvoted sometimes is actually democratic. Some limitations on EU democracy are there to stop countries being outvoted by other countries too often). But it sort of works and it can be improved, especially if we took EU elections more seriously after all this. And we’re still ‘sovereign’, simply because we can vote to leave if we get outvoted on something important enough.

### Misplaced nostalgia and worse

Maybe you don’t like foreigners, or you want to ‘Take Britain back’  (presumably to some fantasy dreamworld circa 1958). Unlucky; the world has moved on and will continue to do so whatever the result this week. I don’t have a lot of sympathy, frankly, and I don’t think this applies to (m)any of my ‘leave’ friends.

### Lies

Maybe you believed the lies about the £350m we don’t send, which wouldn’t save the NHS anyway even if we did, or the idea that new countries are lining up to join and we couldn’t stop them if we wanted. If so please look at e.g. https://fullfact.org/europe/ for help. Some people I love and respect have believed some of these lies, and that has made me cross. These aren’t matters of opinion, and the fact that the ‘leave’ campaign repeats them over and over shows both their contempt for the intelligence of voters and the weakness of their case. If you still want to leave, knowing the facts, then fair enough. But don’t do it on a lie.

### We need change

Maybe you have a strong desire for change, because bits of British life are rubbish and unfair. In this case, the chances are your desire for change is directed at entirely the wrong target. The EU is not that powerful in terms of its direct effects on everyday life. The main thing it does is provide a mechanism for resolving common issues between EU member states. It is  a vast improvement on the violent means used in previous centuries. It spreads rights and standards to the citizens and industries of members states, making trade and travel safer and easier. And it amplifies our collective voice in global politics.

People who blame the EU for the injustices of British life are being made fools of by unscrupulous politicians, media moguls and others who have for years been content to see the destruction of British industry, the undermining of workers’ rights, the underfunding of the NHS and education, relentless attacks on national institutions such as the BBC, neglect of whole regions of the country and more.

These are the people now telling us to cut off our nose to spite our face, and they are exploiting the discontent they have fostered to persuade us this would be a good idea, by blaming the EU for choices made by UK governments.

They are quite happy for industry to move to lower-wage economies in the developing world when it suits them, but they don’t want us agreeing common standards, protections and practices with our EU partners. They don’t like nation states clubbing together, because that can make trouble for multinationals, and (in principle at least) threatens their ability to cash in on exploitative labour practices and tax havens. They would much rather play nation off against nation.

### If…

If we vote to leave, the next few years will be dominated by attempts to negotiate something from the wreckage, against the background of a shrinking economy and a dysfunctional political class.  This will do nothing to fix inequality and the social problems we face (and I find it utterly implausible that people like Bojo, IDS or Farage would even want that). Those issues will be neglected or worse. Possibly this distraction, which is already present, is one reason some in the Conservative Party have involved us all in their internal power struggles.

If we vote remain, I hope the desire for change is preserved beyond Thursday, and is focussed not on irresponsible ‘blame the foreigner’ games, but on real politics, of hope and pragmatism, where it can make a positive difference.

I know there’s no physics here. This is the ‘life’ bit, and apart from the facts, it’s just my opinion. Before writing it I said this on twitter:

and it’s probably still true that it’s better than the above. Certainly it’s shorter. But I had to try my own words.

I’m not going to enable comments here since they can be added on twitter and facebook if you feel the urge, and I can’t keep up with too many threads.

Filed under: Politics

## June 21, 2016

### Symmetrybreaking - Fermilab/SLAC

All four one and one for all

A theory of everything would unite the four forces of nature, but is such a thing possible?

Over the centuries, physicists have made giant strides in understanding and predicting the physical world by connecting phenomena that look very different on the surface.

One of the great success stories in physics is the unification of electricity and magnetism into the electromagnetic force in the 19th century. Experiments showed that electrical currents could deflect magnetic compass needles and that moving magnets could produce currents.

Then physicists linked another force, the weak force, with that electromagnetic force, forming a theory of electroweak interactions. Some physicists think the logical next step is merging all four fundamental forces—gravity, electromagnetism, the weak force and the strong force—into a single mathematical framework: a theory of everything.

Those four fundamental forces of nature are radically different in strength and behavior. And while reality has cooperated with the human habit of finding patterns so far, creating a theory of everything is perhaps the most difficult endeavor in physics.

“On some level we don't necessarily have to expect that [a theory of everything] exists,” says Cynthia Keeler, a string theorist at the Niels Bohr Institute in Denmark. “I have a little optimism about it because historically, we have been able to make various unifications. None of those had to be true.”

Despite the difficulty, the potential rewards of unification are great enough to keep physicists searching. Along the way, they’ve discovered new things they wouldn’t have learned had it not been for the quest to find a theory of everything.

Illustration by Sandbox Studio, Chicago with Corinne Mucha

### United we hope to stand

No one has yet crafted a complete theory of everything.

It’s hard to unify all of the forces when you can’t even get all of them to work at the same scale. Gravity in particular tends to be a tricky force, and no one has come up with a way of describing the force at the smallest (quantum) level.

Physicists such as Albert Einstein thought seriously about whether gravity could be unified with the electromagnetic force. After all, general relativity had shown that electric and magnetic fields produce gravity and that gravity should also affect electromagnetic waves, or light. But combining gravity and electromagnetism, a mission called unified field theory, turned out to be far more complicated than making the electromagnetic theory work. This was partly because there was (and is) no good theory of quantum gravity, but also because physicists needed to incorporate the strong and weak forces.

A different idea, quantum field theory, combines Einstein’s special theory of relativity with quantum mechanics to explain the behavior of particles, but it fails horribly for gravity. That’s largely because anything with energy (or mass, thanks to relativity) creates a gravitational attraction—including gravity itself. To oversimplify somewhat, the gravitational interaction between two particles has a certain amount of energy, which produces an additional gravitational interaction with its own energy, and so on, spiraling to higher energies with each extra piece.

“One of the first things you learn about quantum gravity is that quantum field theory probably isn’t the answer,” says Robert McNees, a physicist at Loyola University Chicago. “Quantum gravity is hard because we have to come up with something new.”

Illustration by Sandbox Studio, Chicago with Corinne Mucha

### An evolution of theories

The best-known candidate for a theory of everything is string theory, in which the fundamental objects are not particles but strings that stretch out in one dimension.

Strings were proposed in the 1970s to try to explain the strong force. This first string theory proved to be unnecessary, but physicists realized it could be joined to another theory, called Kaluza-Klein theory, as a possible explanation of quantum gravity.

String theory expresses quantum gravity in two dimensions rather than the four, bypassing all the problems of the quantum field theory approach but introducing other complications, namely six extra dimensions that must be curled up on a scale too small to detect.

Unfortunately, string theory has yet to reproduce the well-tested predictions of the Standard Model.

Another well-known idea is the sci-fi-sounding “loop quantum gravity,” in which space-time on the smallest scales is made of tiny loops in a flexible mesh that produces gravity as we know it.

The idea that space-time is made up of smaller objects, just as matter is made of particles, is not unique to loop quantum gravity. There are many other proposals with equally Jabberwockian names: twistors, causal set theory, quantum graphity and so on. Granular space-time might even explain why our universe has four dimensions rather than some other number.

Loop quantum gravity’s trouble is that it can’t replicate gravity at large scales, such as the size of the solar system, as described by general relativity.

None of these theories has yet succeeded in producing a theory of everything, in part because it's so hard to test them.

“Quantum gravity is expected to kick in only at energies higher than anything that we can currently produce in a lab,” says Lisa Glaser, who works on causal set quantum gravity at the University of Nottingham. “The hope in many theories is now to predict cumulative effects,” such as unexpected black hole behavior during collisions like the ones detected recently by LIGO.

Today, many of the theories first proposed as theories of everything have moved beyond unifying the forces. For example, much of the current research in string theory is potentially important for understanding the hot soup of particles known as the quark-gluon plasma, along with the complex behavior of electrons in very cold materials like superconductors—something seemingly as far removed from quantum gravity as could be.

“On a day-to-day basis, I may not be doing a calculation that has anything directly to do with string theory,” Keeler says. “But it’s all about these ideas that came from string theory.”

Finding a theory of everything is unlikely to change the way most of us go about our business, even if our business is science. That’s the normal way of things: Chemists and electricians don't need to use quantum electrodynamics, even though that theory underlies their work. But finding such a theory could change the way we think of the universe on a fundamental level.

Even a successful theory of everything is unlikely to be a final theory. If we’ve learned anything from 150 years of unification, it’s that each step toward bringing theories together uncovers something new to learn.

### Axel Maas - Looking Inside the Standard Model

How to search for dark, unknown things: A bachelor thesis
Today, I would like to write about a recently finished bachelor thesis on the topic of dark matter and the Higgs. Though I will also present the results, the main aim of this entry is to describe an example of such a bachelor thesis in my group. I will try to follow up also in the future with such entries, to give those interested in working in particle physics an idea of what one can do already at a very early stage in one's studies.

The framework of the thesis is the idea that dark matter could interact with the Higgs particle. This is a serious possibility, as both objects are somehow related to mass. So far, there is also no substantial reason why this should not be the case. The unfortunate problem is: how strong is this effect? Can we measure it, e.g. in the experiments at CERN?

In a master thesis, we are looking into the dynamical features of this idea. This is ongoing, and something I will certainly write about later. Knowing the dynamics, however, is only the first step towards connecting the theory to experiment. To do so, we need the basic properties of the theory. This input is then put through a simulation of what happens in the experiment. Only this result is really interesting for experimental physicists. They then examine what any kind of imperfections of the experiments would change, and from this they can conclude whether they will be able to detect something. Or not.

In the thesis, we did not yet have the results from the master student's work, so we parametrized the possible outcomes. This mainly meant having the mass of the dark matter particle and the strength of its interaction with the Higgs as free parameters to play with. This gave us what we call an effective theory. Such a theory does not describe every detail, but it is sufficiently close to study a particular aspect of a theory. In this case: how dark matter should interact with the Higgs at the CERN experiments.

With this effective theory, it was then possible to use simulations of what happens in the experiment. Since dark matter cannot, as the name says, be directly seen, we needed some kind of marker to say that it had been there. For that purpose we chose the so-called associated production mode.

We knew that the dark matter would escape the experiment undetected. In jargon, this is called missing energy, since we miss the energy of the dark matter particles when we account for all we see. Since we knew what went in, and know that what goes in must come out, anything not accounted for must have been carried away by something we could not directly see. To make sure that this came from an interaction with the Higgs, we needed a tracer that a Higgs had been involved. The simplest solution was to require that there is still a Higgs in the final state. There are also deeper reasons which require that the dark matter in this theory should not only arrive together with a Higgs particle, but should also be radiated off a Higgs particle before the emission of the dark matter particles. The simplest way to check for this is to require, for technical reasons, that besides the Higgs there is also a so-called Z-boson in the end. Thus, we had what we called a signature: look for a Higgs, a Z-boson, and missing energy.

There is, however, one unfortunate thing in known particle physics which makes this more complicated: neutrinos. These particles are also essentially undetectable for an experiment at the LHC. Thus, when produced, they will also escape undetected, as missing energy. Since we detect neither the dark matter nor the neutrinos, we cannot decide what actually escaped. Unfortunately, the tagging with the Higgs and the Z does not help, as neutrinos can also be produced together with them. This is what we call a background to our signal. Thus, it was necessary to account for this background.

Fortunately, there are experiments which can, with a lot of patience, detect neutrinos. They are very different from the ones at the LHC, but they have given us a lot of information on neutrinos. Hence, we knew how often neutrinos would be produced in the experiment. So we would only need to remove this known background from what the simulation gives. Whatever is left would then be the signal of dark matter. If the remainder were large enough, we would be able to see the dark matter in the experiment. Of course, there are many subtleties involved in this process, which I will skip.
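The subtraction described here can be made quantitative with a back-of-the-envelope significance estimate. The function below is my own illustration (not the thesis code) of the common s/√b criterion for deciding whether the leftover signal is large enough compared to fluctuations of the known background:

```python
import math

def excess_significance(observed, expected_background):
    """Rough significance of an excess over a known background,
    using the simple s / sqrt(b) estimate (valid for large counts)."""
    signal = observed - expected_background
    return signal / math.sqrt(expected_background)
```

For example, 1150 observed events over an expected neutrino background of 1000 would be an excess of 150 events against a statistical fluctuation of about 32, i.e. roughly 4.7 standard deviations.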

So the student simulated both cases and determined the signal strength. From that she could deduce that the signal grows quickly with the strength of the interaction. She also found that the signal becomes stronger if the dark matter particles become lighter. That is because there is only a finite amount of energy available to produce them, and the more energy is left over to make the dark matter particles move, the easier it gets to produce them, an effect known in physics as phase space. In addition, she found that if the dark matter particles have half the mass of the Higgs, their production also becomes very efficient. The reason is a resonance. Just like two sounds amplify each other if they are at the same frequency, such amplifications can happen in particle physics.
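Both effects — the phase-space suppression for heavy dark matter and the resonant enhancement near half the Higgs mass — can be mimicked with a toy model. The Breit-Wigner form and all the numbers below are illustrative choices of mine, not the parametrization used in the thesis:

```python
import math

M_HIGGS = 125.0       # Higgs mass in GeV
WIDTH = 0.5           # illustrative resonance width in GeV
E_AVAILABLE = 1000.0  # illustrative energy available in the collision, in GeV

def toy_signal(m_dm, coupling):
    """Toy signal strength for producing a pair of dark matter
    particles of mass m_dm through an intermediate Higgs."""
    s_pair = (2.0 * m_dm) ** 2  # invariant mass squared of the pair at threshold
    # Breit-Wigner propagator: peaks when the pair mass hits the Higgs mass
    propagator = 1.0 / ((s_pair - M_HIGGS**2) ** 2 + (M_HIGGS * WIDTH) ** 2)
    # Phase-space factor: shrinks as the pair uses up the available energy
    x = (2.0 * m_dm / E_AVAILABLE) ** 2
    phase_space = max(0.0, 1.0 - x) ** 1.5
    return coupling**2 * propagator * phase_space
```

With these choices the signal grows with the square of the coupling, falls off for heavier dark matter, and spikes sharply at m_dm ≈ 62.5 GeV, i.e. half the Higgs mass, reproducing the qualitative behaviour found in the thesis.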

The final outcome of the bachelor thesis was thus to tell us, for the values of the two parameters of the effective theory, how strong our signal would be. Once we know these values from our microscopic theory in the master project, we will know whether we have a chance to see these particles in this type of experiment.

## June 20, 2016

### Sean Carroll - Preposterous Universe

Father of the Big Bang

Georges Lemaître died fifty years ago today, on 20 June 1966. If anyone deserves the title “Father of the Big Bang,” it would be him. Both because he investigated and popularized the Big Bang model, and because he was an actual Father, in the sense of being a Roman Catholic priest. (Which presumably excludes him from being an actual small-f father, but okay.)

John Farrell, author of a biography of Lemaître, has put together a nice video commemoration: “The Greatest Scientist You’ve Never Heard Of.” I of course have heard of him, but I agree that Lemaître isn’t as famous as he deserves.

### Robert Helling - atdotde

Restoring deleted /etc from TimeMachine
Yesterday, I managed to empty the /etc directory on my macbook (don't ask how I did it. I was working on subsurface and had written a perl script to move system files around that had to be run with sudo. And I was still debugging...).

Anyway, once I realized what the problem was I did some googling but did not find the answer. So here, as a service to fellow humans googling for help, is how to fix this.

The problem is that in /etc all kinds of system configuration files are stored, and without it the system no longer knows how to do a lot of things. For example, it contains /etc/passwd, which holds a list of all users, their home directories and similar things. Or /etc/shadow, which contains (hashed) passwords. Or, and this was most relevant in my case, /etc/sudoers, which contains a list of users who are allowed to run commands with sudo, i.e. execute commands with administrator privileges (in the GUI this shows up as a modal dialog asking you to type in your password to proceed).

In my case, all was gone. But, luckily enough, I had a time machine backup. So I could go 30 minutes back in time and restore the directory contents.

The problem was that after restoring it, it ended up as a symlink to /private/etc, and user helling wasn't allowed to access its contents. And I could not sudo to get access, since the system could not determine that I am allowed to sudo, as it could not read /etc/sudoers.

I tried a couple of things including a reboot (as a last resort I figured I could always boot in target disk mode and somehow fix the directory) but it remained in /private/etc and I could not access it.

Finally I found the solution (so here it is): I could look at the folder in Finder (it had a red no-entry sign on it, meaning that I could not open it). But I could right-click and select "Get Info", and there I could open the lock by typing in my password (no idea why that worked), give myself read (and for that matter write) permissions, and then everything was fine again.
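For anyone who prefers the command line over Finder: once you have a root shell by some other route (for example the Terminal in macOS Recovery, which runs as root and therefore does not need sudo), the same repair can be sketched as below. The paths are assumptions for illustration; adjust them to wherever your Time Machine copy actually ended up.

```shell
# Sketch only: run from a root shell (e.g. Recovery-mode Terminal),
# because sudo itself cannot work while /etc/sudoers is unreadable.

SOURCE="/Volumes/TimeMachineRestore/private/etc"   # restored backup copy (assumed path)
TARGET="/Volumes/Macintosh HD/private/etc"         # the real directory behind the /etc symlink

# Copy the contents back, preserving modes and timestamps.
cp -Rp "$SOURCE/." "$TARGET/"

# Restore the ownership and permissions the system expects.
chown -R root:wheel "$TARGET"
chmod 755 "$TARGET"
chmod 440 "$TARGET/sudoers"   # sudo refuses to run if sudoers has loose permissions
```

The key point is the last line: even after the files are back, sudo will not work until /etc/sudoers is owned by root and not world-accessible.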

### Tommaso Dorigo - Scientificblogging

Program Of The Statistics Session At QCHS 2016
I am happy to announce here that a session on "Statistical Methods for Physics Analysis in the XXI Century" will take place at the "Quark Confinement and the Hadron Spectrum" conference, which will be held in Thessaloniki on August 28th to September 3rd this year. I have already mentioned this a few weeks ago, but now I can release a tentative schedule of the two afternoons devoted to the topic.

### Lubos Motl - string vacua and pheno

Formal string theory is physics, not mathematics
I was sent a book on string theory by Joseph Conlon and I pretty much hate it. However, it's the concise, frank comments such as his remark at 4gravitons that make it really transparent why I couldn't endorse almost anything that this Gentleman says.
I can’t agree on the sociology. Most of what goes under the name of ‘formal string theory’ (including the majority of what goes under the name of QFT) is far closer in spirit and motivation to what goes on in mathematics departments than in physics departments. While people working here like to call themselves ‘physicists’, in reality what is done has very little in common with what goes on with the rest of the physics department.
What? If you know the amusing quiz "Did Al Gore or Unabomber say it?", these sentences could be similarly used in the quiz "Did Conlon or Sm*lin say it?".

The motivation of formal string theory is to understand the truly fundamental ideas in string theory which is assumed by the practitioners to be the theory explaining or predicting everything in the Universe that may be explained or predicted. How may someone say that the motivation is similar to that of mathematicians? By definition, mathematicians study the truth values of propositions within axiomatic systems they invented, whether or not they have something to do with any real-world phenomena.

In the past, physicists and mathematicians co-existed and almost everyone was both (and my Alma Mater was the Department of Mathematics and Physics – in Prague, people acknowledge the proximity of the subjects). But for more than 100 years, the precise definition of mathematics as something independent of the "facts about Nature" has been carefully observed. Although formal string theorists would be mostly OK if they worked in mathematics departments, and some of them do, it would be a wrong classification of the subject.

Formal string theory uses mathematics more intensely, more carefully, and it often uses more advanced mathematics than other parts of physics. But all these differences are purely quantitative – to some extent, all of physics depends on mathematics that has to be done carefully enough and that isn't quite trivial – while the difference between mathematics and physics is qualitative.

Conlon also says that what formal string theorists do has very little in common with the work in the rest of the physics department. One problem with the assertion is that all work in a physics department studies phenomena that in principle follow from the most fundamental laws of Nature – which most of the top formal theorists believe to be the laws of string theory. For this reason, to say that these subdisciplines have nothing in common is laughable.

But they surely focus on very different aspects of the physical objects or reality. However, that's true for basically every other project investigated by people in the physics departments. Lene Hau is playing with some exotic states of materials that allow her to slow down light basically to zero speed. Now, what does it have to do with the work in the rest of her physics department? No classic condensed matter physicists are talking about slow light. For particle physicists, the speed of light is basically always 299,792,458 m/s. Someone else measures the magnetic moment of the electron with the accuracy of one part per quadrillion. It's all about the last digits. What do those have to do with the work in the rest of the physics department?

People are simply doing different things. For Mr Conlon to try to single out formal string theory is absolutely dishonest and totally idiotic.

He seems to be unaware of lots of totally basic facts – such as the fact that his very subfield of string phenomenology is also just a ramification of research in formal string theory. The physicists who first found the heterotic string were doing formal string theory – very analogous activity to what formal string theorists are doing today. People who found its Calabi-Yau compactifications were really doing formal string theory, too. And so on. Conlon's own work is just a minor derivative enterprise extending some previous work that may be mostly classified as formal string theory. How could his detailed work belong to physics if the major insights in his subdiscipline wouldn't belong to physics? It makes absolutely no sense.

Also, formal string theorists are in no way the "first generations of formal theorists". Formal theory has been around for a long time and it's been important at all times. The categorization is sometimes ambiguous. But I think it's right to say that e.g. Sidney Coleman was mostly a formal theorist (in quantum field theory).

The attempted demonization of formal string theory by Mr Conlon makes absolutely no sense. It's at least as irrational and as unjustifiable as the demonization of Jewish physicists in Germany of the 1930s. In both cases, the demonized entities are really responsible for something like 50% of the progress in cutting-edge physics.
From my perspective in string pheno/cosmo/astro, people go into formal topics because they are afraid of real physics – they want to be in areas that are permanently safe, and where their ideas can never be killed by a rude injection of data.
He's so hostile that the quote above could have been said by Peter W*it or the Unabomber, after all. Are you serious, Conlon? And how many papers of yours have interacted with some "rude injection of data"? There have been virtually no data about this kind of questions in your lifetime so what the hell are you talking about?

By definition, formal theorists are indeed people who generally don't want to deal with the daily dirty doses of experimental data. But there's absolutely nothing wrong with that. After all, Albert Einstein could have been classified in the same way, and so could Paul Dirac and others. Formal theorists focus on careful mathematical thinking about known facts, which seems to them a more reliable way to find the truth about Nature. And this approach has proven valuable in so many examples. Perhaps a majority of the important developments in modern physics may be attributed to deeply thinking theorists who didn't want to deal with "rude injections of data".

Another aspect of the quote that is completely wrong is the identification of the "permanent safety" with the "avoidance of rude injections of data". These two things aren't the same. They're not even close. None of them is a subset of the other.

First, even if formal string theory were classified as mathematics, it still wouldn't mean that its papers are permanently safe. If someone writes a wrong paper or makes an invalid conjecture, the error may often be found and a counterexample may be invented. Formal theoretical papers and even mathematical papers run pretty much the same risk of being discredited as papers about raw experimental data. And string theorists often realize that the mathematical investigation of a particular configuration in string theory may be interpreted as a complete analogy of an experiment. Such a calculation may test a "stringy principle" just like regular experiments are testing particular theories.

If you write a mathematical paper really carefully and you prove something, it's probably going to be safe. To a lesser extent, that's true in formal theoretical physics, too (the extent is lesser because physics can't ever be quite rigorous because we don't know all the right axioms of Nature). But there's nothing wrong about doing careful work that is likely to withstand the test of time. On the contrary, it's better when the theory papers are of this kind – theorists should try to achieve these adjectives. So Conlon's logic is perverse if he presents this kind of "permanent safety" as a disadvantage. Permanently safe theoretical papers would be those that are done really well. They are an ideal that may and should be approached by the theorists but it can't ever be quite reached.

Second, we must carefully ask what is or isn't "permanently safe". In the previous paragraphs, I wrote that even in the absence of raw experimental data, papers or propositions in theoretical physics (and even mathematics) aren't "permanently safe". They can still be shown wrong. On the other hand, Mr Conlon talks about "areas" that are permanently safe. What is exactly this "area"? If he means the whole subdiscipline of formal theory in high-energy physics, that subdiscipline is indeed permanently safe (assuming that the human civilization won't be exterminated or completely intellectually crippled in some way), and it should be. It is just as permanently safe as condensed matter physics. As physics is making progress, people move to the research of new questions. But they still study solid materials – and similarly, they study the most fundamental and theoretical aspects of the laws of physics.

So what the hell is your problem? People just pick fields. Some people pick formal string theory, other people pick other subdisciplines. No particular paper or project or research direction is permanently safe in any of these subdisciplines. But all sufficiently widely defined subdisciplines are permanently safe and that's a good thing, too. For Mr Conlon to single out formal theory for this assault proves that he lacks the integrity needed to do science. There is absolutely no justification for such singling out.
For those of a certain generation – who did PhDs before or within the 10-15 years following the construction of the Standard Model – this is less true, and they generally have a good knowledge of particle physics. But I would say probably >90% of formal people under the age of 40 have basically zero ability to contribute anything in the pheno/cosmo areas; I have talked to enough to know that most have little real knowledge of how the Standard Model (of either particle physics or cosmology) works, how experiments work, or how ideas to go beyond the SM work.
This is of course a massive attack against a large group of (young) theorists. Tetragraviton objects with a counterexample, Edward Hughes (whom Mr Conlon knows), and I could bring even better examples. The quality and versatility of various people differ, too. I don't want to go into names because a credible grading of all formal theorists below 40 years of age would need far more careful research.

Instead, let me assume that what Mr Conlon writes is true. He complains that the formal theorists couldn't usefully do phenomenology. Great and what? In the same way, Mr Conlon or other phenomenologists would be unable to do formal theory or most of its subdivisions. What's the difference? Why would he be demanding that people in a different subdiscipline of high-energy physics should be able to do what he does?

Phenomenology and formal theory are overlapping but they have also been "partly segregated" for quite some time. When Paul Ginsparg founded the arXiv.org (originally xxx.lanl.gov) server around 1991, he already established two different archives, hep-ph and hep-th (phenomenology and theory), and invented the clever name "phenomenology" for the first group. This classification reflected a genuine soft split of the community that existed in the early 1990s. In fact, such a split actually existed before string theory was born. It was just a "finer split" that continued the separation into "theory and experiment", something that could have been observed in physics for more than a century.
I have heard the ‘when something exciting happens we will move in and sort it out’ attitude of formal theorists the entire time I have been in the subject – and it’s deluded BS.
Tetragraviton replies that people are far more flexible than Mr Conlon thinks. But I think that the main problem is that Mr Conlon has wrong expectations about what people should be doing. When someone is in love with things like monstrous moonshine, it's rather unlikely that he will immediately join some "dirty" experiment-driven activity. Instead, such a person – just like every other person – may be waiting for interesting events that happen close enough to his specialization. He has a higher probability to join some "uprising" that is closer to his previous work; and a lower probability to join something very different.

But what's obvious, important, and what seems to be completely misunderstood by Mr Conlon is that there must exist (sufficiently many) formal theorists because the evolution of physics without the most theoretical branch would be unavoidably unhealthy. The people who do primarily formal theory may be good at other things – phenomenology, experiments, tennis, or something else. It's good when they are but that's not their primary task. Even if you can't get people who can be like Enrico Fermi and be good at these different enough methodologies or subdisciplines, it's still true that physics – and string theory – simply needs formal theory.

The most effective way to increase the number of "really versatile and smart" formal theorists is to stop the bullying that surely repels a fraction of the greatest young brains from theoretical physics. But whatever is the fraction of the young big shots who choose one subject or another, it's obvious that formal string theory has to be studied.

On the 4gravitons blog, Haelfix posted a sensible comment about the reasons why many people prefer formal string theory over string phenomenology: the search for the right vacuum seems too hard to them because of the large number of solutions, and because of the inability to compute all things "quite exactly" even in a well-defined string compactification (generic quantities are only calculable in various perturbative schemes etc.). For this reason, many people believe that before physicists identify the "precisely correct stringy model" to describe the world around us, some qualitative progress must take place in the foundations of string theory first – and that's why they think that it's a faster route to progress to study formal string theory at this point. Recent decades seem to vindicate them – formal string theory has produced significantly more profound changes than string phenomenology after the mid 1990s. Those victories of formal string theory include D-branes and all the things they helped to spark: dualities, M-theory, the AdS/CFT correspondence, advances in the black hole information puzzle, the landscape as a sketch of the map of string theory's solutions, and other things. String phenomenology has worked pretty nicely but the changes since the 1980s were relatively incremental in comparison.

Mr Conlon seems to miss all these things – and instead seems to be full of superficial and fundamentally misguided Šmoit-like vitriol that makes him attack whole essential subdisciplines of physics.

## June 19, 2016

### Lubos Motl - string vacua and pheno

Ambulance chasing is a justifiable strategy to search for the truth
As Ben Allanach, a self-described occasional ambulance chaser, explained in his 2014 TRF guest blog, ambulance chasers were originally lawyers with fast cars who were (or are) trying to catch an ambulance (or visit a disaster site), because the sick and injured people in them are potential clients who may have a pretty good reason to sue someone and win the lawsuit. For certain reasons, this practice is illegal in the U.S. and Australia.

Analogously, in particle physics, ambulance chasers are people who write many papers about a topic that is hot, especially one ignited by an excess in the experimental data. This activity is thankfully legal.

The phrase "ambulance chasing" is often used pejoratively. It's partly because the "ambulance chasers" may justifiably look a bit immoral and egotistically ambitious. However, most of the time, it is because the accusers are jealous and lazy losers. Needless to say, it often turns out that there are no patients capable of suing in the ambulance which the critics of ambulance chasing view as a vindication. However, this vindication is not a rule.

The probability to find clients is higher in the ambulances. It's similar as the reason why it's a better investment of money to make Arabs strip at the airport than to ask old white grandmothers to do the same – whether or not some politically correct ideologues want to deny this obvious point.

Is it sensible that we see examples of ambulance chasing such as the 400 or so papers about the 750 GeV cernette diphoton resonance?

It just happens that in recent 2 days, there were two places in the physics blogosphere that discussed a similar topic:
An exchange between 4gravitons and Giotis

Game of Thrones: 750 GeV edition (Resonaances)
It seems rather clear that, much like your humble correspondent, the former is much more sympathetic to the ambulance-chasing episodes than the latter.

My conservative background and that of a formal theorist make me a natural opponent of ambulance chasers. If I oversimplify this viewpoint: we're solving primarily long-term tasks, and it's a symptom of the absence of anchoring when people chase the ambulances. For example, quantum field theory and string theory will almost certainly be with us regardless of some minor episodes and discoveries, and that's why we need to understand them better and shouldn't pay too much attention to some "probably" short-lived fads.

However, the more experiment-oriented parts of particle physics – and even the theoretical ones – are sometimes much more dependent on an exciting breakthrough. Some developments may look like short-lived fads at the beginning but they actually turn out to be important in the long run. A subfield suddenly grows, people write lots of papers about it and they may even switch their fields a little bit. Is that wrong? Is it a pathology?

I don't think so.

As Giotis and Tetragraviton agreed in their discussion, the size of the subfields of string theory (or any other portion of physics research) changes with time. The subfields that recently experienced a perceived "breakthrough" grow bigger. Is that immoral?

Not at all.

It simply means that people appreciate that some methods or ideas or questions have been successful or have demonstrated that they could be a route to learn lots of new truth quickly. And because scientists want to learn as much of the truth as possible, they naturally tend to pick the methods that make this process more efficient. It's common sense. There seem to be low-hanging fruit around the breakthroughs, and people try to pick them. As they're being picked, the perceived density of the fruit may go down and the "fad" may fade away.

So string phenomenology was producing a huge number of papers for a few years after some amazing progress took place in that field. Similarly, the AdS/CFT correspondence opened a whole new industry of papers – papers that study quantum field theories using "holographic" methods. AdS/CFT came from Maldacena's research on formal string theory and the black hole information puzzle. These two interrelated subfields of string theory have actually produced several other subindustries, although smaller than the holographic ones. The word "subindustry" is a more appropriate term than "minirevolution" for something that the cynics call a "fad", especially when this "fad" continues to grow for 20 years or so.

The ability of formal string theory to produce similar breakthroughs is arguably still underestimated.

At any rate, part of the dynamics of the changing relative importance of subfields of fundamental physics – or string theory – is certainly justifiable. I don't want to claim that people study things exactly as much as they should – I have often made it clear that I believe such deviations from the optimum exist, most recently in the previous short paragraph ;-) – but most critics generally misunderstand how sensible it is for people to be excited about different things at different times. Many critics seemingly believe that they are able to determine how much effort should be spent on each thing (such as A, B, C) even though they don't understand anything about A, B, C. But that's nonsense. You just can't judge what fraction of their time physicists should spend on various ideas if you don't know what all of these ideas are.

At Resonaances, Adam Falkowski is basically making fun of the people who have written many papers about the 750 GeV diphoton resonance. With this seemingly hostile attitude to resonances, you may think it's ironic for the blog to be called Resonaances. But maybe the word means Resona-hahaha-ances, i.e. a blog trying to mock resonances.

The top contestants have an output similar to Alessandro Strumia's – some 7 papers on the topic and 500 citations. A paper with 10 authors from December 2015 now has over 300 citations. If they had to divide the citations among the co-authors, it wouldn't be too many per co-author.

Clearly, physicists like Strumia are partially engaging in an activity that could be classified as a sport. But is that wrong? I don't think so. Physics is the process of learning the truth. We often imagine theoretical physicists as monks who despise things like sports, and by my fundamental instincts, I am close enough to that. But the "learning of the truth" isn't in any strict way incompatible with "sports". They're just independent entities. And sports sometimes do help to find the truth in science.

I would agree with some "much softer and more careful" criticisms of ambulance chasing. For example, in most such situations, the law of diminishing returns applies. The value of many papers usually grows sublinearly with their number, which is why most "fads" usually fade away. The value of 7 papers about the resonance is probably smaller than 7 times the value of a single paper about it. First, there is probably some overlap among the papers. Second, even if there is no overlap, many of the "added" papers are probably less interesting, because the author knows which of the ideas are the most promising ones and, by writing many papers, he or she probably has to publish the less promising ones, too.

So this activity may inflate the "total number of citations" more quickly than the underlying value of the physical insights. In the end, the number of citations that people like Alessandro Strumia have accumulated may overstate their overall contributions. But they don't hysterically fight for the opinion that it doesn't. They just do science – and a lot of it. It's up to others who need the number to quantify the contributions.

In the end, I still find it obvious that, on average, it's better when a physicist writes 7 papers about a topic than when he writes 2. The results vary, and 1 paper is often more valuable than 10,000 other papers. But if one imagines that they're papers at the same level, 7 is better than 2. So it's not reasonable for Falkowski to mock Strumia et al. Needless to say, if the resonance goes away, Falkowski will feel vindicated. If it is confirmed, Falkowski will look sort of stupid.

But even if the resonance (or whatever it is) goes away, we can't be sure about it now. The only way to feel "sure" about it is to assume that no important new physics will ever be found. But if you really believe such a thing, you just shouldn't work in this portion of physics, Dr Falkowski, because by your assumption, your work is worthless.

It is absolutely healthy when experimental deviations energize the research into some models. You know, similar research into models takes place even in the absence of excesses. But by Bayes' theorem, the excesses increase the odds that certain models of new physics are right. You can calculate how big this increase is: it is a simple function of the formal $p$-value. If there is just a 0.01% formal probability that the cernette excess is a false positive, it is even justifiable to increase the activity on the models compatible with it by four orders of magnitude. In practice, a much smaller increase takes place. But some increase is undoubtedly justified.
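The size of this Bayesian boost is easy to sketch. A minimal illustration in Python, where the inverse of the formal $p$-value is taken as a crude proxy for the likelihood ratio and the prior odds are purely hypothetical (both are assumptions for illustration, not part of the original argument):

```python
# Crude sketch of the Bayesian update behind the "four orders of magnitude" claim.
# ASSUMPTIONS: 1/p as a likelihood-ratio proxy and a made-up prior; illustration only.
p_value = 1e-4                      # 0.01% formal probability of a false positive
likelihood_ratio = 1.0 / p_value    # odds boost suggested by the excess

prior_odds = 1e-6                   # hypothetical prior odds for this class of models
posterior_odds = prior_odds * likelihood_ratio

print(posterior_odds / prior_odds)  # the factor by which the odds increased, ~10^4
```

Under these toy numbers the odds on the compatible models rise by the quoted four orders of magnitude, which is why even a fraction of that increase in research activity is defensible.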

Physicists often say that they're doing the research purely out of their curiosity and passion for the truth. In practice, it's almost never the case. People have personal ambitions and they also want jobs, salaries, grants, and maybe also fame. But that's true in other occupations, too. I don't think it's right to demonize the athletes among particle physicists. They're pretty amazing.

In particle physics, you can have counterparts of the ATP and WTA rankings in tennis. Strumia is probably below veterans such as John Ellis. In the end, physics isn't just like tennis, so the most important and famous physicists may be – and often are – non-athletes, physics incarnations of monks. They're rather essential. But the physics athletes do a nontrivial part of the progress anyway. Attempts to erase either of these whole groups amount to cultural revolutions of Mao's type. You just shouldn't think about such plans. Everyone who is dreaming about a similar cultural revolution surely misunderstands something important about the processes that make science – and human society or the economy – work.

Whether or not the diphoton resonance is going to be confirmed, it was utterly reasonable, and should be appreciated, that some people did a lot of work on models that could explain such a deviation. At least they worked on an exercise that had a great chance to be relevant in Nature. But even if it weren't relevant, some of the lessons of this research may be applied in a world without the cernette, too. The experimental deviation has also been a natural source of excitement and energy for the phenomenologists, because many of them clearly do believe in their hearts that this excess could be real.

Incidentally, we may learn whether it's real very soon. In 2016, more than 5 inverse femtobarns have already been recorded by each major LHC detector, so before the end of June, 2016 has already beaten all of 2015. The data needed to decide the fate of the theory that the cernette exists with the cross section indicated in 2015 have almost certainly been collected. Someone may already know the answer, but it hasn't gotten out yet. That may change within days.

Bonus
An example of an activity much less justified than ambulance chasing, from Resonaances
Jason Stanidge said...

It doesn't look good IMO that none of the physicists listed come from leading institutions such as Harvard, Cambridge UK etc. Ah well: when the bump fades as will be soon rumoured by Jester, at least there is the possibility of other bumps in the data to look forward to.

Anonymous said...

Hi Jason Stanidge
are you rumouring that the bump is fading away?
Now, this is an example of "Trabant chasing in the hope that the Trabant will turn into an ambulance". Anonymous has no rational reason to think that Jason Stanidge knows anything about the most recent LHC data. He is trying to push Jason to admit that he is an insider who knows something. And Jason has some probability of replying in a way that suggests he may know something. But everyone has a nonzero probability of replying in this way. This "signal" would be artificially constructed by the "experimenter", Anonymous, which makes it much less tangible than the actual diphoton excess produced by the unique 2015 LHC dataset.

By the way, the comment about the leading institutions is illogical, too. The diphoton resonance has surely been worked on by numerous researchers from Harvard and all the other top places, too. Moreover, I find it somewhat bizarre not to count Strumia's affiliation, the CERN theory group, among the top places in particle physics. The winners of the ambulance chasing contests may be from places other than the Ivy League, but that doesn't show there's something "not good" about intense work on currently intriguing experimental signs. You simply can't define the best kind of work as whatever is being done at Harvard.

## June 18, 2016

### Jester - Resonaances

Black hole dark matter
The idea that dark matter is made of primordial black holes is very old but has always been in the backwater of particle physics. The WIMP or asymmetric dark matter paradigms are preferred for several reasons, such as calculability, observational opportunities, and a more direct connection to cherished theories beyond the Standard Model. But in recent months there has been more interest, triggered in part by the LIGO observations of black hole binary mergers. In the first observed event, the mass of each of the black holes was estimated at around 30 solar masses. While such a system may well be of boring astrophysical origin, it is somewhat unexpected, because the typical black holes we come across in everyday life are either a bit smaller (around one solar mass) or much larger (the supermassive black hole in the galactic center). On the other hand, if the dark matter halo were made of black holes, scattering processes would sometimes create short-lived binary systems. Assuming a significant fraction of dark matter in the universe is made of primordial black holes, this paper estimated that the rate of merger processes is in the right ballpark to explain the LIGO events.

Primordial black holes can form from large density fluctuations in the early universe. On the largest observable scales the universe is incredibly homogeneous, as witnessed by the uniform temperature of the Cosmic Microwave Background over the entire sky. However, on smaller scales the primordial inhomogeneities could be much larger without contradicting observations. From the fundamental point of view, large density fluctuations may be generated by several distinct mechanisms, for example during the final stages of inflation in the waterfall phase of the hybrid inflation scenario. While it is rather generic that this or a similar process may seed black hole formation in the radiation-dominated era, severe fine-tuning is required to produce the right amount of black holes and to ensure that the resulting universe resembles the one we know.

All in all, it's fair to say that the scenario where all or a significant fraction of dark matter is made of primordial black holes is not completely absurd. Moreover, one typically expects the masses to span a fairly narrow range. Could it be that the LIGO events are the first indirect detection of dark matter made of O(10)-solar-mass black holes? One problem with this scenario is that it is excluded, as can be seen in the plot. Black holes sloshing through the early dense universe accrete the surrounding matter and produce X-rays, which could ionize atoms and disrupt the Cosmic Microwave Background. In the 10-100 solar mass range relevant for LIGO this effect currently gives the strongest constraint on primordial black holes: according to this paper they are allowed to constitute not more than 0.01% of the total dark matter abundance. In astrophysics, however, not only signals but also constraints should be taken with a grain of salt. In this particular case, the word in town is that the derivation contains a numerical error and that the corrected limit is 2 orders of magnitude less severe than what's shown in the plot. Moreover, this limit strongly depends on the model of accretion, and more favorable assumptions may buy another order of magnitude or two. All in all, the possibility of dark matter made of primordial black holes in the 10-100 solar mass range should not be completely discarded yet. Another possibility is that black holes make up only a small fraction of dark matter, but the merger rate is faster, closer to the estimate of this paper.

Assuming this is the true scenario, how will we know? Direct detection of black holes is discouraged, while the usual cosmic ray signals are absent. Instead, in most of the mass range, the best probes of primordial black holes are various lensing observations. For LIGO black holes, progress may be made via observations of fast radio bursts. These are strong radio signals of (probably) extragalactic origin and millisecond duration. A radio signal passing near an O(10)-solar-mass black hole could be strongly lensed, leading to repeated signals detected on Earth with an observable time delay. In the near future we should observe hundreds of such repeated bursts, or obtain new strong constraints on primordial black holes in the interesting mass ballpark. Gravitational wave astronomy may offer another way. When more statistics are accumulated, we will be able to say something about the spatial distribution of the merger events. Primordial black holes should be distributed like dark matter halos, whereas astrophysical black holes should be correlated with luminous galaxies. Also, the typical eccentricity of astrophysical black hole binaries should be different. With some luck, the primordial black hole dark matter scenario may be vindicated or robustly excluded in the near future.
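The time-delay scale for such lensed bursts can be estimated from the black hole mass alone. A back-of-the-envelope sketch, using the Schwarzschild time $4GM/c^3$ as a rough proxy (an assumption for illustration; real lensing delays also depend on the impact parameter and the source redshift):

```python
# Rough lensing time-delay scale for a LIGO-like primordial black hole.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_sun = 1.989e30    # solar mass, kg

M = 30 * M_sun                 # an O(10)-solar-mass black hole
delay = 4 * G * M / c**3       # Schwarzschild time scale, seconds

print(delay * 1e3)             # ~0.6 ms, comparable to the millisecond burst duration
```

The delay is comparable to the burst duration itself, which is why repeated, slightly offset copies of the same burst would be a detectable signature.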

## June 17, 2016

### Jacques Distler - Musings

Coriolis

I really like the science fiction TV series The Expanse. In addition to a good plot and a convincing vision of human society two centuries hence, it depicts, as Phil Plait observes, a lot of good science in a matter-of-fact, almost off-hand fashion. But one scene (really, just a few dialogue-free seconds in a longer scene) has been bothering me. In it, Miller, the hard-boiled detective living on Ceres, pours himself a drink. And we see — as the whiskey slowly pours from the bottle into the glass — that the artificial gravity at the lower levels (where the poor people live) is significantly weaker than near the surface (where the rich live) and that there’s a significant Coriolis effect. Unfortunately, the effect depicted is 3 orders-of-magnitude too big.

To explain, six million residents inhabit the interior of the asteroid, which has been spun up to provide an artificial gravity. Ceres has a radius $R_C = 4.73\times 10^5\ \text{m}$ and a surface gravity $g_C = 0.27\ \text{m}/\text{s}^2$. The rotational period is supposed to be 40 minutes ($\omega \sim 2.6\times 10^{-3}\,/\text{s}$). Near the surface, this yields $\omega^2 R_C\left(1-\epsilon^2\right) \equiv \omega^2 R_C - g_C \sim 0.3$ g. On the innermost level, $R = \tfrac{1}{3}R_C$, and the effective artificial gravity is only 0.1 g.

So how big is the Coriolis effect in this scenario?

The equations1 to be solved are

(1)
$$\begin{aligned}
\frac{d^2 x}{dt^2} &= \omega^2\left(1-\epsilon^2\right)x - 2\omega\frac{dy}{dt}\\
\frac{d^2 y}{dt^2} &= \omega^2\left(1-\epsilon^2\right)(y-R) + 2\omega\frac{dx}{dt}
\end{aligned}$$

with initial conditions $x(0)=\dot{x}(0)=y(0)=\dot{y}(0)=0$. The exact solution is elementary but, for $\omega t \ll 1$, i.e. for times much shorter than the rotational period, we can approximate

(2)
$$\begin{aligned}
x(t) &= \tfrac{1}{3}\left(1-\epsilon^2\right)R\,(\omega t)^3 + O\bigl((\omega t)^5\bigr),\\
y(t) &= -\tfrac{1}{2}\left(1-\epsilon^2\right)R\,(\omega t)^2 + O\bigl((\omega t)^4\bigr)
\end{aligned}$$

From (2), if the whiskey falls a distance $h \ll R$, it undergoes a lateral displacement

(3)
$$\Delta x = \tfrac{2}{3}\,h\left(\frac{2h}{\left(1-\epsilon^2\right)R}\right)^{1/2}$$

For $h=16$ cm and $R=\tfrac{1}{3}R_C$, this is $\frac{\Delta x}{h} = 10^{-3}$, which is 3 orders of magnitude smaller than depicted in the screenshot above2.
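Equation (3) and the quoted numbers are easy to check. A quick numerical sketch, using the values stated in the post (40-minute period, $g_C = 0.27\ \text{m}/\text{s}^2$, $R = R_C/3$, $h = 16$ cm):

```python
import math

R_C = 4.73e5                     # Ceres radius, m
g_C = 0.27                       # Ceres surface gravity, m/s^2
omega = 2 * math.pi / (40 * 60)  # 40-minute rotational period, rad/s

# epsilon^2 defined via omega^2 R_C (1 - eps^2) = omega^2 R_C - g_C
eps2 = g_C / (omega**2 * R_C)
R = R_C / 3                      # innermost level
h = 0.16                         # 16 cm pour

# lateral displacement from equation (3)
dx = (2.0 / 3.0) * h * math.sqrt(2 * h / ((1 - eps2) * R))
print(dx / h)                    # ~1e-3, as claimed
```

The ratio comes out just under $10^{-3}$, confirming that the on-screen deflection is about a thousand times too large.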

So, while I love the idea of the Coriolis effect appearing — however tangentially — in a TV drama, this really wasn’t the place for it.

1 Here, I’m approximating Ceres to be a sphere of uniform density. That’s not really correct, but since the contribution of Ceres’ intrinsic gravity to (3) is only a 5% effect, the corrections from non-uniform density are negligible.

2 We could complain about other things: like that the slope should be monotonic (very much unlike what’s depicted). But that seems a minor quibble, compared to the effect being a thousand times too large.

### Quantum Diaries

Enough data to explore the unknown

The Large Hadron Collider (LHC) at CERN has already delivered more high energy data than it had in 2015. To put this in numbers, the LHC has produced 4.8 fb-1, compared to 4.2 fb-1 last year, where fb-1 represents one inverse femtobarn, the unit used to evaluate the size of a data sample. This was achieved in just one and a half months, compared to five months of operation last year.
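For a rough sense of the speed-up, one can compare average delivery rates from the figures quoted above (the month counts are the post's own approximations):

```python
# Average luminosity delivery rate, 2016 vs 2015, from the quoted figures.
lumi_2016, months_2016 = 4.8, 1.5   # fb^-1 delivered in ~1.5 months
lumi_2015, months_2015 = 4.2, 5.0   # fb^-1 delivered in ~5 months

rate_2016 = lumi_2016 / months_2016
rate_2015 = lumi_2015 / months_2015

print(rate_2016 / rate_2015)        # roughly a 4x faster delivery rate
```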

With this data at hand, and the projected 20-30 fb-1 until November, both the ATLAS and CMS experiments can now explore new territories and, among other things, cross-check on the intriguing events they reported having found at the end of 2015. If this particular effect is confirmed, it would reveal the presence of a new particle with a mass of 750 GeV, six times the mass of the Higgs boson. Unfortunately, there was not enough data in 2015 to get a clear answer. The LHC had a slow restart last year following two years of major improvements to raise its energy reach. But if the current performance continues, the discovery potential will increase tremendously. All this to say that everyone is keeping their fingers crossed.

If any new particle were found, it would open the doors to bright new horizons in particle physics. Unlike the discovery of the Higgs boson in 2012, if the LHC experiments discover an anomaly or a new particle, it would bring a new understanding of the basic constituents of matter and how they interact. The Higgs boson was the last missing piece of the current theoretical model, called the Standard Model, and this model can no longer accommodate new particles. It has been known for decades that this model is flawed, but so far theorists have been unable to predict which theory should replace it, and experimentalists have failed to find the slightest concrete signs of a broader theory. We need new experimental evidence to move forward.

Although the new data is already being reconstructed and calibrated, it will remain "blinded" until a few days prior to August 3, the opening date of the International Conference on High Energy Physics. This means that until then, the region where this new particle could be remains masked to prevent biasing the data reconstruction process. The same selection criteria that were used for last year's data will then be applied to the new data. If a similar excess is still observed at 750 GeV in the 2016 data, the presence of a new particle will no longer be in doubt.

Even if this particular excess turns out to be just a statistical fluctuation, the bane of physicists’ existence, there will still be enough data to explore a wealth of possibilities. Meanwhile, you can follow the LHC activities live or watch CMS and ATLAS data samples grow. I will not be available to report on the news from the conference in August due to hiking duties, but if anything new is announced, even I expect to hear its echo reverberating in the Alps.

Pauline Gagnon

To find out more about particle physics, check out my book « Who Cares about Particle Physics: making sense of the Higgs boson, the Large Hadron Collider and CERN », which can already be ordered from Oxford University Press. In bookstores after 21 July. Easy to read: I understood everything!

The total amount of data delivered in 2016 at an energy of 13 TeV to the experiments by the LHC (blue graph) and recorded by CMS (yellow graph) as of 17 June. One fb-1 of data is equivalent to 1000 pb-1.


## June 16, 2016

### Marco Frasca - The Gauge Connection

Higgs or not Higgs, that is the question

LHCP2016 is under way, with further analyses of the 2015 data by people at CERN. We have all seen history unfolding since the epochal event of 4 July 2012, when the great discovery was announced. Since then, Kibble has passed away. What remains is our need for a deep understanding of the Higgs sector of the Standard Model. Quite recently, the LHC restarted operations at the top achievable energy, and data are being gathered and analysed in view of the summer conferences.

The scalar particle observed at CERN has a mass of about 125 GeV. Data gathered in 2015 seem to indicate a further state at 750 GeV, but this is yet to be confirmed. Anyway, both ATLAS and CMS see this bump in the $\gamma\gamma$ data, and this seems to follow the story of the discovery of the Higgs particle. But we do not yet have a full understanding of the Higgs sector. The reason is that the data gathered in run I were not enough to reduce the error bars to values small enough to decide whether the Standard Model wins or not. Besides, as shown by run II, further excitations seem to pop up. So several theoretical proposals for the Higgs sector still stand and could be confirmed as early as August this year.

Indeed, there is great news already in the data presented at LHCP2016. As I pointed out here, there is a curious behavior of the strengths of the signals of the Higgs decays into $WW,\ ZZ$, and some tension, even if small, appeared between the ATLAS and CMS results. ATLAS seemed to have seen more events than CMS, moving these contributions well beyond unity, but, as CMS had them somewhat below, the average was the expected unity, agreeing with the Standard Model. The strength of the signals is essential to understand whether the propagator of the Higgs field is the usual free-particle one or has some factor reducing it significantly, with contributions from higher states summing to unity. In this case, the observed state at 125 GeV would just be the ground state of a tower of particles, the higher states being its excitations. As I showed recently, this is not physics beyond the Standard Model; rather, it is obtained by solving exactly the quantum equations of motion of the Higgs sector (see here), treating the other fields interacting with the Higgs field as a perturbation.

So let us recap the situation for the strengths of the signals for the $WW,\ ZZ$ decays of the Higgs particle. At LHCP2015 the data were given in the following slide:

From the table one can see that the signal strengths for the $WW,\ ZZ$ decays in ATLAS are somewhat above unity, while in CMS they are practically unity for $ZZ$ but, more interestingly, 0.85 for $WW$. We know that far more data have been gathered for the $WW$ decay than for the $ZZ$ decay, and the error bars are large enough that this is not a concern here. The value 0.85 agrees with the already cited exact computations from the Higgs sector but, within the errors, is also in overall agreement with the Standard Model. This seems to point toward an overestimated number of events in ATLAS and a somewhat reduced number of events in CMS, at least for the $WW$ decay.

At LHCP2016 new data have been presented by the two collaborations, at least for the $ZZ$ decay, and the results are striking. For the scenario provided by the exact solution of the Higgs sector to agree with the data, run II should confirm these numbers and the ATLAS values should go down significantly. This is indeed what is going on! This is the corresponding slide:

This result is striking in itself, as it shows a tendency toward a decreasing value where previously it was around unity. Now it is aligned with the value seen at CMS for the $WW$ decay! The value seen is again in agreement with that given by the exact solution of the Higgs sector. And ATLAS? This is the most shocking result: they see a significantly reduced set of events, and the signal strength they obtain is now aligned with that of CMS (see Strandberg's talk, page 11).

What should one conclude from this? If the state at 750 GeV is confirmed then, since the spectrum given by the exact solution of the Higgs sector is an integer multiplied by a mass, it would sit at $n=6$. Together with the production strengths, if further data confirm them, the proper scenario for the breaking of the electroweak symmetry would be exactly the one described by the exact solution. Of course, this should be obviously true, but an experimental confirmation is essential for many reasons, not least the form of the Higgs potential: if the numbers are these, the one postulated in the sixties would be the correct one. Another important reason is that the coupling to other matter does not change the spectrum of the theory in a significant way.

So, to answer the question in the title, it remains only to wait a few weeks. Then the summer conferences will start and, paraphrasing Coleman: God knows, I know, and by the end of the summer we all know.

Marco Frasca (2016). A theorem on the Higgs sector of the Standard Model. Eur. Phys. J. Plus 131: 199. arXiv:1504.02299v3

Filed under: Particle Physics, Physics Tagged: ATLAS, CERN, CMS, Higgs decay, Higgs particle, LHC, Standard Model

## June 15, 2016

### Clifford V. Johnson - Asymptotia

The Red Shoes…

Well, this conversation (for the book) takes place in a (famous) railway station, so it would be neglectful of me to not have people scurrying around and so forth. I can't do too many of these... takes a long time to draw all that detail, then put in shadows, then paint, etc. Drawing directly on screen saves time (cutting out scanning, adjusting the scan, etc), but still...

This is a screen shot (literally, sort of - I just pointed a camera at it) of a detailed large panel in progress. I got bored doing the [...]


### Symmetrybreaking - Fermilab/SLAC

Second gravitational wave detection announced

For a second time, scientists from the LIGO and Virgo collaborations saw gravitational waves from the merger of two black holes.

Scientists from the LIGO and Virgo collaborations announced today the observation of gravitational waves from a set of merging black holes.

This follows their previous announcement, just four months ago, of the first ever detection of gravitational waves, also from a set of merging black holes.

The detection of gravitational waves confirmed a major prediction of Albert Einstein’s 1915 general theory of relativity. Einstein posited that every object with mass exerts a gravitational pull on everything around it. When a massive object moves, its pull changes, and that change is communicated in the form of gravitational waves.

Gravity is by far the weakest of the known forces, but if an object is massive enough and accelerates quickly enough, it creates gravitational waves powerful enough to be observed experimentally. LIGO, or Laser Interferometer Gravitational-wave Observatory, caught the two sets of gravitational waves using lasers and mirrors.

LIGO consists of two huge interferometers in Livingston, Louisiana, and Hanford, Washington. In an interferometer, a laser beam is split and sent down a pair of perpendicular arms. At the end of each arm, the split beams bounce off of mirrors and return to recombine in the center. If a gravitational wave passes through the laser beams as they travel, it stretches space-time in one direction and compresses it in another, creating a mismatch between the two.
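That mismatch can be put in numbers with a toy calculation. The figures below are illustrative (LIGO's arms really are 4 km, but the laser wavelength and strain are representative values, not official parameters):

```python
import math

# Toy model of the interferometer response described above: a passing wave
# with strain h stretches one arm and squeezes the other, so the recombined
# beams pick up a relative phase.  Numbers are illustrative.
L = 4000.0             # m, arm length (LIGO's arms are 4 km)
wavelength = 1.064e-6  # m, a typical infrared laser wavelength
h = 1e-21              # dimensionless strain, a representative magnitude

dL = h * L                                  # arm-length mismatch
dphi = 2 * math.pi * (2 * dL) / wavelength  # round trip doubles the path change
print(f"mismatch = {dL:.1e} m, phase shift = {dphi:.1e} rad")
```

Even this crude estimate shows why the measurement is so hard: the phase shift is a minuscule fraction of a radian, which is why the instrument recycles the light over many round trips and fights every source of noise.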

Scientists on the Virgo collaboration have been working with LIGO scientists to analyze their data.

With this second observation, “we are now a real observatory,” said Gabriela Gonzalez, LIGO spokesperson and professor of physics and astronomy at Louisiana State University, in a press conference at the annual meeting of the American Astronomical Society.

The latest discovery was accepted for publication in the journal Physical Review Letters.

On Christmas evening in 2015, a signal that had traveled about 1.4 billion light years reached the twin LIGO detectors. The distant merging of two black holes caused a slight shift in the fabric of space-time, equivalent to changing the distance between the Earth and the sun by a fraction of an atomic diameter.
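That "fraction of an atomic diameter" claim is easy to check, taking the strain amplitude quoted for this event elsewhere in this digest (about 3 in 10^22) as an assumption:

```python
# Back-of-envelope check of the quoted distance shift.
# Assumed strain h ~ 3e-22 ("300 parts in a trillion trillion").
h = 3e-22            # dimensionless strain
earth_sun = 1.496e11 # metres, the Earth-sun distance (one AU)
atom = 1e-10         # metres, a typical atomic diameter

shift = h * earth_sun             # change in the Earth-sun distance
fraction_of_atom = shift / atom
print(f"shift = {shift:.2e} m = {fraction_of_atom:.2f} atomic diameters")
```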

The black holes were 14 and eight times as massive as the sun, and they merged into a single black hole weighing 21 solar masses. That might sound like a lot, but these were relative flyweights compared to the black holes responsible for the original discovery, which weighed 36 and 29 solar masses.

“It is very significant that these black holes were much less massive than those observed in the first detection,” Gonzalez said in a press release. “Because of their lighter masses compared to the first detection, they spent more time—about one second—in the sensitive band of the detectors.”

The LIGO detectors saw almost 30 of the last orbits of the black holes before they coalesced, Gonzalez said during the press conference.

LIGO’s next data-taking run will begin in the fall. The Virgo detector, located near Pisa, Italy, is expected to come online in early 2017. Additional gravitational wave detectors are in the works in Japan and India.

Additional detectors will make it possible not only to find evidence of gravitational waves, but also to triangulate their origins.

On its own, LIGO is “more of a microphone,” capturing the “chirps” from these events, Gonzalez said.

The next event scientists are hoping to “hear” is the merger of a pair of neutron stars, said Caltech’s David Reitze, executive director of the LIGO laboratory, at the press conference.

Whereas two black holes merging are not expected to release light, a pair of neutron stars in the process of collapsing into one another could produce a plethora of observable gamma rays, X-rays, infrared light and even neutrinos.

In the future, gravitational wave hunters hope to be able to alert astronomers to an event with enough time and precision to allow them to train their instruments on the area and see those signals.

### Matt Strassler - Of Particular Significance

LIGO detects a second merger of black holes

There’s additional news from LIGO (the Laser Interferometer Gravitational-wave Observatory) about gravitational waves today. What was a giant discovery just a few months ago will soon become almost routine… but for now it is still very exciting…

LIGO got a Christmas (US) present: on Dec 25th/26th 2015, two more black holes were detected coalescing 1.4 billion light years away — changing the length of LIGO’s arms by 300 parts in a trillion trillion, even less than the first merger observed in September. The black holes had 14 solar masses and 8 solar masses, and merged into a black hole with 21 solar masses, emitting 1 solar mass of energy in gravitational waves. In contrast to the September event, which was short and showed just a few orbits before the merger, in this event nearly 30 orbits over a full second were observed, making more information available to scientists about the black holes, the merger, and general relativity.  (Apparently one of the incoming black holes was spinning with at least 20% of the maximum possible rotation rate for a black hole.)

The signal is not as “bright” as the first one, so it cannot be seen by eye if you just look at the data; to find it, some clever mathematical techniques are needed. But the signal, after signal processing, is very clear. (Signal-to-noise ratio is 13; it was 24 for the September detection.) The probability of such a clear signal occurring due to random noise corresponds to 5 standard deviations — officially a detection. The corresponding “chirp” is nowhere near as obvious, but there is a faint trace.
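The "clever mathematical techniques" are of the matched-filtering family: slide a template waveform along the data and look for a correlation peak. A self-contained toy version (synthetic chirp, synthetic noise, nothing like the real pipeline) shows the idea:

```python
import numpy as np

# Toy matched filter: bury a weak chirp in louder noise, then recover its
# location by correlating the data against the known template.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 4000)
template = np.sin(2 * np.pi * (20 + 40 * t) * t) * np.exp(-2 * (1 - t))  # toy chirp

data = rng.normal(0, 2, 8000)                      # noise louder than the signal
true_offset = 3000
data[true_offset:true_offset + 4000] += template   # hide the chirp in the noise

# Slide the template along the data and correlate at every lag.
snr = np.correlate(data, template, mode="valid")
print("recovered offset:", int(np.argmax(snr)))    # peaks near true_offset
```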

This gives two detections of black hole mergers in about 48 days of quality data taken in 2015. There’s also a third “candidate”, not so clear — signal-to-noise of just under 10. If it is really due to gravitational waves, it would be merging black holes again… midway in size between the September and December events… but it is borderline, and might just be a statistical fluke.

It is interesting that we already have two, maybe three, mergers of large black holes… and no mergers of neutron stars with black holes or with each other, which are harder to observe. It seems there really are a lot of big black holes in binary pairs out there in the universe. Incidentally, the question of whether they might form the dark matter of the universe has been raised; it’s still a long-shot idea, since there are arguments against it for black holes of this size, but seeing these merger rates one has to reconsider those arguments carefully and keep an open mind about the evidence.

Let’s remember also that Advanced LIGO is still not running at full capacity. When LIGO starts its next run, six months long starting in September, the improvements over last year’s run will probably give a 50% to 100% increase in the rate of observed mergers. In the longer term, one merger per week is possible.

Meanwhile, VIRGO in Italy will come online soon too, early in 2017. Japan and India are also getting into the game over the coming years. More detectors will allow scientists to know where on the sky the merger took place, which in turn allows normal telescopes to look for flashes of light (or other forms of electromagnetic radiation) that might occur simultaneously with the merger… as is expected for neutron star mergers but not widely expected for black hole mergers.  The era of gravitational wave astronomy is underway.

Filed under: Astronomy, Dark Matter, Gravitational Waves Tagged: black holes, Gravitational Waves, LIGO

### Jacques Distler - Musings

Moonshine Paleontology

Back in the Stone Age, Bergman, Varadarajan and I wrote a paper about compactifications of the heterotic string down to two dimensions. The original motivation was to rebut some claims that such compactifications would be incompatible with holography.

To that end, we cooked up some examples of compactifications with no massless bosonic degrees of freedom, but lots of supersymmetry. Not making any attempt to be exhaustive, we wrote down one example with $(8,8)$ spacetime supersymmetry, and another class of examples with $(24,0)$ spacetime supersymmetry.

I never thought much more about the subject (the original motivation having long since been buried by the sands of history), but recently these sorts of compactifications have come back into fashion from the point of view of Umbral Moonshine and (thanks to Yuji Tachikawa for pointing it out to me) one recent paper makes extensive use of our $(24,0)$ examples.

Unfortunately (as sort of a digression in their analysis), they say some things about these models which are not quite correct.

The worldsheet theory of these heterotic string compactifications is an asymmetric orbifold of a toroidal compactification. The latter is specified by an even self-dual Lorentzian lattice, $\Lambda$, of signature $(24,8)$. We took $\Lambda = \Lambda_{24} \oplus \Lambda_{E_8}$, where $\Lambda_{24}$ is a Niemeier lattice. For our $(8,8)$ model, we took $\Lambda_{24}$ to be the Leech lattice, and the orbifold action to be a $\mathbb{Z}_2$ acting only on the left-movers (yielding the famous Monster Module as the left-moving CFT).

Our $(24,0)$ models were obtained by having the $\mathbb{Z}_2$ act only on the right-movers, with a large set of choices for the left-moving CFT.

The resulting theory in 2D spacetime (unlike the $(8,8)$ case) is chiral and hence has a potential gravitational anomaly. The anomaly is cancelled by the dimensional reduction of the Green-Schwarz term, which – in 2D – has the particularly simple form $k\int B$.

Paquette et al make the nice observation that this Green-Schwarz term is a tadpole for the (constant mode of the) $B$-field which, if nonzero, needs to be canceled by some number of space-filling fundamental strings. Unfortunately, they don’t state quite correctly what the strength of the tadpole is, nor how many space-filling strings are required to cancel it.

We can understand the correct answer by looking at the gravitational anomaly. We found that the spacetime theory has no massless bosonic degrees of freedom and $24 n_1$ massless Majorana-Weyl fermions, all of the same chirality, where $n_1$ is the number of $h=1$ primary fields in the aforementioned $c=24$ left-moving CFT. In the conventional normalization, that’s a contribution of $-12 n_1$ to the gravitational anomaly.

The light-cone gauge worldsheet theory of our heterotic string has $c_L = 24$, $c_R = 8+4 = 12$, and so a space-filling string contributes $c_L - c_R = +12$ to the gravitational anomaly. We conclude that we need precisely $n_1$ space-filling strings.

Now, Paquette et al are particularly interested in the case where the left-moving CFT is the Monster Module, which is the unique case¹ with $n_1 = 0$. So, in their case, there is no tadpole.

Admittedly, for our other examples, with $n_1 \neq 0$, our story was incomplete. The 1-loop tadpole for the $B$-field requires the introduction of $n_1$ space-filling strings. But, while that does make the physics more interesting, it doesn’t affect our conclusion about Goheer et al.

¹ This model is a $\mathbb{Z}_2$-orbifold of our $(8,8)$ model. The orbifolding kills 8 of the original supersymmetries, but 16 new ones arise in the twisted sector.

## June 14, 2016

### Matt Strassler - Of Particular Significance

Giving two free lectures 6/20,27 about gravitational waves

For those of you who live in or around Berkshire County, Massachusetts, or know people who do…

Starting next week I’ll be giving two free lectures about the LIGO experiment’s discovery of gravitational waves.  The lectures will be at 1:30 pm on Mondays June 20 and 27, at Berkshire Community College in Pittsfield, MA.  The first lecture will focus on why gravitational waves were expected by scientists, and the second will be on how gravitational waves were discovered, indirectly and then directly.  No math or science background will be assumed.  (These lectures will be similar in style to the ones I gave a couple of years ago concerning the Higgs boson discovery.)

Here’s a flyer with the details:  http://berkshireolli.org/ProfessorMattStrasslerOLLILecturesFlyer.pdf

Filed under: Astronomy, Gravitational Waves, LHC News, Public Outreach Tagged: astronomy, black holes, Gravitational Waves, PublicOutreach, PublicTalks

### Symmetrybreaking - Fermilab/SLAC

The neutrino turns 60

Project Poltergeist led to the discovery of the ghostly particle. Sixty years later, scientists are confronted with more neutrino mysteries than ever before.

In 1930, Wolfgang Pauli proposed the existence of a new tiny particle with no electric charge. The particle was hypothesized to be very light—or possibly have no mass at all—and hardly ever interact with matter. Enrico Fermi later named this mysterious particle the “neutrino” (or “little neutral one”).

Although neutrinos are extremely abundant, it took 26 years for scientists to confirm their existence. In the 60 years since the neutrino’s discovery, we’ve slowly learned about this intriguing particle.

“At every turn, it seems to take a decade or two for scientists to come up with experiments to start to probe the next property of the neutrino,” says Keith Rielage, a neutrino researcher at the Department of Energy’s Los Alamos National Laboratory. “And once we do, we’re often left scratching our heads because the neutrino doesn’t act as we expect. So the neutrino has been an exciting particle from the start.”

We now know that there are actually three types, or “flavors,” of neutrinos: electron, muon and tau. We also know that neutrinos change, or “oscillate,” between the three types as they travel through space. Because neutrinos oscillate, we know they must have mass.

However, many questions about neutrinos remain, and the search for the answers involves scientists and experiments around the world.

### The mystery of the missing energy

Pauli thought up the neutrino while trying to solve the problem of energy conservation in a particular reaction called beta decay. Beta decay is a way for an unstable atom to become more stable—for example, by transforming a neutron into a proton. In this process, an electron is emitted.

If the neutron transformed into only a proton and an electron, their energies would be well defined. However, experiments showed that the electron did not always emerge with a particular energy—instead, electrons showed a range of energies. To account for this range, Pauli hypothesized that an unknown neutral particle must be involved in beta decay.

“If there were another particle involved in the beta decay, all three particles would share the energy, but not always exactly the same way,” says Jennifer Raaf, a neutrino researcher at DOE’s Fermi National Accelerator Laboratory. “So sometimes you could get an electron with a high energy and sometimes you could get one with a low energy.”
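Pauli's energy-sharing argument can be caricatured in a few lines: with two decay products the electron energy is pinned down, while a third (invisible) particle lets it take a range of values. This is a cartoon with made-up numbers, not real three-body kinematics:

```python
import random

# Cartoon of Pauli's argument.  In the two-body case the electron energy is
# fixed by conservation laws (here, trivially, all of Q); in the three-body
# case the electron shares the released energy Q with a neutrino, so its
# energy varies from decay to decay.  Random splitting stands in for the
# true phase-space distribution.
random.seed(1)
Q = 1.0  # released energy, arbitrary units

two_body = [Q] * 5                                      # always the same energy
three_body = [random.uniform(0, Q) for _ in range(5)]   # a spread of energies

print("two-body electron energies:  ", two_body)
print("three-body electron energies:", [round(e, 2) for e in three_body])
```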

In the early 1950s, Los Alamos physicist Frederick Reines and his colleague Clyde Cowan set out to detect this tiny, neutral, very weakly interacting particle.

At the time, neutrinos were known as mysterious “ghost” particles that are all around us but mostly pass straight through matter and take away energy in beta decays. For this reason, Reines and Cowan’s search to detect the neutrino came to be known as “Project Poltergeist.”

“The name seemed logical because they were basically trying to exorcise a ghost,” Rielage says.

### Catching the ghost particle

“The story of the discovery of the neutrino is an interesting one, and in some ways, one that could only happen at Los Alamos,” Rielage says.

It all started in the early 1950s. Working at Los Alamos, Reines had led several projects testing nuclear weapons in the Pacific, and he was interested in fundamental physics questions that could be explored as part of the tests. A nuclear explosion was thought to create an intense burst of antineutrinos, and Reines thought an experiment could be designed to detect some of them. Reines convinced Cowan, his colleague at Los Alamos, to work with him to design such an experiment.

Reines and Cowan’s first idea was to put a large liquid scintillator detector in a shaft next to an atmospheric nuclear explosion test site. But then they came up with a better idea—to put the detector next to a nuclear reactor.

So in 1953, Reines and Cowan headed to the large fission reactor in Hanford, Washington with their 300-liter detector nicknamed “Herr Auge” (German for “Mr. Eye”).

Although Reines and Cowan did detect a small increase in neutrino-like signals when the reactor was on versus when it was off, the noise was overwhelming. They could not definitively conclude that the small signal was due to neutrinos. While the detector’s shielding succeeded in blocking the neutrons and gamma rays from the reactor, it could not stop the flood of cosmic rays raining down from space.

Over the next year, Reines and Cowan completely redesigned their detector into a stacked three-layer configuration that would allow them to clearly differentiate between a neutrino signal and the cosmic ray background. In late 1955, they hit the road again with their new 10-ton detector—this time to the powerful fission reactor at the Savannah River Plant in South Carolina.

For more than five months, Reines and Cowan collected data and analyzed the results. In June 1956, they sent a telegram to Pauli. It said, “We are happy to inform you that we have definitively detected neutrinos.”

### Solving the next neutrino mystery

In the 1960s, a new mystery involving the neutrino began—this time in a gold mine in South Dakota.

Ray Davis, a nuclear chemist at the DOE’s Brookhaven National Laboratory, had designed an experiment to detect neutrinos produced in reactions in the sun, also known as solar neutrinos. It featured a large chlorine-based detector located a mile underground in the Homestake Mine, which provided shielding from cosmic rays.

In 1968, the Davis experiment detected solar neutrinos for the first time, but the results were puzzling. Astrophysicist John Bahcall had calculated the expected flux of neutrinos from the sun—that is, the number of neutrinos that should be detected over a certain area in a certain amount of time. However, the experiment was only detecting about one-third the number of neutrinos predicted. This discrepancy came to be known as the “solar neutrino problem.”

At first, scientists thought there was a problem with Davis’ experiment or with the model of the sun, but no problems were found. Slowly, scientists began to suspect that it was actually an issue with the neutrinos.

“Neutrinos always seem to surprise us,” Rielage says. “We think something is fairly straightforward, and it turns out not to be.”

Scientists theorized that neutrinos might oscillate, or change from one type to another, as they travel through space. Davis’ experiment was only sensitive to electron neutrinos, so if neutrinos oscillated and arrived at the Earth as a mixture of the three types, it would explain why the experiment was only detecting one-third of them.

In 1998, the Super-Kamiokande experiment in Japan first detected atmospheric neutrino oscillations. Then, in 2001, the Sudbury Neutrino Observatory in Canada announced the first evidence of solar neutrino oscillations, followed by conclusive evidence in 2002. After more than 30 years, scientists were able to confirm that neutrinos oscillate, thus solving the solar neutrino problem.
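The textbook two-flavor vacuum approximation behind these oscillation results is $P_{survive} = 1 - \sin^2(2\theta)\,\sin^2(1.27\, \Delta m^2 L/E)$, with $\Delta m^2$ in eV², $L$ in km and $E$ in GeV (the real solar analysis also involves matter effects). A quick sketch with illustrative parameter values:

```python
import math

# Two-flavor survival probability in vacuum; parameter values below are
# illustrative, not a fit to any experiment.
def survival(theta, dm2_eV2, L_km, E_GeV):
    return 1 - math.sin(2 * theta) ** 2 * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

# At the source no oscillation has happened yet:
print(survival(0.6, 2.4e-3, 0.0, 1.0))      # 1.0
# Far from the source the survival probability dips below 1:
print(survival(0.6, 2.4e-3, 500.0, 1.0))
```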

“The fact that neutrinos oscillate is interesting, but the critical thing is that it tells us that neutrinos must have mass,” says Gabriel Orebi Gann, a neutrino researcher at the University of California, Berkeley, and the DOE’s Lawrence Berkeley National Laboratory, and a SNO collaborator. “This is huge because there was no expectation in the Standard Model that the neutrino would have mass.”

### Mysteries beyond the Standard Model

The Standard Model—the theoretical model that describes elementary particles and their interactions—does not include a mechanism for neutrinos to have mass. The discovery of neutrino oscillation put a serious crack into an otherwise extremely accurate picture of the subatomic world.

“It’s important to poke at this picture and see which parts of it hold up to experimental testing and which parts still need additional information filled in,” Raaf says.

After 60 years of studying neutrinos, several mysteries remain that could provide windows into physics beyond the Standard Model.

Is the neutrino its own antiparticle?

The neutrino is unique in that it has the potential to be its own antiparticle. “The only thing we know at the moment that distinguishes matter from antimatter is electric charge,” Orebi Gann says. “So for the neutrino, which has no electric charge, it’s sort of an obvious question – what is the difference between a neutrino and its antimatter partner?”

If the neutrino is not its own antiparticle, there must be something other than charge that makes antimatter different from matter. “We currently don’t know what that would be,” Orebi Gann says. “It would be what we call a new symmetry.”

Scientists are trying to determine if the neutrino is its own antiparticle by searching for neutrinoless double beta decay. These experiments look for events in which two neutrons decay into protons at the same time. The standard double beta decay would produce two electrons and two antineutrinos. However, if the neutrino is its own antiparticle, the two antineutrinos could annihilate, and only electrons would come out of the decay.

Several upcoming experiments will look for neutrinoless double beta decay. These include the SNO+ experiment in Canada, the CUORE experiment at the Laboratori Nazionali del Gran Sasso in Italy, the EXO-200 experiment at the Waste Isolation Pilot Plant in New Mexico, and the MAJORANA experiment at the Sanford Underground Research Facility in the former Homestake mine in South Dakota (the same mine in which Davis conducted his famous solar neutrino experiment).

What is the order, or “hierarchy,” of the neutrino mass states?

We know that neutrinos have mass and that the three neutrino mass states differ slightly, but we do not know which is the heaviest and which is the lightest. Scientists are aiming to answer this question through experiments that study neutrinos as they oscillate over long distances.

For these experiments, a beam of neutrinos is created at an accelerator and sent through the Earth to far-away detectors. Such long-baseline experiments include Japan’s T2K experiment, Fermilab’s NOvA experiment and the planned Deep Underground Neutrino Experiment.

What is the absolute mass of neutrinos?

To try to measure the absolute mass of neutrinos, scientists are returning to the reaction that first signaled the existence of the neutrino—beta decay. The KATRIN experiment in Germany aims to directly measure the mass of the neutrino by studying tritium (an isotope of hydrogen) that decays through beta decay.

Are there more than three types of neutrinos?

Scientists have hypothesized another even more weakly interacting type of neutrino called the “sterile” neutrino. To look for evidence of sterile neutrinos, scientists are studying neutrinos as they travel over short distances.

As part of the short baseline neutrino program at Fermilab, scientists will use three detectors to look for sterile neutrinos: the Short Baseline Neutrino Detector, MicroBooNE and ICARUS (a neutrino detector that previously operated at Gran Sasso). Gran Sasso will also host an upcoming experiment called SOX that will look for sterile neutrinos.

Do neutrinos violate “charge parity (CP) symmetry”?

Scientists are also using long-baseline experiments to search for something called CP violation. If equal amounts of matter and antimatter were created in the Big Bang, it all should have annihilated. Because the universe contains matter, something must have led to there being more matter than antimatter. If neutrinos violate CP symmetry, it could help explain why there is more matter.

“Not having all the answers about neutrinos is what makes it exciting,” Rielage says. “The problems that are left are challenging, but we often joke that if it were easy, someone would have already figured it out by now. But that’s what I enjoy about it—we have to really think outside the box in our search for the answers.”

## June 13, 2016

### Symmetrybreaking - Fermilab/SLAC

CERN grants beam time to students

Contest winners will study special relativity and an Egyptian pyramid using a CERN beamline.

Two high school teams have beaten out nearly 150 others from around the world to secure a highly prized opportunity: the chance to do a science project—at CERN.

After sorting through a pool of teams that represented more than a thousand students from 37 countries, today CERN announced the winners of its third Beamline for Schools competition. The two teams, “Pyramid Hunters” from Poland and “Relatively Special” from the United Kingdom, will travel to Geneva in September to put their experiments to the test.

“We honestly couldn’t be more thrilled to have been given this opportunity,” said Henry Broomfield, a student on the “Relatively Special” team, in an email. “The prospect of winning always seemed like something that would only occur in a parallel universe, so at first we didn’t believe it.”

“Relatively Special” consists of 17 students from Colchester Royal Grammar School in the United Kingdom. Nine of the students will travel to CERN for the competition. They plan to test the Lorentz factor, an input used in calculations related to Einstein’s theory of special relativity.

According to the theory, the faster an object moves, the higher its apparent mass will be and the slower its time will pass relative to our own. This concept, known as time dilation, is most noticeable at speeds approaching the speed of light and is the reason GPS satellites have to adjust their clocks to match the time on Earth. At CERN, “Relatively Special” will measure the decay of pions, particles containing a quark and an antiquark, to see if the particles moving closer to the speed of light decay at the slower rate predicted by time dilation.
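The effect the team will look for follows from the Lorentz factor: a charged pion at rest lives about 26 nanoseconds, but in the lab that lifetime is stretched by $\gamma = 1/\sqrt{1 - v^2/c^2}$. A quick illustration (the speeds chosen below are arbitrary):

```python
import math

# Time dilation for a fast-moving pion: the lab-frame lifetime is the
# rest-frame lifetime multiplied by the Lorentz factor gamma.
tau0 = 2.6e-8            # s, charged-pion mean lifetime at rest

def gamma(beta):         # beta = v / c
    return 1.0 / math.sqrt(1.0 - beta ** 2)

for beta in (0.9, 0.99, 0.999):
    g = gamma(beta)
    print(f"beta = {beta}: gamma = {g:.2f}, lab lifetime = {g * tau0:.2e} s")
```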


The other team, “Pyramid Hunters,” is a group of seven students from Liceum Ogólnokształcące im. Marsz. St. Małachowskiego in Poland. These students plan to use particle physics to strengthen the archeological knowledge of the Pyramid of Khafre, one of the largest and most iconic of the Egyptian pyramids.

The pyramid was mapped in the 1960s using muon tomography, a technique similar to X-ray scanning that uses heavy particles called muons to generate images of a target. “Pyramid Hunters” will attempt to improve the understanding of that early data by firing muons into limestone, the material that was used to build the pyramids. They will observe the rate at which the muons are absorbed. The absorption rate can tell researchers about the thickness of the material they scanned.
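The inference the team is after can be sketched with a deliberately oversimplified attenuation law. Real muon attenuation in rock follows energy-loss curves, not a single exponential, and the coefficient below is made up purely for illustration:

```python
import math

# Toy tomography: if transmission falls off with thickness in a known way,
# then measuring transmission tells you the thickness.  An exponential law
# with a made-up coefficient stands in for the real muon physics.
mu = 0.008   # 1/cm, fictitious effective attenuation coefficient

def transmission(thickness_cm):
    return math.exp(-mu * thickness_cm)

def thickness_from_transmission(T):
    return -math.log(T) / mu      # invert the assumed law

T = transmission(250.0)           # 2.5 m of rock in this toy model
print(f"transmission = {T:.3f}, inferred thickness = "
      f"{thickness_from_transmission(T):.0f} cm")
```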


The Beamline for Schools competition began two years ago, coinciding with CERN’s 60th anniversary. Its purpose was to give students the opportunity to run an experiment on a CERN beamline in the same way its regular researchers do. For the competition, students submitted written proposals for their projects, as well as creative one-minute videos to explain their goals for their projects and the experience in general.

A CERN committee selected the students based on “creativity, motivation, feasibility and scientific method,” according to a press release. CERN recognized the projects of nearly 30 other teams, rewarding them with certificates, t-shirts and pocket-size cosmic ray detectors for their schools.

“I am impressed with the level of interest within high schools all over Europe and beyond, as well as with the quality of the proposals,” Claude Vallee, the chairperson of the CERN committee that chose the winning teams, said in a press release.

The previous winning teams hailed from the Netherlands, Italy, Greece and South Africa. Some of their projects have included examining the weak force and testing calorimeters and particle detectors made from different materials.

“I can't imagine better way of learning physics than doing research in the largest particle physics laboratory in the world,” said Kamil Szymczak, a student on the “Pyramid Hunters” team, in a press release. “I still can't believe it.”

## June 12, 2016

### The n-Category Cafe

How the Simplex is a Vector Space

It’s an underappreciated fact that the interior of every simplex $\Delta^n$ is a real vector space in a natural way. For instance, here’s the 2-simplex with twelve of its 1-dimensional linear subspaces drawn in:

(That’s just a sketch. See below for an accurate diagram by Greg Egan.)

In this post, I’ll explain what this vector space structure is and why everyone who’s ever taken a course on thermodynamics knows about it, at least partially, even if they don’t know they do.

Let’s begin with the most ordinary vector space of all, $\mathbb{R}^n$. (By “vector space” I’ll always mean vector space over $\mathbb{R}$.) There’s a bijection

$\mathbb{R} \leftrightarrow (0, \infty)$

between the real line and the positive half-line, given by exponential in one direction and log in the other. Doing this bijection in each coordinate gives a bijection

$\mathbb{R}^n \leftrightarrow (0, \infty)^n.$

So, if we transport the vector space structure of $\mathbb{R}^n$ along this bijection, we’ll produce a vector space structure on $(0, \infty)^n$. This new vector space $(0, \infty)^n$ is isomorphic to $\mathbb{R}^n$, by definition.

Explicitly, the “addition” of the vector space $(0, \infty)^n$ is coordinatewise multiplication, the “zero” vector is $(1, \ldots, 1)$, and “subtraction” is coordinatewise division. The scalar “multiplication” is given by powers: multiplying a vector $\mathbf{y} = (y_1, \ldots, y_n) \in (0, \infty)^n$ by a scalar $\lambda \in \mathbb{R}$ gives $(y_1^\lambda, \ldots, y_n^\lambda)$.
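These transported operations are easy to check in a few lines of code, as a direct transcription of the definitions above:

```python
import math

# The vector-space operations on (0, inf)^n: "addition" is coordinatewise
# multiplication, "zero" is (1, ..., 1), and scalar multiplication raises
# each coordinate to the power of the scalar.
def add(y, z):
    return [a * b for a, b in zip(y, z)]

def zero(n):
    return [1.0] * n

def scale(lam, y):
    return [a ** lam for a in y]

y = [2.0, 0.5, 3.0]
assert add(y, zero(3)) == y                    # (1,...,1) really acts as zero
assert scale(2, y) == [4.0, 0.25, 9.0]         # scalar multiplication by 2
assert scale(0, y) == zero(3)                  # 0 * y is the zero vector

# Compatibility with the exp/log bijection: exp carries ordinary + to "add".
x1, x2 = [1.0, -2.0], [0.5, 0.5]
lhs = [math.exp(a + b) for a, b in zip(x1, x2)]
rhs = add([math.exp(a) for a in x1], [math.exp(a) for a in x2])
assert all(abs(a - b) < 1e-12 for a, b in zip(lhs, rhs))
print("vector space laws check out on these examples")
```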

Now, the ordinary vector space $\mathbb{R}^n$ has a linear subspace $U$ spanned by $(1, \ldots, 1)$. That is,

$ U = \{ (\lambda, \ldots, \lambda) : \lambda \in \mathbb{R} \}. $

Since the vector spaces $\mathbb{R}^n$ and $(0, \infty)^n$ are isomorphic, there’s a corresponding subspace $W$ of $(0, \infty)^n$, and it’s given by

$ W = \{ (e^\lambda, \ldots, e^\lambda) : \lambda \in \mathbb{R} \} = \{ (\gamma, \ldots, \gamma) : \gamma \in (0, \infty) \}. $

But whenever we have a linear subspace of a vector space, we can form the quotient. Let’s do this with the subspace $W$ of $(0, \infty)^n$. What does the quotient $(0, \infty)^n/W$ look like?

Well, two vectors $\mathbf{y}, \mathbf{z} \in (0, \infty)^n$ represent the same element of $(0, \infty)^n/W$ if and only if their “difference” — in the vector space sense — belongs to $W$. Since “difference” or “subtraction” in the vector space $(0, \infty)^n$ is coordinatewise division, this just means that

$ \frac{y_1}{z_1} = \frac{y_2}{z_2} = \cdots = \frac{y_n}{z_n}. $

So, the elements of $(0, \infty)^n/W$ are the equivalence classes of $n$-tuples of positive reals, with two tuples considered equivalent if they’re the same up to rescaling.

Now here’s the crucial part: it’s natural to normalize everything to sum to $1$. In other words, in each equivalence class, we single out the unique tuple $(y_1, \ldots, y_n)$ such that $y_1 + \cdots + y_n = 1$. This gives a bijection

$ (0, \infty)^n/W \leftrightarrow \Delta_n^\circ $

where $\Delta_n^\circ$ is the interior of the $(n - 1)$-simplex:

$ \Delta_n^\circ = \{ (p_1, \ldots, p_n) : p_i > 0, \textstyle\sum p_i = 1 \}. $

You can think of $\Delta_n^\circ$ as the set of probability distributions on an $n$-element set that satisfy Cromwell’s rule: zero probabilities are forbidden. (Or as Cromwell put it, “I beseech you, in the bowels of Christ, think it possible that you may be mistaken.”)

Transporting the vector space structure of $(0, \infty)^n/W$ along this bijection gives a vector space structure to $\Delta_n^\circ$. And that’s the vector space structure on the simplex.

So what are these vector space operations on the simplex, in concrete terms? They’re given by the same operations in $(0, \infty)^n$, followed by normalization. So, the “sum” of two probability distributions $\mathbf{p}$ and $\mathbf{q}$ is

$ \frac{(p_1 q_1, p_2 q_2, \ldots, p_n q_n)}{p_1 q_1 + p_2 q_2 + \cdots + p_n q_n}, $

the “zero” vector is the uniform distribution

$ \frac{(1, 1, \ldots, 1)}{1 + 1 + \cdots + 1} = (1/n, 1/n, \ldots, 1/n), $

and “multiplying” a probability distribution $\mathbf{p}$ by a scalar $\lambda \in \mathbb{R}$ gives

$ \frac{(p_1^\lambda, p_2^\lambda, \ldots, p_n^\lambda)}{p_1^\lambda + p_2^\lambda + \cdots + p_n^\lambda}. $
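Concretely, each simplex operation is just the corresponding operation on $(0, \infty)^n$ followed by normalization. A minimal sketch (function names mine):

```python
def normalize(v):
    s = sum(v)
    return [vi / s for vi in v]

def simplex_add(p, q):
    # "Sum" on the simplex: coordinatewise product, then normalize.
    return normalize([pi * qi for pi, qi in zip(p, q)])

def simplex_scalar(lam, p):
    # Scalar "multiple" on the simplex: coordinatewise powers, then normalize.
    return normalize([pi ** lam for pi in p])

n = 3
uniform = [1.0 / n] * n
p = [0.2, 0.3, 0.5]

# The uniform distribution behaves as the zero vector:
# adding it changes nothing, and the scalar 0 returns it.
assert all(abs(a - b) < 1e-12 for a, b in zip(simplex_add(p, uniform), p))
assert all(abs(a - b) < 1e-12 for a, b in zip(simplex_scalar(0, p), uniform))
assert all(abs(a - b) < 1e-12 for a, b in zip(simplex_scalar(1, p), p))
```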

For instance, let’s think about the scalar “multiples” of

$ \mathbf{p} = (0.2, 0.3, 0.5) \in \Delta_3^\circ. $

“Multiplying” $\mathbf{p}$ by $\lambda \in \mathbb{R}$ gives

$ \frac{(0.2^\lambda, 0.3^\lambda, 0.5^\lambda)}{0.2^\lambda + 0.3^\lambda + 0.5^\lambda} $

which I’ll call $\mathbf{p}^{(\lambda)}$, to avoid the confusion that would be created by calling it $\lambda \mathbf{p}$.

When $\lambda = 0$, $\mathbf{p}^{(\lambda)}$ is just the uniform distribution $(1/3, 1/3, 1/3)$ — which of course it has to be, since multiplying any vector by the scalar $0$ has to give the zero vector.

For equally obvious reasons, $\mathbf{p}^{(1)}$ has to be just $\mathbf{p}$.

When $\lambda$ is large and positive, the powers of $0.5$ dominate over the powers of the smaller numbers $0.2$ and $0.3$, so $\mathbf{p}^{(\lambda)} \to (0, 0, 1)$ as $\lambda \to \infty$.

For similar reasons, $\mathbf{p}^{(\lambda)} \to (1, 0, 0)$ as $\lambda \to -\infty$. This behaviour as $\lambda \to \pm\infty$ is the reason why, in the picture above, you see the curves curling in at the ends towards the triangle’s corners.
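These limits are easy to verify numerically. A small sketch (the function name is mine):

```python
def scalar_multiple(p, lam):
    # p^(lambda): coordinatewise powers, then normalize.
    powered = [pi ** lam for pi in p]
    s = sum(powered)
    return [x / s for x in powered]

p = [0.2, 0.3, 0.5]

# lambda = 0 gives the uniform distribution; lambda = 1 gives p back.
assert all(abs(a - 1/3) < 1e-12 for a in scalar_multiple(p, 0))
assert all(abs(a - b) < 1e-12 for a, b in zip(scalar_multiple(p, 1), p))

# Large |lambda| pushes p^(lambda) towards a corner of the simplex.
assert scalar_multiple(p, 50)[2] > 0.999999    # towards (0, 0, 1)
assert scalar_multiple(p, -50)[0] > 0.999999   # towards (1, 0, 0)
```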

Some physicists refer to the distributions $\mathbf{p}^{(\lambda)}$ as the “escort distributions” of $\mathbf{p}$. And in fact, the scalar multiplication of the vector space structure on the simplex is a key part of the solution of a very basic problem in thermodynamics — so basic that even I know it.

The problem goes like this. First I’ll state it using the notation above, then afterwards I’ll translate it back into terms that physicists usually use.

Fix $\xi_1, \ldots, \xi_n, \xi > 0$. Among all probability distributions $(p_1, \ldots, p_n)$ satisfying the constraint

$ \xi_1^{p_1} \xi_2^{p_2} \cdots \xi_n^{p_n} = \xi, $

which one minimizes the quantity

$ p_1^{p_1} p_2^{p_2} \cdots p_n^{p_n}? $

It makes no difference to this question if $\xi_1, \ldots, \xi_n, \xi$ are normalized so that $\xi_1 + \cdots + \xi_n = 1$ (since multiplying each of $\xi_1, \ldots, \xi_n, \xi$ by a constant doesn’t change the constraint). So, let’s assume this has been done.

Then the answer to the question turns out to be: the minimizing distribution $\mathbf{p}$ is a scalar multiple of $(\xi_1, \ldots, \xi_n)$ in the vector space structure on the simplex. In other words, it’s an escort distribution of $(\xi_1, \ldots, \xi_n)$. Or in other words still, it’s an element of the linear subspace of $\Delta_n^\circ$ spanned by $(\xi_1, \ldots, \xi_n)$. Which one? The unique one such that the constraint is satisfied.

Proving that this is the answer is a simple exercise in calculus, e.g. using Lagrange multipliers.

For instance, take $(\xi_1, \xi_2, \xi_3) = (0.2, 0.3, 0.5)$ and $\xi = 0.4$. Among all distributions $(p_1, p_2, p_3)$ that satisfy the constraint

$ 0.2^{p_1} \times 0.3^{p_2} \times 0.5^{p_3} = 0.4, $

the one that minimizes $p_1^{p_1} p_2^{p_2} p_3^{p_3}$ is some escort distribution of $(0.2, 0.3, 0.5)$. Maybe one of the curves shown in the picture above is the 1-dimensional subspace spanned by $(0.2, 0.3, 0.5)$, and in that case, the minimizing $\mathbf{p}$ is somewhere on that curve.

The location of $\mathbf{p}$ on that curve depends on the value of $\xi$, which here I chose to be $0.4$. If I changed it to $0.20001$ or $0.49999$ then $\mathbf{p}$ would be nearly at one end or the other of the curve, since the constrained quantity $0.2^{p_1} \times 0.3^{p_2} \times 0.5^{p_3}$, evaluated at $(0.2, 0.3, 0.5)^{(\lambda)}$, converges to $0.2$ as $\lambda \to -\infty$ and to $0.5$ as $\lambda \to \infty$.
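To make the example concrete, the minimizing escort distribution can be found numerically: the weighted geometric mean is monotone in $\lambda$ along the escort curve, so a bisection on $\lambda$ locates the unique feasible point. A sketch under my own naming, not code from the post:

```python
import math

def escort(xi, lam):
    # The escort distribution xi^(lambda): powers, then normalize.
    powered = [x ** lam for x in xi]
    s = sum(powered)
    return [w / s for w in powered]

def geo_mean(xi, p):
    # The constrained quantity: xi_1^{p_1} * ... * xi_n^{p_n}.
    return math.exp(sum(pi * math.log(x) for pi, x in zip(p, xi)))

xi, target = [0.2, 0.3, 0.5], 0.4

# geo_mean increases with lambda (from min xi_i to max xi_i), so bisect.
lo, hi = -100.0, 100.0
for _ in range(200):
    mid = (lo + hi) / 2
    if geo_mean(xi, escort(xi, mid)) < target:
        lo = mid
    else:
        hi = mid
p = escort(xi, (lo + hi) / 2)
assert abs(geo_mean(xi, p) - target) < 1e-9

# Any other feasible distribution gives a larger p_1^{p_1} p_2^{p_2} p_3^{p_3}.
def obj(q):
    return math.prod(qi ** qi for qi in q)

l2, l3, l5 = (math.log(v) for v in xi)
def feasible(q1):
    # Solve the (log-linear) constraint for q2, q3 given q1.
    q2 = (math.log(target) - q1 * l2 - (1 - q1) * l5) / (l3 - l5)
    return [q1, q2, 1 - q1 - q2]

for q1 in (0.05, 0.1, 0.2):
    q = feasible(q1)
    assert min(q) > 0 and abs(geo_mean(xi, q) - target) < 1e-9
    assert obj(q) >= obj(p) - 1e-12
```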

Aside: I’m glossing over the question of existence and uniqueness of solutions to the optimization question. Since $\xi_1^{p_1} \xi_2^{p_2} \cdots \xi_n^{p_n}$ is a kind of average of $\xi_1, \xi_2, \ldots, \xi_n$ — a weighted, geometric mean — there’s no solution at all unless $\min_i \xi_i \leq \xi \leq \max_i \xi_i$. As long as that inequality is satisfied, there’s a minimizing $\mathbf{p}$, although it’s not always unique: e.g. consider what happens when all the $\xi_i$s are equal.

Physicists prefer to do all this in logarithmic form. So, rather than start with $\xi_1, \ldots, \xi_n, \xi > 0$, they start with $x_1, \ldots, x_n, x \in \mathbb{R}$; think of this as substituting $x_i = -\log \xi_i$ and $x = -\log \xi$. So, the constraint

$ \xi_1^{p_1} \xi_2^{p_2} \cdots \xi_n^{p_n} = \xi $

becomes

$ e^{-p_1 x_1} e^{-p_2 x_2} \cdots e^{-p_n x_n} = e^{-x} $

or equivalently

$ p_1 x_1 + p_2 x_2 + \cdots + p_n x_n = x. $

We’re trying to minimize $p_1^{p_1} p_2^{p_2} \cdots p_n^{p_n}$ subject to that constraint, and again the physicists prefer the logarithmic form (with a change of sign): maximize

$ -(p_1 \log p_1 + p_2 \log p_2 + \cdots + p_n \log p_n). $

That quantity is the Shannon entropy of the distribution $(p_1, \ldots, p_n)$: so we’re looking for the maximum entropy solution to the constraint. This is called the Gibbs state, and as we saw, it’s a scalar multiple of $(\xi_1, \ldots, \xi_n)$ in the vector space structure on the simplex. Equivalently, it’s

$ \frac{(e^{-\lambda x_1}, e^{-\lambda x_2}, \ldots, e^{-\lambda x_n})}{e^{-\lambda x_1} + e^{-\lambda x_2} + \cdots + e^{-\lambda x_n}} $

for whichever value of $\lambda$ satisfies the constraint. The denominator here is the famous partition function.
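In code, the Gibbs state is a softmax over $-\lambda x_i$, and under the substitution $x_i = -\log \xi_i$ it coincides with the escort distribution of $(\xi_1, \ldots, \xi_n)$. A minimal sketch (names mine):

```python
import math

def gibbs(xs, lam):
    # p_i proportional to exp(-lambda * x_i); the normalizing
    # denominator Z is the partition function.
    weights = [math.exp(-lam * x) for x in xs]
    Z = sum(weights)
    return [w / Z for w in weights]

# With x_i = -log(xi_i), the Gibbs state equals the escort
# distribution xi^(lambda) from earlier in the post.
xi = [0.2, 0.3, 0.5]
xs = [-math.log(v) for v in xi]
lam = 1.7

powered = [v ** lam for v in xi]
s = sum(powered)
escort = [w / s for w in powered]

assert all(abs(a - b) < 1e-12 for a, b in zip(gibbs(xs, lam), escort))
```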

So, that basic thermodynamic problem is (implicitly) solved by scalar multiplication in the vector space structure on the simplex. A question: does addition in the vector space structure on the simplex also have a role to play in physics?

## June 11, 2016

### John Baez - Azimuth

Azimuth News (Part 5)

I’ve been rather quiet about Azimuth projects lately, because I’ve been too busy actually working on them. Here’s some of what’s happening:

Jason Erbele is finishing his thesis, entitled Categories in Control: Applied PROPs. He successfully gave his thesis defense on Wednesday June 8th, but he needs to polish it up some more. Building on the material in our paper “Categories in control”, he’s defined a category where the morphisms are signal flow diagrams. But interestingly, not all the diagrams you can draw are actually considered useful in control theory! So he’s also found a subcategory where the morphisms are the ‘good’ signal flow diagrams, the ones control theorists like. For these he studies familiar concepts like controllability and observability. When his thesis is done I’ll announce it here.

Brendan Fong is also finishing his thesis, called The Algebra of Open and Interconnected Systems. Brendan has already created a powerful formalism for studying open systems: the decorated cospan formalism. We’ve applied it to two examples: electrical circuits and Markov processes. Lately he’s been developing the formalism further, and this will appear in his thesis. Again, I’ll talk about it when he’s done!

Blake Pollard and I are writing a paper called “A compositional framework for open chemical reaction networks”. Here we take our work on Markov processes and throw in two new ingredients: dynamics and nonlinearity. Of course Markov processes have a dynamics, but in our previous paper when we ‘black-boxed’ them to study their external behaviour, we got a relation between flows and populations in equilibrium. Now we explain how to handle nonequilibrium situations as well.

Brandon Coya, Franciscus Rebro and I are writing a paper that might be called “The algebra of networks”. I’m not completely sure of the title, nor who the authors will be: Brendan Fong may also be a coauthor. But the paper explores the technology of PROPs as a tool for describing networks. As an application, we’ll give a new shorter proof of the functoriality of black-boxing for electrical circuits. This new proof also applies to nonlinear circuits. I’m really excited about how the theory of PROPs, first introduced in algebraic topology, is catching fire with all the new applications to network theory.

I expect all these projects to be done by the end of the summer. Near the end of June I’ll go to the Centre for Quantum Technologies, in Singapore. This will be my last summer there. My main job will be to finish up the two papers that I’m supposed to be writing.

There’s another paper that’s already done:

Kenny Courser has written a paper “A bicategory of decorated cospans“, pushing Brendan’s framework from categories to bicategories. I’ll explain this very soon here on this blog! One goal is to understand things like the coarse-graining of open systems: that is, the process of replacing a detailed description by a less detailed description. Since we treat open systems as morphisms, coarse-graining is something that goes from one morphism to another, so it’s naturally treated as a 2-morphism in a bicategory.

So, I’ve got a lot of new ideas to explain here, and I’ll start soon! I also want to get deeper into systems biology.

In the fall I’ve got a couple of short trips lined up:

• Monday November 14 – Friday November 18, 2016 – I’ve been invited by Yoav Kallus to visit the Santa Fe Institute. From the 16th to 18th I’ll attend a workshop on Statistical Physics, Information Processing and Biology.

• Monday December 5 – Friday December 9 – I’ve been invited to Berkeley for a workshop on Compositionality at the Simons Institute for the Theory of Computing, organized by Samson Abramsky, Lucien Hardy, and Michael Mislove. ‘Compositionality’ is a name for how you describe the behavior of a big complicated system in terms of the behaviors of its parts, so this is closely connected to my dream of studying open systems by treating them as morphisms that can be composed to form bigger open systems.

Here’s the announcement:

The compositional description of complex objects is a fundamental feature of the logical structure of computation. The use of logical languages in database theory and in algorithmic and finite model theory provides a basic level of compositionality, but establishing systematic relationships between compositional descriptions and complexity remains elusive. Compositional models of probabilistic systems and languages have been developed, but inferring probabilistic properties of systems in a compositional fashion is an important challenge. In quantum computation, the phenomenon of entanglement poses a challenge at a fundamental level to the scope of compositional descriptions. At the same time, compositionality has been proposed as a fundamental principle for the development of physical theories. This workshop will focus on the common structures and methods centered on compositionality that run through all these areas.

I’ll say more about both these workshops when they take place.

## June 10, 2016

### Tommaso Dorigo - Scientificblogging

Top Evidence At The Altarelli Memorial Symposium
I am spending some time today at the Altarelli Memorial Symposium, which is taking place at the main auditorium at CERN. The recently deceased Guido Altarelli was one of the leading theorists who brought us to the height of our understanding of the Standard Model of particle physics, and it is heart-warming to see so many colleagues young and old here today - Guido was a teacher for all of us.

## June 09, 2016

### The n-Category Cafe

Good News

Various bits of good news concerning my former students Alissa Crans, Derek Wise, Jeffrey Morton and Chris Rogers.

Alissa Crans did her thesis on Lie 2-Algebras back in 2004. She got hired by Loyola Marymount University, got tenure there in 2011… and a couple of weeks ago she got promoted to full professor! Hurrah!

Derek Wise did his thesis on Topological Gauge Theory, Cartan Geometry, and Gravity in 2007. After a stint at U. C. Davis he went to Erlangen in 2010. When I was in Erlangen in the spring of 2014 he was working with Catherine Meusburger on gauge theory with Hopf algebras replacing groups, and a while back they came out with a great paper on that: Hopf algebra gauge theory on a ribbon graph. But the good news is this: last fall, he got a tenure-track job at Concordia University St Paul!

Jeffrey Morton did his thesis on Extended TQFT’s and Quantum Gravity in 2007. After postdocs at the University of Western Ontario, the Instituto Superior Técnico, Universität Hamburg, Mount Allison University and a visiting assistant professorship at Toledo University, he has gotten a tenure-track job at SUNY Buffalo State! I guess he’ll start there in the fall.

They’re older and wiser now, but here’s what they looked like once:

From left to right it’s Derek Wise, Jeffrey Morton and Alissa Crans… and then two more students of mine: Toby Bartels and Miguel Carrión Álvarez.

And one more late-breaking piece of news! Chris Rogers wrote his thesis on Higher Symplectic Geometry in 2011. After postdocs at Göttingen and the University of Greifswald, and a lot of great work on higher structures, he got a tenure-track job at the University of Louisiana. But now he’s accepted a tenure-track position at the University of Nevada at Reno, where his wife teaches dance. This solves a long-running two-body problem for them!

### Matt Strassler - Of Particular Significance

Pop went the Weasel, but Vroom goes the LHC

At the end of April, as reported hysterically in the press, the Large Hadron Collider was shut down and set back an entire week by a “fouine”, an animal famous for chewing through wires in cars, and apparently in colliders too. What a rotten little weasel! Give it credit, especially, for its skill in managing to get the English-language press to blame the wrong species — a fouine is actually a beech marten, not a weasel, and I’m told it goes Bzzzt, not Pop. But who’s counting?

Particle physicists are counting. Last week the particle accelerator operated so well that it generated almost half as many collisions as were produced in 2015 (from July until the end of November), bringing the 2016 total to about three-fourths of the 2015 total.

The key question is how many of the next few weeks will be like this past one.  We’d be happy with three out of five, even two.  If the amount of 2016 data can significantly exceed that of 2015 by July 15th, as now seems likely, a definitive answer to the question on everyone’s mind (namely, what is the bump on that plot?!? a new particle? or just a statistical fluke?) might be available at the time of the early August ICHEP conference.

So it’s looking more likely that we’re going to have an interesting August… though it’s not at all clear yet whether we’ll get great news (in which case we get no summer vacation), bad news (in which case we’ll all need a vacation), or ambiguous news (in which case we wait a few additional months for yet more news.)

Filed under: LHC News, Particle Physics Tagged: atlas, cms, LHC, particle physics

### Tommaso Dorigo - Scientificblogging

The Call To Outreach
I have recently put a bit of order into my records of activities as a science communicator, for an application to an outreach prize. In doing so, I have been able to take a critical look at those activities, something which I would otherwise not have spent my time doing. And it is indeed an interesting look back.

The blogging

Overall, I have been blogging continuously since January 4th 2005. That's 137 months! By continuously, I mean I wrote an average of a post every two days, or a total of about 2000 posts, 60% of which are actual outreach articles meant to explain physics to real outsiders.

My main internet footprint is now distributed in not one, but at least six distinct web sites:

### ZapperZ - Physics and Physicists

New Physics Beyond The Higgs?
Marcelo Gleiser has written a nice article on the curious 750 GeV bump coming from the LHC as announced last year. It is a very good article for the general public, especially on his condensed version of the analysis provided by PRL on the possible origin of this bump.

Still, there is an important point that I want to highlight that is not necessarily about this particular experiment, but rather about physicists and how physics is done. It is in this paragraph:

The exciting part of this is that the bump would be new, surprising physics, beyond expectations. There's nothing more interesting for a scientist than to have the unexpected show up, as if nature is trying to nudge us to look in a different direction.

If you have followed this blog for a considerable period of time, you will have read something similar in my many postings. This is especially true when I tried to debunk the erroneous claim of many crackpots who keep stressing that scientists are merely people who simply work within the box, can't think outside of the box, or refuse to look for something new. This is, of course, utterly dumb and false, because scientists, by definition, study things that are not known or not fully understood. Otherwise, there would be no progression of knowledge the way we have seen it.

I'm going to keep harping this, because I continue to see nonsense like this being perpetuated in many different places.

Zz.

## June 07, 2016

### Symmetrybreaking - Fermilab/SLAC

The neutrino cocktail

Neutrinos are a puzzling mixture of three flavors and three masses. Scientists want to measure them down to the last drop.

For a neutrino, travel is truly life-changing. When one of the tiny particles ends its 500-mile journey from Fermilab’s neutrino source to the NOvA experiment’s detector in Minnesota, it may arrive in an entirely different state than when it started. The particles, which zip through most matter without any interaction at all, can change from one of the three known neutrino varieties into another, a phenomenon known as oscillation.

Due to quantum mechanics, a traveling neutrino is actually in several different states at once. This is a result of a property known as mixing, and though it sounds esoteric, it’s necessary for some of the most important reactions in the universe—and studying it may hold the key to one of the biggest puzzles in particle physics.

Though mixing happens with several types of particles, physicists are focusing on lepton mixing, which occurs in one kind of lepton, the elusive neutrino. There are three known types, or flavors, of neutrinos—electron, muon and tau—and also three mass types, or mass states. But unlike objects in our everyday world, where an apple is always heavier than a grape, neutrino mass states and flavors do not have a one-to-one correspondence.

“When we say there’s mixing between the masses and the flavors, what we mean is that the electron flavor is not only one mass of neutrino,” says Kevin McFarland, a physics professor at the University of Rochester and co-spokesperson for the MINERvA neutrino experiment at the Department of Energy’s Fermilab.

At any given point in time, a neutrino is some fraction of all three different mass states, adding up to 1. There is more overlap between some flavors and some mass states. When neutrinos are in a state of definite mass, scientists say they’re in their mass eigenstates. Physicists use the term mixing angle to describe this overlap. A small mixing angle means there is little overlap, while maximum mixing angle describes a situation where the parameters are as evenly mixed as possible.

Mixing angles have constant values, and physicists don't know why those particular values are found in nature.

Artwork by Sandbox Studio, Chicago with Jill Preston

“This is given by nature,” says Patrick Huber, a theoretical physicist at Virginia Tech. “We very much would like to understand why these numbers are what they are. There are theories out there to try to explain them, but we really don’t know where this is coming from.”

In order to find out, physicists need large experiments where they can control the creation of neutrinos and study their interactions in a detector. In 2011, the Daya Bay experiment in China began studying antineutrinos produced from nuclear power plants, which generate tens of megawatts of power in antineutrinos. That’s an astonishing number; for comparison, beams of neutrinos created at labs are in the kilowatt range. Just a year later, scientists working there nailed down one of the mixing angles, known as theta13 (pronounced theta one three).

The discovery was a crucial one, confirming that all mixing angles are greater than zero. That property is necessary for physicists to begin using neutrino mixing as a probe for one of the greatest mysteries of the universe: why there is any matter at all.

According to the Standard Model of cosmology, the Big Bang should have created equal amounts of matter and antimatter. Because the two annihilate each other upon contact, the fact that any matter exists at all shows that the balance somehow tipped in favor of matter. This violates a rule known as charge-parity symmetry, or CP symmetry.

One way to study CP violation is to look for instances where a matter particle behaves differently than its antimatter counterpart. Physicists are looking for a specific value in a mixing parameter, known as a complex phase, in neutrino mixing, which would be evidence of CP violation in neutrinos. And the Daya Bay result paved the way.

“Now we know, OK, we have a nonzero value for all mixing angles,” says Kam-Biu Luk, spokesperson for the Daya Bay collaboration. “As a result, we know we have a chance to design a new experiment to go after CP violation.”

Information collected from Daya Bay, as well as ongoing neutrino experiments such as NOvA at Fermilab and T2K in Japan, will be used to help untangle the data from the upcoming international Deep Underground Neutrino Experiment (DUNE). This will be the largest accelerator-based neutrino experiment yet, sending the particles on an 800-mile odyssey into massive detectors filled with 70,000 total tons of liquid argon. The hope is that the experiment will yield precise data about the complex phase, revealing the mechanism that allowed matter to flourish.

“Neutrino oscillation is in a sense new physics, but now we’re looking for new physics inside of that,” Huber says. “In a precision experiment like DUNE we’ll have the ability to test for these extra things beyond only oscillations.”

Neutrinos are not the only particles that exhibit mixing. Building blocks called quarks exhibit the property too.

Physicists don’t yet know if mixing is an inherent property of all particles. But from what they know so far, it’s clear that mixing is fundamental to powering the universe.

“Without this mixing, without these reactions, there are all sorts of critical processes in the universe that just wouldn’t happen,” McFarland says. “It seems nature likes to have that happen. And we don’t know why.”

### Jon Butterworth - Life and Physics

Running to stand still: a brief political analogy

Yesterday morning, the clock radio was broken because we’d spilled water on it. In the evening, a bolt fell out of the bed, and we noticed that the whole bed was in danger of collapse.

During the day, I dried out the clock radio and it began working fine again. I also stripped down the bed, refitted the bolt and tightened the others, so the bed is structurally sound again. And we won’t keep a glass of water right near the radio again either.

So somehow this felt like a productive day, and in one sense it was. By prompt action I had saved myself a certain amount of expense and danger.

In another sense, I was in fact right back where I thought I had been the day before.

I will feel similar, on a much more serious scale if, by the end of the year, the UK remains in the EU and Donald Trump is not President of the USA. Surprising danger and expense avoided. Things much as I thought they were before, with perhaps some resulting improvement in underlying structures.

Right now, things feel rickety.

Oh, and UK people who haven’t already, please register to vote today!

Filed under: Politics, Rambling Tagged: europe

## June 06, 2016

### Tommaso Dorigo - Scientificblogging

The Large Hadron Collider Piles Up More Data
With CERN's Large Hadron Collider slowly but steadily cranking up its instantaneous luminosity, expectations are rising on the results that CMS and ATLAS will present at the 2016 summer conferences, in particular ICHEP (which will take place in Chicago at the beginning of August). The data being collected will be used to draw some conclusions on the tentative signal of a diphoton resonance, as well as on the other 3-sigma effects seen by about 0.13% of the searches carried out on previous data thus far.

## June 05, 2016

### Geraint Lewis - Cosmic Horizons

A Sunday Confession: I never wanted to be an astronomer
After an almost endless Sunday, winter has arrived with a thump in Sydney and it is wet, very, very wet. So, time for a quick post.

Last week, I spoke at an Early Career Event in the Yarra Valley, where Rachel Webster from the University of Melbourne and I talked about the process of applying for jobs in academia. I felt it was a very productive couple of days, discussing a whole range of topics, from transitioning into industry to the two-body problem, and I received some very positive feedback on the material I presented. I even recruited a new mentee to work with.

What I found interesting was the number of people who said they had decided to be a scientist or astronomer when they were a child, and were essentially following their dream to become a professor at a university one day. While I didn't really discuss this at the meeting, I have a confession, namely that I never wanted to be an astronomer.

This will possibly come as a surprise to some. What am I doing here as a university professor undertaking research in astronomy if it was never my life's dream?

I don't really remember having too many career ideas as a child. I considered being a vet, or looking after dinosaur bones in a museum, but the thought of being an astronomer was not on the list. I know I had an interest in science, and I read about science and astronomy, but I never had a telescope, never learned the names of constellations, never wanted to be an astronomer myself.

I discovered, at about age 16, that I could do maths and physics, did OK in school, found myself in university, where I did better, and then ended up doing a PhD. I did my PhD at the Institute of Astronomy in Cambridge, but went there because I really liked physics, and the thought of applying physics to the universe. With luck and chance, I found myself in postdoctoral positions and then a permanent position, and am now a professor.

And my passion is still understanding the workings of the universe through the laws of physics, and it's the part of my job I love (one aspect of the ECR meeting was discussing the issue that a lot of the academic job at a university is not research!). And I am pleased to find myself where I am, but I didn't set out along this path with any purpose or forethought. In fact, in the times I have thought about jumping ship and trying another career, the notion of not being an astronomer anymore never bothered me. And I think it still doesn't. As long as the job is interesting, I think I'd be happy.

So, there's my Sunday confession. I'm happy being a research astronomer trying to understand the universe, but it has never been a dream of mine. I think this has helped me weather some of the trials facing researchers in establishing a career. I never wanted to be an astronomer.

Oh, and I don't think much of Star Trek either.

### John Baez - Azimuth

Programming with Data Flow Graphs

Network theory is catching on—in a very practical way!

Google recently started a new open source library called TensorFlow. It’s for software built using data flow graphs. These are graphs where the edges represent tensors—that is, multidimensional arrays of numbers—and the nodes represent operations on tensors. Thus, they are reminiscent of the spin networks used in quantum gravity and gauge theory, or the tensor networks used in renormalization theory. However, I bet the operations involved are nonlinear! If so, they’re more general.

TensorFlow™ is an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them. The flexible architecture allows you to deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile device with a single API. TensorFlow was originally developed by researchers and engineers working on the Google Brain Team within Google’s Machine Intelligence research organization for the purposes of conducting machine learning and deep neural networks research, but the system is general enough to be applicable in a wide variety of other domains as well.

### What is a Data Flow Graph?

Data flow graphs describe mathematical computation with a directed graph of nodes and edges. Nodes typically implement mathematical operations, but can also represent endpoints to feed in data, push out results, or read/write persistent variables. Edges describe the input/output relationships between nodes. These data edges carry dynamically-sized multidimensional data arrays, or tensors. The flow of tensors through the graph is where TensorFlow gets its name. Nodes are assigned to computational devices and execute asynchronously and in parallel once all the tensors on their incoming edges become available.
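The build-the-graph-first, run-it-later idea can be sketched in a few lines of plain Python. This is a toy illustration of the concept, not the TensorFlow API; all the names here are invented for the example:

```python
# Toy data flow graph: nodes are operations, edges carry values.
# A node's op fires once the values on all its incoming edges are available.

class Node:
    def __init__(self, op, inputs=()):
        self.op = op          # callable applied to the input values
        self.inputs = inputs  # upstream nodes, i.e. the incoming edges

    def run(self, cache=None):
        # Evaluate this node after its inputs, memoizing so that
        # shared subgraphs execute only once.
        cache = {} if cache is None else cache
        if self not in cache:
            args = [n.run(cache) for n in self.inputs]
            cache[self] = self.op(*args)
        return cache[self]

# Construction phase: build the graph without running anything...
x = Node(lambda: 2.0)
y = Node(lambda: 3.0)
s = Node(lambda a, b: a + b, (x, y))   # s = x + y
p = Node(lambda a, b: a * b, (s, y))   # p = (x + y) * y

# ...execution phase: run the graph (TensorFlow does this in a session).
print(p.run())  # (2 + 3) * 3 = 15.0
```

In TensorFlow the edges carry tensors rather than scalars, and the scheduler dispatches ready nodes to CPUs and GPUs in parallel, but the two-phase structure is the same.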

### TensorFlow Features

Deep Flexibility. TensorFlow isn’t a rigid neural networks library. If you can express your computation as a data flow graph, you can use TensorFlow. You construct the graph, and you write the inner loop that drives computation. We provide helpful tools to assemble subgraphs common in neural networks, but users can write their own higher-level libraries on top of TensorFlow. Defining handy new compositions of operators is as easy as writing a Python function and costs you nothing in performance. And if you don’t see the low-level data operator you need, write a bit of C++ to add a new one.

True Portability. TensorFlow runs on CPUs or GPUs, and on desktop, server, or mobile computing platforms. Want to play around with a machine learning idea on your laptop without need of any special hardware? TensorFlow has you covered. Ready to scale-up and train that model faster on GPUs with no code changes? TensorFlow has you covered. Want to deploy that trained model on mobile as part of your product? TensorFlow has you covered. Changed your mind and want to run the model as a service in the cloud? Containerize with Docker and TensorFlow just works.

Connect Research and Production. Gone are the days when moving a machine learning idea from research to product required a major rewrite. At Google, research scientists experiment with new algorithms in TensorFlow, and product teams use TensorFlow to train and serve models live to real customers. Using TensorFlow allows industrial researchers to push ideas to products faster, and allows academic researchers to share code more directly and with greater scientific reproducibility.

Auto-Differentiation. Gradient based machine learning algorithms will benefit from TensorFlow’s automatic differentiation capabilities. As a TensorFlow user, you define the computational architecture of your predictive model, combine that with your objective function, and just add data — TensorFlow handles computing the derivatives for you. Computing the derivative of some values w.r.t. other values in the model just extends your graph, so you can always see exactly what’s going on.
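The "computing a derivative just extends your graph" idea is reverse-mode automatic differentiation: each operation records its local gradients as it builds the graph, and a backward pass accumulates them. Here is a minimal toy version in plain Python; it is illustrative only, not TensorFlow's actual implementation, and the class and method names are invented:

```python
# Minimal reverse-mode automatic differentiation over a recorded graph.

class Var:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents  # (parent_var, local_gradient) pairs
        self.grad = 0.0

    def __add__(self, other):
        # d(a + b)/da = 1, d(a + b)/db = 1
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        # d(a * b)/da = b, d(a * b)/db = a
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self, seed=1.0):
        # Accumulate d(output)/d(self), then push it to the parents,
        # scaled by each edge's local gradient (the chain rule).
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)

x = Var(2.0)
y = Var(3.0)
z = (x + y) * y          # z = x*y + y^2

z.backward()             # seeds dz/dz = 1 and walks the graph backwards
print(z.value)           # 15.0
print(x.grad)            # dz/dx = y         = 3.0
print(y.grad)            # dz/dy = x + 2y    = 8.0
```

Note that `y.grad` sums contributions from both paths through the graph, which is exactly why autodiff frameworks represent the computation as a graph rather than a formula.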

Language Options. TensorFlow comes with an easy-to-use Python interface and a no-nonsense C++ interface to build and execute your computational graphs. Write stand-alone TensorFlow Python or C++ programs, or try things out in an interactive TensorFlow IPython notebook where you can keep notes, code, and visualizations logically grouped. This is just the start though — we’re hoping to entice you to contribute SWIG interfaces to your favorite language — be it Go, Java, Lua, JavaScript, or R.

Maximize Performance. Want to use every ounce of muscle in that workstation with 32 CPU cores and 4 GPU cards? With first-class support for threads, queues, and asynchronous computation, TensorFlow allows you to make the most of your available hardware. Freely assign compute elements of your TensorFlow graph to different devices, and let TensorFlow handle the copies.

### Who Can Use TensorFlow?

TensorFlow is for everyone. It’s for students, researchers, hobbyists, hackers, engineers, developers, inventors and innovators and is being open sourced under the Apache 2.0 open source license.

TensorFlow is not complete; it is intended to be built upon and extended. We have made an initial release of the source code, and continue to work actively to make it better. We hope to build an active open source community that drives the future of this library, both by providing feedback and by actively contributing to the source code.

### Why Did Google Open Source This?

If TensorFlow is so great, why open source it rather than keep it proprietary? The answer is simpler than you might think: We believe that machine learning is a key ingredient to the innovative products and technologies of the future. Research in this area is global and growing fast, but lacks standard tools. By sharing what we believe to be one of the best machine learning toolboxes in the world, we hope to create an open standard for exchanging research ideas and putting machine learning in products. Google engineers really do use TensorFlow in user-facing products and services, and our research group intends to share TensorFlow implementations alongside many of our research publications.

For more details, try this: