# Particle Physics Planet

## October 03, 2015

### Georg von Hippel - Life on the lattice

Fundamental Parameters from Lattice QCD, Last Days
The last few days of our scientific programme were quite busy for me, since I had agreed to give the summary talk on the final day. I therefore did not get around to blogging, and will keep this much-delayed summary rather short.

On Wednesday, we had a talk by Michele Della Morte on non-perturbatively matched HQET on the lattice and its use to extract the b quark mass, and a talk by Jeremy Green on the lattice measurement of the nucleon strange electromagnetic form factors (which are purely disconnected quantities).

On Thursday, Sara Collins gave a review of heavy-light hadron spectra and decays, and Mike Creutz presented arguments for why the question of whether the up-quark is massless is scheme dependent (because the sum and difference of the light quark masses are protected by symmetries, but will in general renormalize differently).

On Friday, I gave the summary of the programme. The main themes that I identified were the question of how to estimate systematic errors, and how to treat them in averaging procedures, the issues of isospin breaking and scale setting ambiguities as major obstacles on the way to sub-percent overall precision, and the need for improved communication between the "producers" and "consumers" of lattice results. In the closing discussion, the point was raised that for groups like CKMfitter and UTfit the correlations between different lattice quantities are very important, and that lattice collaborations should provide the covariance matrices of the final results for different observables that they publish wherever possible.

### Clifford V. Johnson - Asymptotia

Benedict

I call this part of the garden Benedict, for obvious reasons... right?


## October 02, 2015

### Emily Lakdawalla - The Planetary Society Blog

New Horizons releases new color pictures of Charon, high-resolution lookback photo of Pluto
Now that New Horizons is regularly sending back data, the mission is settling into a routine of releasing a set of captioned images on Thursdays, followed by raw LORRI images on Friday. The Thursday releases give us the opportunity to see lovely color data from the spacecraft's Ralph MVIC instrument. This week, the newly available color data set covered Charon.

### Christian P. Robert - xi'an's og

moral [dis]order

“For example, a religiously affiliated college that receives federal grants could fire a professor simply for being gay and still receive those grants. Or federal workers could refuse to process the tax returns of same-sex couples simply because of bigotry against their marriages. It doesn’t stop there. As critics of the bill quickly pointed out, the measure’s broad language — which also protects those who believe that “sexual relations are properly reserved to” heterosexual marriages alone — would permit discrimination against anyone who has sexual relations outside such a marriage. That would appear to include women who have children outside of marriage, a class generally protected by federal law.” The New York Times

An excerpt from this week’s New York Times Sunday Review editorial about what it describes as “a nasty bit of business congressional Republicans call the First Amendment Defense Act.” A bill whose first line states that it is intended to “prevent discriminatory treatment of any person on the basis of views held with respect to marriage” and which in essence would shield discriminatory treatment of homosexual and unmarried couples from prosecution. A fine example of Newspeak if any! (Maybe they could also borrow Orwell‘s notion of a Ministry of Love.) Another excerpt of the bill that similarly competes for Newspeak of the Year:

(5) Laws that protect the free exercise of religious beliefs and moral convictions about marriage will encourage private citizens and institutions to demonstrate tolerance for those beliefs and convictions and therefore contribute to a more respectful, diverse, and peaceful society.

This reminded me of a story I was recently told about a friend of a friend who is currently employed by a Catholic school in Australia and is afraid of being fired if found to be pregnant outside of marriage. What kind of “freedom” is to be defended in such “tolerant” behaviours?!

Filed under: Kids, Travel Tagged: 1984, bigotry, discrimination, George Orwell, Newspeak, same-sex marriage, The New York Times

### Tommaso Dorigo - Scientificblogging

Thank You Guido
It is with great sadness that I heard (reading it here first) about the passing away of Guido Altarelli, a very distinguished Italian theoretical physicist. Altarelli is best known for the formulas that bear his name, the Altarelli-Parisi equations (also known as DGLAP since it was realized that due recognition for the equations had to be given also to Dokshitzer, Gribov, and Lipatov). But Altarelli was a star physicist who gave important contributions to Standard Model physics in a number of ways.

### Christian P. Robert - xi'an's og

Argentan, 30th and 17th and 7th edition(s)

When I started the ‘Og, in 2008, I was about to run the 23rd edition of the Argentan half-marathon… Seven years later, I am once again getting ready for the race, after a rather good training season, between the mountains of the North Cascades and the track of Malakoff, with the last week in England, Holland, and Canada having seen close to two trainings a day. (Borderline stress injury, maybe!) Weather does not look too bad this year, so we’ll see tomorrow how I fare against myself (and the other V2 runners, incidentally!).

Filed under: Running, Travel Tagged: Argentan, France, half-marathon, Malakoff, Normandy, North Cascades National Park, veteran (V2)

### Symmetrybreaking - Fermilab/SLAC

The burgeoning field of neutrino astronomy

A new breed of experiments seeks sources of cosmic rays and other astrophysics phenomena.

Ghostlike subatomic particles called neutrinos could hold clues to some of the greatest scientific questions about our universe: What extragalactic events create ultra-high-energy cosmic rays? What happened in the first seconds following the big bang? What is dark matter made of?

Scientists are asking these questions in a new and fast-developing field called neutrino astronomy, says JoAnne Hewett, director of Elementary Particle Physics at SLAC National Accelerator Laboratory.

“When I was a graduate student I never thought we’d be thinking about neutrino astronomy,” she says. “Now not only are we thinking about it, we’re already doing it. At some point it will be a standard technique.”

Neutrinos, the most abundant massive particles in the universe, are produced in a multitude of different processes. The new neutrino astronomers go after several types of neutrinos: ultra-high-energy neutrinos and neutrinos from supernovae, which they can already detect, and low-energy ones they have only measured indirectly so far.

“Every time we look for these astrophysical neutrinos, we’re hoping to learn two things,” says André de Gouvêa, a theoretical physicist at Northwestern University: what high-energy neutrinos can tell us about the processes that produced them, and what low-energy neutrinos can tell us about the conditions of the early universe.

#### Ultra-high-energy neutrinos

At the ultra-high-energy end of the spectrum, researchers hope to follow cosmic neutrinos like a trail of bread crumbs back to their sources. They are thought to originate in the universe’s most powerful, natural particle accelerators, such as supermassive black holes.

“We’re confident we’ve seen neutrinos coming from outside (our galaxy)—astrophysical sources,” says Kara Hoffman, a physics professor at the University of Maryland. She is a member of the international collaboration for IceCube, the largest neutrino telescope on the planet, which uses a cubic kilometer of South Pole ice as a massive, ultrasensitive detector.

Scientists have been tracking high-energy particles from space for decades. But cosmic neutrinos are different: Because they are neutral particles, they travel in a straight line, unaffected by the magnetic fields of space.

IceCube collaborators are exploring whether there is a correlation between ultra-high-energy neutrino events and observations of incredibly intense releases of energy known as gamma-ray bursts. Scientists also hope to learn whether there is a correlation between these neutrino events and theorized phenomena known as gravitational waves.

Alexander Friedland, a theorist at SLAC, says high-energy neutrinos (which are less energetic than ultra-high-energy neutrinos) can provide a useful window into physics at the earliest stages of supernovae explosions.

“Neutrinos tell you about the explosion engine, and what happens later when the shock goes through,” Friedland says. “These are very rich conditions that you can never make on Earth. This is an amazing experiment that nature made for us.”

With modern detectors it may be possible to detect thousands of neutrinos and to reconstruct their energy on a second-by-second basis.

“Neutrinos basically give you a different eye to look at the universe and a unique probe of new physics,” Friedland says.

#### Low-energy neutrinos

At the low-energy end of the spectrum, researchers hope to find “relic” neutrinos produced at the start of the universe, leftovers from the big bang. Their energy is expected to be more than a quadrillion times lower than the highest-energy neutrinos.

The lower the energy of the neutrino, however, the harder it is to detect. So for now, the cosmic neutrino background remains somewhat out of reach.

“We already know a lot about it, even though we’ve never seen it directly,” de Gouvêa says. “If we look at the universe at very large scales, we can only explain things if this background exists. We can safely say: ‘Either this cosmic neutrino background exists, or there is something out there that behaves exactly like neutrinos do.’”

The European Space Agency’s Planck satellite has helped to shape our understanding of this relic neutrino background, and the planned ground-based Large Synoptic Survey Telescope will provide new data. These surveys provide bounds on the quantity and interaction of these relic neutrinos, and can give us information about neutrino mass.

As detectors become more sensitive, researchers may also learn whether a theorized particle called a “sterile neutrino” may be a component in dark matter, the invisible stuff we know accounts for most of the mass of the universe.

Some proposed experiments, such as PTOLEMY at Princeton Plasma Physics Laboratory and the Project 8 collaboration, led by scientists at the Massachusetts Institute of Technology and University of California, Santa Barbara, are working to establish properties of these neutrinos by watching for evidence of their production in a radioactive form of hydrogen called tritium.

There are several upgrades and new projects in the works in the fledgling field of neutrino astronomy.

A proposal called PINGU would extend the sensitivity of the IceCube array to a broader range of neutrino energies. It could look for neutrinos coming from the center of the sun, a possible sign of dark matter interactions, and could also look for neutrinos produced in Earth’s atmosphere.

Another project would greatly expand an underwater neutrino observatory in the Mediterranean called Antares. A third project would build a large-scale observatory in a lake in Siberia.

Scientists also hope to eventually establish the Askaryan Radio Array, a 100-cubic-kilometer neutrino detector in Antarctica.

The field of neutrino astronomy is young, but it’s constantly growing and improving, Hoffman says.

“It’s kind of like having a Polaroid that you’re waiting to develop, and you just start to see the shadow of something,” she says. “What the picture’s going to look like we don’t really know.”


### Emily Lakdawalla - The Planetary Society Blog

Thousands of Photos by Apollo Astronauts now on Flickr
A cache of more than 8,400 unedited, high-resolution photos taken by Apollo astronauts during trips to the moon is now available for viewing and download on Flickr.

### Emily Lakdawalla - The Planetary Society Blog

Mars Exploration Rovers Update: Opportunity Rocks on Ancient Water During Walkabout
Opportunity continued her walkabout around Marathon Valley in September and sent home more evidence of significant water alteration and, perhaps, an ancient environment inviting enough for the emergence of life.

### ZapperZ - Physics and Physicists

25% Of Physics Nobel Laureates Are Immigrants
The people at Physics World have done an interesting but not surprising study on the number of Physics Nobel laureates who are/were immigrants. They found that this number is more than 1/4 of all Physics Nobel winners.

They discussed what they used as the criterion for an "immigrant", and the chart they showed makes it very clear that there is a huge influx of this talent into the US.

Still, it would be nice to see how many of these immigrants did their Nobel Prize-winning work before they migrated. And I definitely want to see these statistics for the next 10-20 years, especially now that the US is severely cutting budgets for basic physics research, the effects of which will not be felt immediately.

In any case, it is that time of the year again when we all make our predictions or guesses about who will win this prize this year. I am still pinning my hopes on a woman winning it, considering that we have had very strong candidates for several years.

Zz.

### Peter Coles - In the Dark

The 9 kinds of physics seminar

I just couldn’t resist reblogging this!! :-)

Originally posted on Many Worlds Theory:

As a public service, I hereby present my findings on physics seminars in convenient graph form.  In each case, you will see the Understanding of an Audience Member (assumed to be a run-of-the-mill PhD physicist) graphed as a function of Time Elapsed during the seminar.  All talks are normalized to be of length 1 hour, although this might not be the case in reality.

The “Typical” starts innocently enough: there are a few slides introducing the topic, and the speaker will talk clearly and generally about a field of physics you’re not really familiar with.  Somewhere around the 15 minute mark, though, the wheels will come off the bus.  Without you realizing it, the speaker will have crossed an invisible threshold and you will lose the thread entirely.  Your understanding by the end of the talk will rarely ever recover past 10%.

The “Ideal” is what physicists strive for in…


### CERN Bulletin

Guido Altarelli (1941 - 2015)

The CERN community was deeply saddened to learn that Guido Altarelli had passed away on 30 September.

He was a true giant of particle physics and of CERN. His contributions to physics span all subjects, from strong to electroweak interactions, from neutrinos to theories beyond the Standard Model, and from the study of precision measurements to the analysis of apparent anomalies, whose interpretation in terms of new physics he often exposed as naïve and unjustified. He left milestones in the progress of our field wherever he went. The awards of the Sakurai Prize in 2012 and of the EPS Prize in 2015 rank him among the greats, and reflect only in part the wealth of knowledge he gave to high-energy physics. Guido Altarelli was not only a great scientist, but also a person of great integrity.

He was always available to make the bridge between experiment and theory and to share his time and wisdom with the experiments and the wider laboratory. The scientific community has lost a great scientist and a great friend.

The Director-General has sent a letter of condolence to his family and a full obituary will follow in the CERN Courier.

### CERN Bulletin

Researchers' Night: POPScience in Balexert
European Researchers’ Night was held on 25 September 2015. With POP Science - the Researchers’ Night event for the Geneva region - CERN met the public at the Balexert shopping centre.

### CERN Bulletin

Collection for Refugee and Migration Crisis
Dear Colleagues,

In response to the current refugee and migration crisis, we are starting a collection today and we are calling on your generosity. The funds will be forwarded to the International Federation of Red Cross and Red Crescent Societies to respond to the humanitarian needs of the refugees and migrants, providing immediate and longer-term relief, including emergency medical care and basic health services, psychological support, temporary shelter, distribution of food and water, and other urgently needed items. We hope that your contributions to this appeal will not prevent you from also sparing a thought for them and doing whatever you can to help them.

Bank account details for donations:
Bank account holder: Association du personnel CERN - 1211 GENEVE 23
Account number: 279-HU106832.1
IBAN: CH85 0027 9279 HU10 6832 1
BIC: UBSWCHZH80A
Please mention: Refugee and Migration Crisis

### CERN Bulletin

Cine club
Wednesday 7 October 2015 at 20:00, CERN Council Chamber
The Day of the Beast (El día de la bestia)
Directed by Álex de la Iglesia
Spain, 1995, 100 minutes
A Basque priest finds by means of a cabalistic study of the Bible that the Antichrist is going to be born on Christmas Day in Madrid. Assisted by a heavy-metal fan and the host of a TV show on the occult, he will try to summon the Devil to find out the place of birth and kill the baby.
Original version Spanish; English subtitles

Wednesday 14 October 2015 at 20:00, CERN Council Chamber
Tesis
Directed by Alejandro Amenábar
Spain, 1996, 125 minutes
Why are death and violence so fascinating? Is it morally correct to show violence in movies? If so, is there a limit to what we should show? That is the subject of Ángela's examination paper. She is a young student at a film school in Madrid. Together with fellow student Chema (who is totally obsessed with violent movies), she finds a snuff movie in which a young girl is tortured and killed. Soon they discover that the girl was a former student at their school...
Original version Spanish; English subtitles

### astrobites - astro-ph reader's digest

Cosmic Rays Make for Windy Galaxies

Title: Launching Cosmic Ray-Driven Outflows from the Magnetized Interstellar Medium
Authors: Philipp Girichidis et al.
First author’s institution: Max-Planck-Institut für Astrophysik, Garching, Germany
Status: Submitted to ApJ

Galaxy evolution is a game of balance. Galaxies grow as they accumulate gas from what is called the intergalactic medium and as they merge with smaller galaxies. Over time, galaxies convert their gas into stars. This inflow of gas and the subsequent star formation is balanced out by the outflow of gas from galactic winds, through a process known as feedback. As suggested by observations and seen in simulations, these winds are driven by supernovae explosions that occur as stars die. Eventually, gas driven out may fall back into the galaxy, continuing a stellar circle of life.

Although simulations have done a good job reproducing galaxy properties while accounting for feedback from supernovae alone, this is far from the complete picture. Cosmic rays, or high energy protons, electrons, and nuclei moving near the speed of light, can create a significant pressure in galaxies through collisions with the gas in galaxies. Supernovae explosions are important sources of cosmic rays in galaxies. Simulating cosmic rays is computationally challenging, yet they may be very important for understanding the life cycle and structure of galaxies. In addition, since cosmic rays are charged particles, how they interact with magnetic fields in galaxies may also be very important. The authors of today’s astrobite use hydrodynamic simulations with magnetic fields (magnetohydrodynamics, MHD), cosmic rays, and supernovae to try to better understand their roles in driving galactic winds.

#### Testing Feedback

Figure 1: Density slices through the gas disk in each of three simulations. The vertical axis gives height above the disk. From left to right, the simulations include only thermal energy from supernovae explosions, only cosmic rays from supernovae, and both. (Source: Figure 1 of Girichidis et al. 2015)

The authors aim to understand how supernovae, magnetic fields, and cosmic rays affect the evolution of gas contained within the disk of a galaxy. In particular, their test is to see which, if any, of the three best reproduces the density and temperature distribution of gas as a function of height above the gas disk. They perform three simulations of a galaxy disk, all of which include magnetic fields. One simulation includes only the thermal energy injected by supernovae explosions, one includes only cosmic rays generated by supernovae, and the third includes both. The gas density in these three simulations is shown (left to right) in Figure 1, 250 million years after the start of the simulation.

Figure 2: This is a more quantitative view of what is shown in Figure 1. Shown is the gas density as a function of height above the disk (z) for the run with thermal supernovae energy only (black), cosmic rays only (blue), and both (red). These are compared against observations of the Milky Way (yellow). (Source: Figure 2 of Girichidis et al. 2015)

Putting numbers to Figure 1, Figure 2 shows the gas density as a function of height above the disk for all three simulations: thermal supernovae energy only (black), cosmic rays only (blue), and both (red). The vertical lines show the position within which each simulation contains 90% of the gas mass. As shown, including only thermal supernovae energy produces a dense disk, with little gas above the disk. Adding in cosmic rays changes this significantly, driving out quite a lot of gas mass to large heights above the disk. This is in part because cosmic rays are able to quickly diffuse to large distances above the disk. The gas from the disk then flows out from the disk to large heights, following the large pressure gradient established by the cosmic rays. The cosmic ray simulations do a much better job of matching the yellow line, which gives observational estimates of the gas density above the disk of our Milky Way.
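As a back-of-the-envelope illustration of what those 90% lines measure (the scale heights below are invented for illustration, not values from the simulations): for an exponential vertical profile rho(z) proportional to exp(-|z|/h), the height enclosing a mass fraction f is z_f = -h ln(1 - f), so a puffier, cosmic-ray-driven disk pushes the 90% line outward in direct proportion to its scale height.

```python
import numpy as np

# Hypothetical illustration: for an exponential vertical gas profile
# rho(z) = rho0 * exp(-|z|/h), the height |z| enclosing a mass fraction f
# is z_f = -h * ln(1 - f). A larger scale height h (a puffier disk, as in
# the cosmic-ray runs) pushes the 90% enclosed-mass line to larger |z|.
def enclosed_height(h, f=0.9):
    """Height above the midplane containing a fraction f of the gas mass."""
    return -h * np.log(1.0 - f)

# Made-up scale heights (kpc) for a thin vs. a cosmic-ray-inflated disk
for name, h in [("thermal only", 0.06), ("with cosmic rays", 0.6)]:
    print(name, enclosed_height(h))  # 90% mass height in kpc
```

For f = 0.9 the enclosed height is h ln 10, roughly 2.3 scale heights, which is why a factor-of-ten change in h moves the 90% line by the same factor.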

In addition, the authors go on to show that, over time, including cosmic rays serves to slowly grow the thickness of the gas disk, and quickly dumps gas at large heights above the disk. They also show that the mass flow rate of galactic winds generated by cosmic rays is nearly an order of magnitude greater than those generated by thermal energy injection alone.

#### Developing an Accurate Model of Galaxy Formation

This research aims to better describe the evolution of galaxies by including the effects of supernovae feedback as well as the not-well-understood effects of cosmic rays and magnetic fields in their simulations. Their work shows that cosmic rays are able to drive out a significant amount of gas from the disks of galaxies, potentially tilting the balance between gas inflow and star formation, and gas outflow. Understanding this process better with further work will bring the properties of simulated galaxies into better agreement with observed galaxies in our Universe.

## October 01, 2015

### Emily Lakdawalla - The Planetary Society Blog

Cargo Craft Completes Six-Hour Schlep to ISS
A Russian cargo craft laden with more than three tons of food, fuel and supplies arrived at the International Space Station today.

### Christian P. Robert - xi'an's og

importance sampling with multiple MCMC sequences

Vivek Roy, Aixian Tan and James Flegal arXived a new paper, Estimating standard errors for importance sampling estimators with multiple Markov chains, where they obtain a central limit theorem and hence standard error estimates when using several MCMC chains to simulate from a mixture distribution acting as an importance sampling function. I read it just before boarding my plane from Amsterdam to Calgary, which gave me the opportunity to go through it completely (along with half a dozen other papers, since it is a long flight!). Because of the mixture structure, I first thought it connected to our AMIS algorithm (on whose convergence Vivek spent a few frustrating weeks when he visited me at the end of his PhD). It is actually altogether different, in that the mixture is made of unnormalised densities complex enough that its components can only be simulated via separate MCMC algorithms, while the mixture itself acts as an importance sampler. Behind this characterisation lurks the challenging problem of estimating multiple normalising constants. The paper adopts the resolution by reverse logistic regression advocated in Charlie Geyer’s famous 1994 unpublished technical report. Besides the technical difficulties in establishing a CLT in this convoluted setup, the notion of mixing importance sampling and different Markov chains is quite appealing, especially in the domain of “tall” data and of splitting the likelihood into several or even many bits, since the mixture contains most of the information provided by the true posterior and can be corrected by an importance sampling step. In this very setting, I also think more adaptive schemes could be found to determine (estimate?!) the optimal weights of the mixture components.
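As a toy illustration of the general idea (not the paper’s construction, which also requires estimating ratios of normalising constants via reverse logistic regression), the sketch below runs a separate random-walk Metropolis chain on each of two unnormalised mixture components, pools the draws, and reweights them towards a bimodal target by self-normalised importance sampling; here the two components’ normalising constants are equal by design, sidestepping that estimation step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unnormalised bimodal target we want expectations under (weights 0.7/0.3)
log_target = lambda x: np.logaddexp(np.log(0.7) - 0.5 * (x - 3.0)**2,
                                    np.log(0.3) - 0.5 * (x + 3.0)**2)

# Two mixture components, each sampled by its own Metropolis chain
log_comp = [lambda x: -0.5 * (x - 3.0)**2,   # component centred at +3
            lambda x: -0.5 * (x + 3.0)**2]   # component centred at -3

def metropolis(log_f, x0, n, step=1.0):
    """Random-walk Metropolis chain targeting exp(log_f)."""
    x, out = x0, np.empty(n)
    for i in range(n):
        prop = x + step * rng.standard_normal()
        if np.log(rng.uniform()) < log_f(prop) - log_f(x):
            x = prop
        out[i] = x
    return out

n = 20_000
chains = [metropolis(f, x0, n) for f, x0 in zip(log_comp, [3.0, -3.0])]
pooled = np.concatenate(chains)

# Both components are unit Gaussians here, so their normalising constants
# cancel and the equal-weight mixture density is available directly; in the
# paper the ratios of constants must themselves be estimated.
log_mix = np.logaddexp(log_comp[0](pooled), log_comp[1](pooled)) - np.log(2)
log_w = log_target(pooled) - log_mix
w = np.exp(log_w - log_w.max())

# Self-normalised importance-sampling estimate of E[X] under the target
est = np.sum(w * pooled) / np.sum(w)
print(est)  # close to 0.7*3 + 0.3*(-3) = 1.2
```

The weights stay bounded because the mixture covers both modes of the target, which is exactly the appeal of the mixture-of-chains construction.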

Filed under: Mountains, pictures, Statistics, Travel, University life Tagged: adaptive MCMC, Ames, AMIS, Amsterdam, Charlie Geyer, importance sampling, Iowa, MCMC, Monte Carlo Statistical Methods, normalising constant, splitting data

### Emily Lakdawalla - The Planetary Society Blog

Favorite Astro Plots #1: Asteroid orbital parameters
This is the first in a series of posts in which scientists share favorite planetary science plots. For my #FaveAstroPlot, I explain what you can see when you look at how asteroid orbit eccentricity and inclination vary with distance from the Sun.

### arXiv blog

IQ Test Result: Advanced AI Machine Matches Four-Year-Old Child's Score

Artificial intelligence machines are rapidly gaining on humans, but they have some way to go on IQ tests.

### Tommaso Dorigo - Scientificblogging

Researchers' Night 2015
Last Friday I was invited by the University of Padova to talk about particle physics to the general public, on the occasion of the "Researchers' Night", a yearly event organized by the European Commission which takes place throughout Europe - in 280 cities this year. Of course I gladly accepted the invitation, although it caused some trouble for my travel schedule (I was in Austria for lectures until Friday morning, and you don't want to see me driving when I am in a hurry, especially on a 500 km route).

### astrobites - astro-ph reader's digest

The first detection of an inverse Rossiter-McLaughlin effect

How do you observe an Earth transit from Earth? You look at sunlight reflected from large, highly reflective surfaces. Good candidates include planets and their moons. They are the largest mirrors in the solar system.

On 5 January 2014, Earth transited the Sun as seen from Jupiter and its moons. Jupiter itself is not a good sunlight reflector, due to its high rotational velocity and its turbulent atmosphere. Its major solid satellites are better mirrors. Therefore, the authors observed Jupiter’s moons Europa and Ganymede during the transit (see Figure 2), and took spectra of the Sun from the reflected sunlight with HARPS and HARPS-N, two very precise radial-velocity spectrographs. The former is located at La Silla in Chile, and the latter at La Palma in the Canary Islands. The authors’ goal was to measure the Rossiter-McLaughlin effect.

Fig 1: The Earth, and the Moon as they would appear to an observer on Jupiter on 5 January 2014, transiting the Sun. Figure 1 from the paper.

Fig 2: The geometric configuration of Jupiter and its moons, as seen from the Sun. Figure 2 from the paper.

Transits: sequential blocking of a star

The Rossiter-McLaughlin effect is a spectroscopic effect observed during transit events (see Figure 3). As a star rotates on its axis, half of the visible stellar photosphere moves towards the observer (blueshifted), while the other visible half moves away from the observer (redshifted). As a transiting object (in our case, a planet) moves across the star, it blocks out one quadrant of the star first, and then the other. This sequential blocking of blue- and redshifted regions causes the observed stellar spectrum to vary. More specifically, the uneven contribution from the two stellar quadrants distorts the spectral line profiles, causing the apparent radial velocity of the star to change when in fact it does not. The effect can give information on a) the planet’s radius, and b) the angle between the sky projections of the planet’s orbital axis and the stellar rotation axis.

Fig 3: The Rossiter-McLaughlin effect: as a planet passes in front of a star, it sequentially blocks blue- and redshifted regions of the star, causing the star’s apparent radial velocity to change when in fact it does not. The viewer is at the bottom. Figure from Wikipedia.
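The sequential-blocking picture can be made concrete with a toy numerical model (all quantities below are invented for illustration; this is not the paper’s analysis): pixelate a uniformly bright, rigidly rotating stellar disk, occult part of it with a planet, and compute the flux-weighted mean line-of-sight velocity of what remains.

```python
import numpy as np

# Toy Rossiter-McLaughlin model: a uniform-brightness star rotating as a
# solid body, partially occulted by a planet. All numbers are hypothetical.
v_eq = 2000.0    # equatorial rotation speed, m/s
r_planet = 0.1   # planet radius, in stellar radii

# Pixel grid over the stellar disk (stellar radius = 1)
x, y = np.meshgrid(np.linspace(-1, 1, 400), np.linspace(-1, 1, 400))
on_disk = x**2 + y**2 <= 1.0
v_los = v_eq * x   # line-of-sight velocity of each surface element

def apparent_rv(x_p, y_p=0.0):
    """Flux-weighted mean velocity of the unocculted photosphere."""
    visible = on_disk & ((x - x_p)**2 + (y - y_p)**2 > r_planet**2)
    return v_los[visible].mean()

# Blocking the blueshifted half shifts the mean redward, and vice versa:
# the classic RM "wiggle", of order tens of m/s for these parameters.
for x_p in (-0.5, 0.0, 0.5):
    print(x_p, apparent_rv(x_p))
```

The sign flip across mid-transit is the signature the authors were looking for; the inverse effect they actually found has the opposite sign at each phase.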

Observations of the transit

Figure 4 shows the whole set of corrected radial velocities measured around the transit, including observations of Jupiter’s moons on the nights before and after. The transit, as seen from Jupiter’s moons, took about 9 hours and 40 minutes. The best available coverage of the event was 6 hours from HARPS-N at La Palma Observatory. HARPS at La Silla Observatory was able to observe the transit for about an hour.

Fig 4: Corrected radial velocities measured on Jan 4-6, 2014. Vertical dashed lines denote the start, middle, and end of the transit. Observations of Europa from La Palma cover about 6 hours of the transit (black circles). Coloured points are observations of Ganymede (cyan) and Europa (red) from La Silla. Figure 4 from the paper.

An anomalous drift

The expected modulation in the solar radial velocities due to the transit was on the order of 20 cm/s. The Moon, which also partook in the transit (see Figure 1 again), added a few cm/s to this number.

Instead of detecting the expected 20 cm/s modulation, the authors detected a much larger signal, on the order of -38 m/s, a modulation about 400 times larger than expected and opposite in sign (see the peak in Figure 4): an inverse Rossiter-McLaughlin effect.

The authors ruled out instrumental effects as the cause of the observed modulation, as the two spectrographs showed consistent results. They also ruled out a dependence of the anomalous signal on the magnetic activity of the Sun, using observations conducted simultaneously with the Birmingham Solar Oscillations Network (BiSON). They had another idea.

The culprit: Europa’s opposition surge

The authors suggest that the anomaly is produced by Europa’s opposition surge.

The opposition surge is a brightening of a rocky celestial surface when it is observed at opposition. An example of an object at opposition is the full moon. The “surge” part refers to the increase in reflected solar radiation observed at opposition. This is due to a combination of two effects. First, at opposition the reflective surface shows almost no shadows. Second, at opposition, light waves scattered along reversed paths through the dust particles close to the surface can interfere constructively, increasing its reflectivity. The latter effect is called coherent backscatter.

The authors created a simple model for Europa’s opposition surge and compared it to their observations (see Figure 5). It works. As the Earth moves across the face of the Sun, rather than blocking the light (as in the Rossiter-McLaughlin effect shown in Figure 3), the net effect is that the light grazing the Earth is amplified. The Earth thus acts as a lens, not only compensating for the light lost during the eclipse but making the Sun appear much brighter! This explains the opposite sign and the amplitude of the effect. Additionally, the amplification of reflected light is not confined to the transit itself, but grows gradually as Earth gets closer to transiting and as Europa gets closer to being at opposition. The effect is symmetric, and is analogously observed as Earth moves out of transit.
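Shadow-hiding surges of this kind are often described with a Hapke-type phase function. The sketch below is a minimal illustrative version (the function name and the parameters B0 and h are invented for illustration, not the authors’ actual model or fit):

```python
import numpy as np

# A minimal opposition-surge model (Hapke-style shadow-hiding term).
# B0 sets the surge amplitude and h its angular width; both values here
# are illustrative, not fitted to the Europa observations.
def surge_factor(alpha_deg, B0=1.0, h=0.02):
    """Brightness enhancement as a function of phase angle alpha (degrees).

    At exact opposition (alpha = 0) the factor is 1 + B0, and it falls
    off steeply over an angular scale set by h."""
    alpha = np.radians(alpha_deg)
    return 1.0 + B0 / (1.0 + np.tan(alpha / 2.0) / h)

# The reflected flux rises steeply as Europa approaches opposition
for alpha in (5.0, 1.0, 0.1, 0.0):
    print(alpha, surge_factor(alpha))
```

The steep, symmetric rise and fall of such a factor as the phase angle passes through zero matches the gradual, symmetric amplification described above.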

Fig 5: The model of the opposition surge (thick black line) compared to observations from HARPS-N at La Palma (red), and HARPS at La Silla (blue). The dotted blue line shows the originally expected Rossiter-McLaughlin effect, amplified 50-fold for visibility. It is much smaller than the observed signal. Figure 7 from the paper.

Conclusion

This is the first time an inverse Rossiter-McLaughlin effect, caused by a moon’s opposition surge, has been detected. The authors predict the effect can be observed again during the next conjunction of Earth and Jupiter in 2016. Although that will be a grazing transit with a smaller amplitude than the one studied in this paper, the authors can now predict with confidence the extent of the newly discovered effect in the upcoming event.

## September 30, 2015

### Christian P. Robert - xi'an's og

a simulated annealing approach to Bayesian inference

A misleading title if any! Carlos Albert arXived a paper with this title this morning and I rushed to read it. Because it sounded like Bayesian analysis could be expressed as a special form of simulated annealing. But it happens to be a rather technical sequel [“that complies with physics standards”] to another paper I had missed, A simulated annealing approach to ABC, by Carlos Albert, Hans Künsch, and Andreas Scheidegger. Paper that appeared in Statistics and Computing last year, and which is most interesting!

“These update steps are associated with a flow of entropy from the system (the ensemble of particles in the product space of parameters and outputs) to the environment. Part of this flow is due to the decrease of entropy in the system when it transforms from the prior to the posterior state and constitutes the well-invested part of computation. Since the process happens in finite time, inevitably, additional entropy is produced. This entropy production is used as a measure of the wasted computation and minimized, as previously suggested for adaptive simulated annealing” (p.3)

The notion behind this simulated annealing intrusion into the ABC world is that the choice of the tolerance can be adapted along iterations according to a simulated annealing schedule. Both papers make use of thermodynamics notions that are completely foreign to me, like endoreversibility, but aim at minimising the “entropy production of the system, which is a measure for the waste of computation”. The central innovation is to introduce an augmented target on (θ,x) that is

f(x|θ)π(θ)exp{-ρ(x,y)/ε},

where ε is the tolerance, while ρ(x,y) is a measure of distance to the actual observations, and to treat ε as an annealing temperature. In an ABC-MCMC implementation, the acceptance probability of a random walk proposal (θ’,x’) is then

exp{ρ(x,y)/ε-ρ(x’,y)/ε}∧1.

Under some regularity constraints, the sequence of targets converges to

π(θ|y)exp{-ρ(x,y)},

if ε decreases slowly enough to zero. While the representation of ABC-MCMC through kernels other than the Heaviside function can be found in the earlier ABC literature, the embedding of tolerance updating within the modern theory of simulated annealing is rather exciting.
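To make the annealed acceptance step above concrete, here is a minimal sketch on a toy Gaussian model. The flat prior, symmetric random-walk proposal, distance ρ, and fixed geometric cooling schedule are all my assumptions for illustration; the papers' point is precisely to adapt the tolerance and proposal scale rather than fixing them.

```python
import numpy as np

rng = np.random.default_rng(0)
y = 1.0  # observed (summary) datum

def simulate(theta):
    # toy stochastic model: x ~ N(theta, 1)
    return theta + rng.normal()

def rho(x, y):
    # distance between simulated and observed data
    return abs(x - y)

def abc_mcmc_annealed(n_iter=5000, eps0=5.0, eps_min=0.3, scale=0.5):
    theta, x = 0.0, simulate(0.0)
    draws = np.empty(n_iter)
    for t in range(n_iter):
        # fixed geometric cooling schedule (an assumption; the paper
        # instead adapts eps by minimising entropy production)
        eps = max(eps0 * 0.999 ** t, eps_min)
        theta_prop = theta + scale * rng.normal()
        x_prop = simulate(theta_prop)
        # acceptance probability exp{rho(x,y)/eps - rho(x',y)/eps} ∧ 1;
        # flat prior and symmetric proposal, so their ratios cancel
        log_alpha = (rho(x, y) - rho(x_prop, y)) / eps
        if np.log(rng.uniform()) < log_alpha:
            theta, x = theta_prop, x_prop
        draws[t] = theta
    return draws

draws = abc_mcmc_annealed()
```

With a flat prior the ABC posterior concentrates around θ ≈ y as ε shrinks, which is why the chain settles near the observed value.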

“Furthermore, we will present an adaptive schedule that attempts convergence to the correct posterior while minimizing the required simulations from the likelihood. Both the jump distribution in parameter space and the tolerance are adapted using mean fields of the ensemble.” (p.2)

What I cannot infer from a rather quick perusal of the papers is whether or not the implementation gets in the way of the all-inclusive theory. For instance, how can the Markov chain keep moving as the tolerance gets to zero? Even with a particle population and a sequential Monte Carlo implementation, it is unclear why the proposal scale factor [as in equation (34)] does not collapse to zero in order to ensure a non-zero acceptance rate. In the published paper, the authors used the same toy mixture example as ours [from Sisson et al., 2007], where we earned the award of the “incredibly ugly squalid picture”, with improvements in the effective sample size, but this remains a toy example. (Hopefully a post to be continued in more depth…)

Filed under: Books, pictures, Statistics, University life Tagged: ABC, ABC-MCMC, ABC-SMC, Bayesian Analysis, endoreversibility, mixture, Monte Carlo Statistical Methods, particle system, sequential Monte Carlo, simulated annealing, Switzerland

### Symmetrybreaking - Fermilab/SLAC

Q&A with Fermilab’s first artist-in-residence

Symmetry sits down with Lindsay Olson as she wraps up a year of creating art inspired by particle physics.

S: How did you end up at Fermilab?

LO: In March 2014 I had an exhibition of my work at North Park College. Several members of the Fermilab art committee attended my talk. Hearing me speak about one of my residencies, Georgia Schwender, curator of Fermilab’s art gallery, invited me to help her establish a pilot residency that would continue Fermilab’s tradition of nurturing both art and science.

S: What did you do during your residency?

LO: During a residency, I want to have a full immersion experience. I worked closely with passionate scientists, including Don Lincoln, Sam Zeller and Debbie Harris. I read books and popular science journalism, attended public lectures, and watched videos. This immersive learning is the scaffolding from which I create my art.

S: What’s your artistic process like?

LO: I want to make engaging, accessible art about real, complicated science: art that will connect with the public and inspire them to ask their own questions about the nature of reality and the origin of the cosmos. When I converse with a scientist, I glean the key points and translate them in an artistic way. Many artists use oil paint, watercolor and other traditional materials. But when I work, I want to use media to reinforce the message in the art. Everyone uses textiles in their daily lives, so creating work in them felt like a natural choice.

S: What inspired you at Fermilab?

LO: The Standard Model was the first piece of physics I learned. This conceptual tool was not only an appropriate beginning for the project, but a door into a fascinating way to understand reality. Passionate scientists of the present and science heroes of the past, especially Ray Davis, Richard Feynman and Robert Wilson, also inspired me.

S: What is one of your most memorable experiences at Fermilab?

LO: I took several training courses, including radiation safety training. This allowed me to shadow operators into the guts of several experiments during a recent shutdown. It was thrilling. Accelerator science is about riding a bucking bronco of energetic particles. Understanding how the messy beam behaves showed me that nature is not just about forests, creatures and rocks. At the subatomic level, nature is wild, energetic and mysterious. I plan to make large-scale drawings based on what I have learned in the Accelerator Division.

S: Did anything surprise you?

LO: I’ve been surprised at every turn. As an artist, I’ve been trained to observe the surface of reality. Everything looks solid and unmoving. But the subatomic realm is far more spacious and energetic than I could have imagined.

S: How did you become interested in expressing science in your art?

LO: Before I created art about science, I painted landscapes. I created portraits of area waterways. I was editing out all the manmade features and creating idealized images of streams and rivers. One day I was canoeing past an aeration station on the Chicago Canal and became curious about the real story of water in a dense urban area. I approached the District about beginning an art project that would tell this story. I started a residency at the Metropolitan Water Reclamation District of Greater Chicago. Strange as it may sound, I fell in love with science in the middle of a wastewater treatment plant.

S: How did your residency at Fermilab differ from past residencies?

LO: The most striking difference is the amount of resources available at Fermilab. It’s hard to imagine any other government agency where you will find not only cutting-edge science, but also a buffalo herd, a beautiful art gallery, a concert hall, a restored prairie and a graveyard.

S: What will you take with you when you leave Fermilab?

LO: One of the most powerful lessons I learned with this residency is that I am not afraid to learn any kind of science. I have limits because I lack the background in math. Despite this, I feel confident about learning enough science to make meaningful art. If I can learn science, others can too.

S: What’s next?

LO: Once I’ve finished the art, the project is far from over. Finding places to show the work I made while at Fermilab will be the next challenge. I want to use the work to inspire viewers to take a closer look at science in general and particle physics in particular. I hope the project helps people with no technical training, like me, to appreciate the beauty and elegance of our universe.

I have no set plans for my next residency, but I have a few ideas simmering on the back burner. Perhaps I will be surprised by another opportunity. My residency with Fermilab has changed my view of reality enough for me to know that there are surprises out in the universe for any of us who take the time to discover what science can teach us.

### The n-Category Cafe

An exact square from a Reedy category

I first learned about exact squares from a blog post written by Mike Shulman on the $n$-Category Café.

Today I want to describe a family of exact squares, which are also homotopy exact, that I had not encountered previously. These make a brief appearance in a new preprint, A necessary and sufficient condition for induced model structures, by Kathryn Hess, Magdalena Kedziorek, Brooke Shipley, and myself.

Proposition. If $R$ is any (generalized) Reedy category, with $R^+ \subset R$ the direct subcategory of degree-increasing morphisms and $R^- \subset R$ the inverse subcategory of degree-decreasing morphisms, then the pullback square $$\begin{array}{ccc} iso(R) & \to & R^- \\ \downarrow & \swArrow id & \downarrow \\ R^+ & \to & R \end{array}$$ is (homotopy) exact.

In summary, a Reedy category $(R, R^+, R^-)$ gives rise to a canonical exact square, which I’ll call the Reedy exact square.

## Exact squares and Kan extensions

Let’s recall the definition. Consider a square of functors inhabited by a natural transformation $$\begin{array}{ccc} A & \overset{f}{\to} & B \\ {}^{u}\downarrow & \swArrow\alpha & \downarrow^{v} \\ C & \underset{g}{\to} & D \end{array}$$ For any category $M$, precomposition defines a square $$\begin{array}{ccc} M^A & \overset{f^\ast}{\leftarrow} & M^B \\ {}^{u^\ast}\uparrow & \swArrow\alpha^\ast & \uparrow^{v^\ast} \\ M^C & \underset{g^\ast}{\leftarrow} & M^D \end{array}$$ Supposing there exist left Kan extensions $u_! \dashv u^\ast$ and $v_! \dashv v^\ast$ and right Kan extensions $f^\ast \dashv f_\ast$ and $g^\ast \dashv g_\ast$, the mates of $\alpha^\ast$ define canonical Beck-Chevalley transformations: $$u_! f^\ast \Rightarrow g^\ast v_! \quad\text{and}\quad v^\ast g_\ast \Rightarrow f_\ast u^\ast.$$ Note that if either of the Beck-Chevalley transformations is an isomorphism, the other one is too, by the (contravariant) correspondence between natural transformations between a pair of left adjoints and natural transformations between the corresponding right adjoints.

Definition. The square $$\begin{array}{ccc} A & \overset{f}{\to} & B \\ {}^{u}\downarrow & \swArrow\alpha & \downarrow^{v} \\ C & \underset{g}{\to} & D \end{array}$$ is an exact square if, for any $M$ admitting pointwise Kan extensions, the Beck-Chevalley transformations are isomorphisms.

Comma squares provide key examples, in which case the Beck-Chevalley isomorphisms recover the limit and colimit formulas for pointwise Kan extensions.

The notion of homotopy exact square is obtained by replacing $MM$ by some sort of homotopical category, the adjoints by derived functors, and “isomorphism” by “equivalence.”

## The proof

In the preprint we give a direct proof that these Reedy squares are exact by computing the Kan extensions, but exactness follows more immediately from the following characterization theorem, stated using comma categories. The natural transformation $\alpha \colon v f \Rightarrow g u$ induces a functor $B \downarrow f \times_A u \downarrow C \to v \downarrow g$ over $C \times B$, defined on objects by sending a pair $b \to f(a),\ u(a) \to c$ to the composite morphism $v(b) \to v f(a) \to g u(a) \to g(c)$. Fixing a pair of objects $b$ in $B$ and $c$ in $C$, this pulls back to define a functor $b \downarrow f \times_A u \downarrow c \to v b \downarrow g c$.

Theorem. A square $$\begin{array}{ccc} A & \overset{f}{\to} & B \\ {}^{u}\downarrow & \swArrow\alpha & \downarrow^{v} \\ C & \underset{g}{\to} & D \end{array}$$ is exact if and only if each fiber of $b \downarrow f \times_A u \downarrow c \to v b \downarrow g c$ is non-empty and connected.

See the nLab for a proof. Similarly, the square is homotopy exact if and only if each fiber of this functor has a contractible nerve.

In the case of a Reedy square $$\begin{array}{ccc} iso(R) & \to & R^- \\ \downarrow & \swArrow id & \downarrow \\ R^+ & \to & R \end{array}$$ these fibers are precisely the categories of Reedy factorizations of a fixed morphism. For an ordinary Reedy category $R$, Reedy factorizations are unique, and so the fibers are terminal categories. For a generalized Reedy category, Reedy factorizations are unique up to unique isomorphism, so the fibers are contractible groupoids.

## Reedy diagrams as bialgebras

For any category $M$, the objects in the lower right-hand corner of the square $$\begin{array}{ccc} M^{iso(R)} & \leftarrow & M^{R^-} \\ \uparrow & \swArrow id & \uparrow \\ M^{R^+} & \leftarrow & M^R \end{array}$$ are Reedy diagrams in $M$, and the functors restrict to various subdiagrams. Because the indexing categories all have the same objects, if $M$ is bicomplete each of these restriction functors is both monadic and comonadic. If we think of $M^{R^-}$ as being comonadic over $M^{iso(R)}$ and $M^{R^+}$ as being monadic over $M^{iso(R)}$, then the Beck-Chevalley isomorphism exhibits $M^R$ as the category of bialgebras for the monad induced by the direct subcategory $R^+$ and the comonad induced by the inverse subcategory $R^-$.

There is a homotopy-theoretic interpretation of this, which I’ll describe in the case where $R$ is a strict Reedy category (so that $iso(R) = ob(R)$), though it works in the generalized context as well. If $M$ is a model category, then $M^{iso(R)}$ inherits a model structure, with everything defined objectwise. The Reedy model structure on $M^{R^-}$ coincides with the injective model structure, which has cofibrations and weak equivalences created by the restriction functor $M^{R^-} \to M^{iso(R)}$; we might say this model structure is “left-induced”. Dually, the Reedy model structure on $M^{R^+}$ coincides with the projective model structure, which has fibrations and weak equivalences created by $M^{R^+} \to M^{iso(R)}$; this is “right-induced”.

The Reedy model structure on $M^R$ then has two interpretations: it is right-induced along the monadic restriction functor $M^R \to M^{R^-}$ and it is left-induced along the comonadic restriction functor $M^R \to M^{R^+}$. The paper A necessary and sufficient condition for induced model structures describes a general technique for inducing model structures on categories of bialgebras, which reproduces the Reedy model structure in this special case.

### Jester - Resonaances

Weekend plot: minimum BS conjecture
This weekend plot completes my last week's post:

It shows the phase diagram for models of natural electroweak symmetry breaking. These models can be characterized by 2 quantum numbers:

• B [Baroqueness], describing how complicated the model is relative to the standard model;
• S [Specialness], describing the fine-tuning needed to achieve electroweak symmetry breaking with the observed Higgs boson mass.

To allow for a fair comparison, in all models the cut-off scale is fixed to Λ=10 TeV. The standard model (SM) has, by definition,  B=1, while S≈(Λ/mZ)^2≈10^4.  The principle of naturalness postulates that S should be much smaller, S ≲ 10.  This requires introducing new hypothetical particles and interactions, therefore inevitably increasing B.
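As a quick sanity check on the quoted value, the standard-model fine-tuning S ≈ (Λ/mZ)² can be evaluated directly (my arithmetic, using mZ ≈ 91.2 GeV):

```python
# fine-tuning measure quoted for the standard model: S ≈ (Λ / m_Z)²
Lambda_GeV = 10_000.0   # cut-off Λ = 10 TeV
mZ_GeV = 91.2           # Z-boson mass
S = (Lambda_GeV / mZ_GeV) ** 2   # ≈ 1.2e4, i.e. of order 10^4
```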

The most popular approach to reducing S is by introducing supersymmetry.  The minimal supersymmetric standard model (MSSM) does not make fine-tuning better than 10^3 in the bulk of its parameter space. To improve on that, one needs to introduce large A-terms (aMSSM), or R-parity breaking interactions (RPV), or an additional scalar (NMSSM).  Another way to decrease S is realized in models where the Higgs arises as a composite Goldstone boson of new strong interactions. Unfortunately, in all of those models, S cannot be smaller than 10^2 due to phenomenological constraints from colliders. To suppress S even further, one has to resort to the so-called neutral naturalness, where new particles beyond the standard model are not charged under the SU(3) color group. The twin Higgs – the simplest model of neutral naturalness – can achieve S≈10 at the cost of introducing a whole parallel mirror world.

The parametrization proposed here leads to a striking observation. While one can increase B indefinitely (many examples have been proposed in the literature), for a given S there seems to be a minimum value of B below which no models exist.  In fact, the conjecture is that the product B*S is bounded from below:
BS ≳ 10^4.
One robust prediction of the minimum BS conjecture is the existence of a very complicated (B=10^4) yet-to-be-discovered model with no fine-tuning at all.  The take-home message is that one should always try to minimize BS, even if for fundamental reasons it cannot be avoided completely ;)

## September 29, 2015

### Symmetrybreaking - Fermilab/SLAC

New discovery? Or just another bump?

For physicists, seeing is not always believing.

In the 1960s physicists at the University of California, Berkeley saw evidence of new, unexpected particles popping up in data from their bubble chamber experiments.

But before throwing a party, the scientists did another experiment. They repeated their analysis, but instead of using the real data from the bubble chamber, they used fake data generated by a computer program, which assumed there were no new particles.

The scientists performed a statistical analysis on both sets of data, printed the histograms, pinned them to the wall of the physics lounge, and asked visitors to identify which plots showed the new particles and which plots were fakes.

No one could tell the difference. The fake plots had just as many impressive deviations from the theoretical predictions as the real plots.

Eventually, the scientists determined that some of the unexpected bumps in the real data were the fingerprints of new composite particles. But the bumps in the fake data remained the result of random statistical fluctuations.

So how do scientists differentiate between random statistical fluctuations and real discoveries?

Just like a baseball analyst can’t judge if a rookie is the next Babe Ruth after nine innings of play, physicists won’t claim a discovery until they know that their little bump-on-a-graph is the real deal.

After the histogram “social experiment” at Berkeley, scientists developed a one-size-fits-all rule to separate the “Hall of Fame” discoveries from the “few good games” anomalies: the five-sigma threshold.

“Five sigma is a measure of probability,” says Kyle Cranmer, a physicist from New York University working on the ATLAS experiment. “It means that if a bump in the data is the result of random statistical fluctuation and not the consequence of some new property of nature, then we could expect to see a bump at least this big again only if we repeated our experiment a few million more times.”

To put it another way, five sigma means that there is only a 0.00003 percent chance scientists would see this result due to statistical fluctuations alone—a good indication that there’s probably something hiding under that bump.
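The 0.00003 percent figure can be checked in a couple of lines: five sigma corresponds to the one-sided tail probability of a Gaussian, computable with the standard library's `erfc`.

```python
import math

def upper_tail(n_sigma):
    """One-sided Gaussian tail probability of fluctuating up by n_sigma."""
    return 0.5 * math.erfc(n_sigma / math.sqrt(2))

p5 = upper_tail(5)
print(f"{p5:.2e} = {p5 * 100:.5f} percent")  # ≈ 2.87e-07 = 0.00003 percent
```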

But the five-sigma threshold is more of a guideline than a golden rule, and it does not tell physicists whether they have made a discovery, according to Bob Cousins, a physicist at the University of California, Los Angeles working on the CMS experiment.

“A few years ago scientists posted a paper claiming that they had seen faster-than-light neutrinos,” Cousins says. But few people seemed to believe it—even though the result was six sigma. (A six-sigma result is a couple of hundred times stronger than a five-sigma result.)

The five-sigma rule is typically used as the standard for discovery in high-energy physics, but it does not incorporate another equally important scientific mantra: The more extraordinary the claim, the more evidence you need to convince the community.

“No one was arguing about the statistics behind the faster-than-light neutrinos observation,” Cranmer says. “But hardly anyone believed they got that result because the neutrinos were actually going faster than light.”

Within minutes of the announcement, physicists started dissecting every detail of the experiment to unearth an explanation. Anticlimactically, it turned out to be a loose fiber optic cable.

The “extraordinary claims, extraordinary evidence” philosophy also holds true for the inverse of the statement: If you see something you expected, then you don’t need as much evidence to claim a discovery. Physicists will sometimes relax their stringent statistical standards if they are verifying processes predicted by the Standard Model of particle physics—a thoroughly vetted description of the microscopic world.

“But if you don’t have a well-defined hypothesis that you are testing, you increase your chances of finding something that looks impressive just because you are looking everywhere,” Cousins says. “If you perform 800 broad searches across huge mass ranges for new particles, you’re likely to see at least one impressive three-sigma bump that isn’t anything at all.”
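Cousins' point about 800 searches can be made quantitative with a quick estimate, assuming (as a simplification) that the searches are independent:

```python
import math

def upper_tail(n_sigma):
    # one-sided Gaussian tail probability
    return 0.5 * math.erfc(n_sigma / math.sqrt(2))

p3 = upper_tail(3)                    # ≈ 1.35e-3 chance per search
n_searches = 800
p_any = 1 - (1 - p3) ** n_searches    # chance of at least one 3-sigma bump
# p_any ≈ 0.66: an "impressive" bump somewhere is more likely than not
```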

In the end, there is no one-size-fits-all rule that separates discoveries from fluctuations. Two scientists could look at the same data, make the same histograms and still come to completely different conclusions.

So which results wind up in textbooks and which results are buried in the archive?

“This decision comes down to two personal questions: What was your prior belief, and what is the cost of making an error?” Cousins says. “With the Higgs discovery, we waited until we had overwhelming evidence of a Higgs-like particle before announcing the discovery, because if we made an error it could weaken people’s confidence in the LHC research program.”

Experimental physicists have another way of verifying their results before making a discovery claim: comparable studies from independent experiments.

“If one experiment sees something but another experiment with similar capabilities doesn’t, the first thing we would do is find out why,” Cranmer says. “People won’t fully believe a discovery claim without a solid cross check.”

Like what you see? Sign up for a free subscription to symmetry!

### Lubos Motl - string vacua and pheno

CMS: a $$2.9\TeV$$ electron-positron pair resonance
Bonus: An ATLAS $$\mu\mu j$$ event with $$m=2.9\TeV$$ will be discussed at the end of this blog post.
A model with exactly this prediction was published in June

Two days ago, I discussed four LHC collisions suggesting a particle of mass $$5.2\TeV$$. Today, just two days later, Tommaso Dorigo described a spectacular dielectron event seen by CMS on August 22nd. See also the CERN document server; CERN graduate students have to prepare a PDF file for each of the several quadrillion collisions. ;-)

On that Tuesday, the world stock markets were just recovering from the two previous cataclysmic days while the CMS detector enjoyed a more pleasing day with one of the $$13\TeV$$ collisions that have turned the LHC into a rather new kind of a toy.

This is how the outcome of the collision looked from the direction of the beam. The electron and positron were flying almost exactly in the opposite direction, each having about $$1.25\TeV$$ of transverse energy. A perfectly balanced picture.

You may see the collision from another angle, too:

The electron-positron pair is the only notable thing that is going on.

The fun is that no such high-energy collision was seen in the $$8\TeV$$ run – even though that run accumulated more than 100 times as many collisions as the ongoing $$13\TeV$$ run has in 2015. When you demand truly highly energetic particles in the final state, the weakness of the $$8\TeV$$ run in 2012 becomes self-evident.

The expected number of similar collisions with invariant mass $$M_{e^+e^-}\gt 2.5\TeV$$ in the CMS dataset of 2015 (so far) has been estimated as $$\langle N \rangle =0.002$$. Because this number is so small that we may neglect the possibility of more than one such event, it may be interpreted as the probability that one event (and not zero events) takes place. For masses above $$2.85\TeV$$, you would almost certainly get a probability of $$0.001$$ or less.

If you take the estimate $$p=0.002$$ seriously, it means that either the CMS detector has been 1:500 "lucky" to see a high-energy event that is actually noise; or it is seeing a new particle that may decay to the electron-positron pair.

Such a new particle would probably be neutral from all points of view. It could be a heavier cousin of the $$Z$$-boson, a $$Z'$$-boson. That would be the gauge boson associated with a new $$U(1)_{\rm new}$$ gauge symmetry. Most types of vacua in string theory tend to predict lots of these additional $$U(1)$$ groups.

And your humble correspondent can even offer you a paper that predicts a $$Z'$$-boson of mass $$2.9\TeV$$. See the bottom of page 10 here. (Sadly, they made the prediction less accurate in v2 of their preprint.) The left-right-symmetric model in the paper also intends to explain the excesses near $$2\TeV$$ – as a $$W'$$-boson. The model is lepto-phobic (LP) which means that only right-handed quarks are arranged to doublets of $$SU(2)_R$$ while the right-handed leptons remain $$SU(2)_R$$ singlets. It's the model with the Higgs triplet (LPT) that gives the right $$Z'$$-boson mass.

Just for fun, let me show you the calculation of the invariant mass. The coordinates of the two electron-like particles are written as$\eq{ p_T &= 1.27863\TeV\\ \eta &= - 1.312\\ \phi &= 0.420 }$ and $\eq{ p_T &= 1.25620\TeV\\ \eta &= - 0.239\\ \phi &= -2.741 }$ One may convert these coordinates to the Cartesian coordinates$\eq{ p_x &= p_T\cos \phi\\ p_y &= p_T\sin \phi\\ p_z &= p_T \sinh \eta \\ E &= p_T \cosh \eta }$ in the approximation $$m_e\ll E$$ i.e. $$m_e\sim 0$$: feel free to check that the 4-vector above is identically light-like. The two 4-vectors (in the order I chose above) are therefore$\eq{ \frac{p_A^\mu }{ {\rm TeV}}&= (1.16750, 0.521375, -2.20200, 2.54631) \\ \frac{p_B^\mu }{ {\rm TeV}}&= (-1.15675, -0.48987, -0.30310, 1.29225) }$ where the last coordinate is the energy. Now, because these 4-vectors are null, $(p_A^\mu+p_B^\mu)^2 = 2p_A^\mu p_{B,\mu} = (2.908\TeV)^2$ in the West Coast metric convention. You're invited to check it. Thanks to the Higgs Kaggle contest, I gained some intuition for the $$(p_T,\eta,\phi)$$ coordinates. ;-)
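The arithmetic in the paragraph above is easy to reproduce; a short script converting the $$(p_T,\eta,\phi)$$ coordinates to Cartesian 4-momenta in the massless approximation and forming the invariant mass:

```python
import math

def four_momentum(pt, eta, phi):
    """Massless 4-momentum (px, py, pz, E) from detector coordinates."""
    return (pt * math.cos(phi), pt * math.sin(phi),
            pt * math.sinh(eta), pt * math.cosh(eta))

pA = four_momentum(1.27863, -1.312, 0.420)    # electron-like particle A (TeV)
pB = four_momentum(1.25620, -0.239, -2.741)   # electron-like particle B (TeV)

# sum the 4-momenta and form the invariant mass M² = E² - |p|²
px, py, pz, E = (a + b for a, b in zip(pA, pB))
m = math.sqrt(E ** 2 - px ** 2 - py ** 2 - pz ** 2)  # ≈ 2.908 TeV
```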

In a few more weeks, we should see whether this highly energetic electron-positron event was a fluke or something much more interesting... You know, the progress on the energy frontier has been rather substantial. Note that $$13/8=1.625$$, an increase by 62.5%.

Lots of particles – the $$W$$-bosons, the $$Z$$-boson, the Higgs boson, and the top quark – are confined to the interval $$(70\GeV,210\GeV)$$ – safely four types of particles in an interval whose upper bound is thrice the lower bound. Now we can produce particles with masses up to $$5\TeV$$ or so. Why shouldn't we find any new particles with masses between $$175\GeV$$ and $$4,900\GeV$$ – an interval whose ratio of limiting energies is twenty-eight?

It's quite some jump, isn't it? ;-) It could harbor lots of so far secret and elusive animals.

Next Monday, the full-fledged physics collisions should resume and continue through early November.

### astrobites - astro-ph reader's digest

The APOGEE Treasure Trove
• Title: The Apache Point Observatory Galactic Evolution Experiment (APOGEE)
• Authors: Steven R. Majewski, Ricardo P. Schiavon, Peter M. Frinchaboy, et al. (there are more than 70 co-authors)
• First Author’s Institution: Dept. of Astronomy, University of Virginia, Charlottesville, VA (there are 50 institutions represented among the authors)
• Paper Status: Submitted to The Astronomical Journal

Apache Point Observatory in Sunspot, NM. The SDSS 2.5-m telescope is to the right, pointing toward the center of the Milky Way. The full moon and light pollution from nearby El Paso don’t stop APOGEE! Figure 15 in the paper.

What’s black, white, and re(a)d all over… and spent three years looking all around the sky? It’s APOGEE (the Apache Point Observatory Galactic Evolution Experiment), a three-year campaign that used a single 2.5-m telescope in New Mexico to collect half a million near-infrared spectra for 146,000 stars!

Black and white? As shown below, the raw spectra from APOGEE look black and white, but appearances can be deceiving. Each horizontal stripe is the spectrum of one star, spanning a range of colors redder than your eye can see. To get the spectra nicely stacked in an image like this, fiber-optic cables are plugged into metal plates which are specially drilled to let in slits of light from individual stars in different regions of the sky. Each star gets one fiber, which corresponds to one row on the detector. An image like this allows APOGEE to gather data for a multitude of stars quickly.

Part of a raw 2D APOGEE image from observations near the bulge of the Milky Way. Each horizontal stripe is a portion of one star’s near-infrared spectrum. The x-axis will correspond to wavelength once the spectra are processed. Bright vertical lines are from airglow, dark vertical lines common to all stars are from molecules in Earth’s atmosphere, and the dark vertical lines that vary from star to star are scientifically interesting stellar absorption lines that correspond to various elements. Fainter and brighter stars are intentionally interspersed to reduce contamination between stars. Figure 14 in the paper.
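The geometry described above – one fiber per star, one row per fiber – makes the extraction step conceptually simple. Here is a toy sketch in NumPy; the array dimensions and the fiber-to-row mapping are illustrative only, not APOGEE's actual data format or pipeline:

```python
import numpy as np

# Toy raw frame: 300 fiber rows by 2048 wavelength columns (illustrative sizes).
# Poisson counts stand in for photon noise on a roughly uniform background.
rng = np.random.default_rng(0)
raw_frame = rng.poisson(100, size=(300, 2048)).astype(float)

def extract_spectrum(frame, fiber_row):
    """Return the 1D spectrum recorded by one fiber (one detector row)."""
    return frame[fiber_row, :]

spectrum = extract_spectrum(raw_frame, fiber_row=42)
print(spectrum.shape)  # one star's spectrum: flux versus wavelength column
```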

Re(a)d all over? Today’s paper accompanies the latest public data release, DR12, of the Sloan Digital Sky Survey (SDSS), a large collaboration which includes APOGEE. So in addition to focusing on red giant stars viewed in near-infrared light, all the APOGEE data are now freely available and may be read by anyone. Even so, the APOGEE team has been hard at work.

Probing to new galactic depths with the near-infrared

APOGEE is designed to primarily observe evolved red giant stars in the Milky Way using near-infrared light. What’s so special about this setup? First, red giants are some of the brightest stars, so it’s possible to see them farther away than Sun-like stars. Second, near-infrared light doesn’t get blocked by dust like visible light does, so it lets APOGEE observe stars toward the center of the Milky Way, which is otherwise obscured with thick dust lanes. This is really important if you want to understand how different stellar populations in the galaxy behave.

Mapping velocities and composition

Because APOGEE collects spectra of stars, not images, each observation contains lots of information. Spectra tell us how fast a star is moving towards or away from us (its radial velocity), how hot a star is, what its surface gravity is like, and what elements it is made of. Lots of work has gone into developing a pipeline to process the spectra and return this information reliably, because it’s not practical to look at hundreds of thousands of observations by hand.

APOGEE visits each star at least three times to check if it is varying for any reason. (For example, binary stars will have different radial velocities at different times, and the APOGEE team wants to exclude binaries when they use star velocities to measure the overall motion of the galaxy.) The figures below show how a subset of stars mapped by APOGEE vary in radial velocity (top) and chemical composition (i.e., metallicity, bottom). The stars in both figures lie within two kiloparsecs above or below the disk of the Milky Way, so we are essentially seeing a slice of the middle of the galaxy. Observations don’t exist for the lower right quadrant of either figure, because that region is only visible from Earth’s southern hemisphere.
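Those radial velocities come from the Doppler shift of spectral lines: for speeds far below the speed of light, \(v \approx c\,\Delta\lambda/\lambda\). A minimal sketch (the line wavelength and shift below are made up for illustration):

```python
C_KM_S = 299_792.458  # speed of light in km/s

def radial_velocity(lambda_obs, lambda_rest):
    """Non-relativistic Doppler shift; positive means receding (redshifted)."""
    return C_KM_S * (lambda_obs - lambda_rest) / lambda_rest

# A near-infrared line with rest wavelength 16000 angstroms, observed
# shifted redward by 1.6 angstroms (illustrative numbers):
print(radial_velocity(16001.6, 16000.0))  # ~30 km/s, receding
```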

A map of stars observed by APOGEE, color-coded by radial velocity. The Sun is located at the center of the “spoke” of observations, and is defined as having zero radial velocity (greenish). An artist’s impression of the Milky Way is superimposed for context. This figure illustrates how the galaxy as a whole rotates. The Sun moves with the galaxy, and other stars’ relative motions depend on how far in front of or behind us they are. This astrobite has more details. Figure 24 from the paper.

A map of stars observed by APOGEE, color-coded by metallicity. As above, the Sun is in the center of the observation “spokes” and an artist’s impression of the Milky Way is superimposed for context. The Sun is defined to have 0 metallicity (greenish). Stars that are more chemically enriched than the Sun are red, and stars that have fewer metals than the Sun are blue. This figure illuminates an overall galactic metallicity gradient. Figure 25 from the paper.

Together, maps like these provide an unprecedented look into our galaxy’s past, present, and future by combining kinematics and the locations of stars with different chemistry. Thanks to APOGEE’s success, plans are now underway for APOGEE-2 in the southern hemisphere using a telescope in Chile. This treasure trove of data will undoubtedly be put to good use for years to come.

### Sean Carroll - Preposterous Universe

Core Theory T-Shirts

Way back when, for purposes of giving a talk, I made a figure that displayed the world of everyday experience in one equation. The label reflects the fact that the laws of physics underlying everyday life are completely understood.

So now there are T-shirts. (See below to purchase your own.)

It’s a good equation, representing the Feynman path-integral formulation of an amplitude for going from one field configuration to another one, in the effective field theory consisting of Einstein’s general theory of relativity plus the Standard Model of particle physics. It even made it onto an extremely cool guitar.

I’m not quite up to doing a comprehensive post explaining every term in detail, but here’s the general idea. Our everyday world is well-described by an effective field theory. So the fundamental stuff of the world is a set of quantum fields that interact with each other. Feynman figured out that you could calculate the transition between two configurations of such fields by integrating over every possible trajectory between them — that’s what this equation represents. The thing being integrated is the exponential of the action for this theory — as mentioned, general relativity plus the Standard Model. The GR part integrates over the metric, which characterizes the geometry of spacetime; the matter fields are a bunch of fermions, the quarks and leptons; the non-gravitational forces are gauge fields (photon, gluons, W and Z bosons); and of course the Higgs field breaks symmetry and gives mass to those fermions that deserve it. If none of that makes sense — maybe I’ll do it more carefully some other time.

Gravity is usually thought to be the odd force out when it comes to quantum mechanics, but that’s only if you really want a description of gravity that is valid everywhere, even at (for example) the Big Bang. But if you only want a theory that makes sense when gravity is weak, like here on Earth, there’s no problem at all. The little notation k < Λ at the bottom of the integral indicates that we only integrate over low-frequency (long-wavelength, low-energy) vibrations in the relevant fields. (That’s what gives away that this is an “effective” theory.) In that case there’s no trouble including gravity. The fact that gravity is readily included in the EFT of everyday life has long been emphasized by Frank Wilczek. As discussed in his latest book, A Beautiful Question, he therefore advocates lumping GR together with the Standard Model and calling it The Core Theory.
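Schematically, the amplitude being described has the structure below. This is a compressed sketch of the form, not a faithful transcription of the full T-shirt equation; coefficients and index conventions are suppressed:

```latex
W = \int_{k<\Lambda} [\mathcal{D}g]\,[\mathcal{D}A]\,[\mathcal{D}\psi]\,[\mathcal{D}\Phi]\;
\exp\!\left\{ i \int d^4x \,\sqrt{-g}
\left[ \frac{m_p^2}{2}\,R
     - \frac{1}{4} F^a_{\mu\nu} F^{a\,\mu\nu}
     + i\,\bar{\psi}\,\gamma^\mu D_\mu \psi
     + \left|D_\mu \Phi\right|^2 - V(\Phi)
     + \bar{\psi}_i\, Y_{ij}\, \Phi\, \psi_j \right] \right\}
```

Here \(R\) is the Ricci scalar (the general relativity piece), \(F^a_{\mu\nu}\) the gauge field strengths, \(\psi\) the quarks and leptons, \(\Phi\) the Higgs field with its Yukawa couplings \(Y_{ij}\), and \(k<\Lambda\) the effective-field-theory cutoff mentioned in the paragraph above.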

I couldn’t agree more, so I adopted the same nomenclature for my own upcoming book, The Big Picture. There’s a whole chapter (more, really) in there about the Core Theory. After finishing those chapters, I rewarded myself by doing something I’ve been meaning to do for a long time — put the equation on a T-shirt, which you see above.

I’ve had T-shirts made before, with pretty grim results as far as quality is concerned. I knew this one would be especially tricky, what with all those tiny symbols. But I tried out Design-A-Shirt, and the result seems pretty impressively good.

So I’m happy to let anyone who might be interested go ahead and purchase shirts for themselves and their loved ones. Here are the links for light/dark and men’s/women’s versions. I don’t actually make any money off of this — you’re just buying a T-shirt from Design-A-Shirt. They’re a little pricey, but that’s what you get for the quality. I believe you can even edit colors and all that — feel free to give it a whirl and report back with your experiences.

### ZapperZ - Physics and Physicists

Football Physics and Deflategate
This issue doesn't seem to want to go away.

Still, anyone who has been following this (at least here in the US) has heard of the "Deflategate" controversy from last year's NFL Football playoffs.

Chad Orzel has another look at this based on a recent paper out of The Physics Teacher, this time, from the physics involved with the football receivers.

Most of the coverage of “Deflategate” has focused on Patriots quarterback Tom Brady, and speculation that he arranged for the balls to be deflated so as to provide a better grip. The authors of the Physics Teacher paper, Gregory DiLisi and Richard Rarick, look at the other end of the problem, where the ball is caught by the receiver, thinking about it in terms of energy.

It certainly is another angle to the issue. I hope to get a copy of the paper soon and see what it says.

Zz.

### Peter Coles - In the Dark

Little Sun Charge by Olafur Eliasson

You might remember a piece I did a while ago about Little Sun by the artist Olafur Eliasson. This is a solar-powered lamp that charges up during the day and provides night-time illumination for those, e.g. in sub-Saharan Africa, without access to an electricity grid. I supported this project myself, including writing a piece here as part of the Little Charter for Light and Energy.

Well, it seems that in his travels around the world promoting Little Sun, Olafur received a lot of comments about how great it would be if the same principle could be used to provide a solar-powered mobile phone charger. So now – lo and behold! – there is a new product called Little Sun Charge. Here’s a little video about it:

I’m mentioning this here because Olafur is attempting to crowdfund this project via a kickstarter campaign. The campaign has already exceeded its initial target, but there are five days still remaining and every penny raised will be used to reduce the price of the charger so that it can be sold to off-grid customers for even less than originally planned.

So please visit the link and pledge some dosh! There are treats in store for those who do!

## September 28, 2015

### Peter Coles - In the Dark

Evidence for Liquid Water on Mars?

There’s been a lot of excitement this afternoon about possible evidence for water on Mars from the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) on board the Mars Reconnaissance Orbiter (MRO). Unfortunately, but I suppose inevitably, some of the media coverage has been a bit over the top, presenting the results as if they were proof of liquid water flowing on the Red Planet’s surface; NASA itself has pushed this interpretation. I think the results are indeed very interesting – but not altogether surprising, and by no means proof of the existence of flows of liquid water. And although they may indeed provide evidence confirming that there is water on Mars, we knew that already (at least in the form of ice and water vapour).

The full results are reported in a paper in Nature Geoscience. The abstract reads:

Determining whether liquid water exists on the Martian surface is central to understanding the hydrologic cycle and potential for extant life on Mars. Recurring slope lineae, narrow streaks of low reflectance compared to the surrounding terrain, appear and grow incrementally in the downslope direction during warm seasons when temperatures reach about 250–300 K, a pattern consistent with the transient flow of a volatile species [1,2,3]. Brine flows (or seeps) have been proposed to explain the formation of recurring slope lineae [1,2,3], yet no direct evidence for either liquid water or hydrated salts has been found [4]. Here we analyse spectral data from the Compact Reconnaissance Imaging Spectrometer for Mars instrument onboard the Mars Reconnaissance Orbiter from four different locations where recurring slope lineae are present. We find evidence for hydrated salts at all four locations in the seasons when recurring slope lineae are most extensive, which suggests that the source of hydration is recurring slope lineae activity. The hydrated salts most consistent with the spectral absorption features we detect are magnesium perchlorate, magnesium chlorate and sodium perchlorate. Our findings strongly support the hypothesis that recurring slope lineae form as a result of contemporary water activity on Mars.

Here’s a picture taken with the High Resolution Imaging Science Experiment (HIRISE) on MRO showing some of the recurring slope lineae (RSL):

You can see a wonderful gallery of other HIRISE images of other such features here.

The dark streaky stains in this and other examples are visually very suggestive of the possibility they were produced by flowing liquid. They also come and go with the Martian seasons, which suggests that they might involve something that melts in the summer and freezes in the winter. Putting these two facts together raises the quite reasonable question of whether, if that is indeed how they’re made, that liquid might be water.

What is new about the latest results – adding to the superb detail revealed by the HIRISE images – is that there is spectroscopic information that yields clues about the chemical composition of the stuff in the RSLs:

The black lines denote spectra taken at two different locations; the upper one has been interpreted as indicating the presence of some mixture of hydrated calcium, magnesium and sodium perchlorates (i.e. salts). I’m not a chemical spectroscopist so I don’t know whether other interpretations are possible, though I can’t say that I’m overwhelmingly convinced by the match between the data from laboratory specimens and that from Mars…

Anyway, if that is indeed what the spectroscopy indicates, then the obvious conclusion is that there is water present, for without water there can be no hydrated salts. This water could have been absorbed from the atmospheric vapour or from the ice below the surface. The presence of salts would lower the melting point of water ice, so this could explain how there could be some form of liquid flow at the sub-zero temperatures prevalent even in a Martian summer. It would not be pure running water, however, but an extremely concentrated salt solution, much saltier than sea water, probably in the form of a rather sticky brine. This brine might flow – or perhaps creep – down the sloping terrain (briefly) in the summer and then freeze. But nothing has actually been observed to flow in such a way. It seems to me – as a non-expert – that the features could be caused not by a flow of liquid, but by the disruption of the Martian surface, caused by melting and freezing, involving movement of solid material, or perhaps localized seeping. I’m not saying that it’s impossible that a flow of briny liquid is responsible for the features, just that I think it’s far from proven. But there’s no doubt that whatever is going on is fascinatingly complicated!
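The idealized textbook version of that melting-point argument is colligative freezing-point depression, \(\Delta T_f = i\,K_f\,m\). A quick sketch follows; note the dilute-solution formula badly underestimates the effect for a concentrated perchlorate brine (real perchlorate brines can stay liquid tens of kelvin below 273 K), but it shows the direction of the effect:

```python
# Dilute-limit freezing-point depression: dT = i * Kf * m
KF_WATER = 1.86  # cryoscopic constant of water, K·kg/mol

def freezing_point_depression(molality, van_t_hoff_i):
    """Depression of the freezing point in kelvin (dilute-solution formula)."""
    return van_t_hoff_i * KF_WATER * molality

# 2 mol/kg of a salt dissociating into 3 ions, e.g. Mg(ClO4)2 -> Mg + 2 ClO4:
print(freezing_point_depression(2.0, 3))  # ~11 K below 273 K
```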

The last sentence of the abstract quoted above reads:

Our findings strongly support the hypothesis that recurring slope lineae form as a result of contemporary water activity on Mars.

I’m not sure about the “strongly support”, but “contemporary water activity” is probably fair, as it includes the possibilities I discussed above; it does seem, however, to have led quite a few people to jump to the conclusion that it means “flowing water”, which I don’t think it does. Am I wrong to be so sceptical? Let me know through the comments box!

### Sean Carroll - Preposterous Universe

The Big Picture

Once again I have not really been the world’s most conscientious blogger, have I? Sometimes other responsibilities have to take precedence — such as looming book deadlines. And I’m working on a new book, and that deadline is definitely looming!

And here it is. The title is The Big Picture: On the Origins of Life, Meaning, and the Universe Itself. It’s scheduled to be published on May 17, 2016; you can pre-order it at Amazon and elsewhere right now.

An alternative subtitle was What Is, and What Matters. It’s a cheerfully grandiose (I’m supposed to say “ambitious”) attempt to connect our everyday lives to the underlying laws of nature. That’s a lot of ground to cover: I need to explain (what I take to be) the right way to think about the fundamental nature of reality, what the laws of physics actually are, sketch some cosmology and connect to the arrow of time, explore why there is something rather than nothing, show how interesting complex structures can arise in an undirected universe, talk about the meaning of consciousness and how it can be purely physical, and finally try to understand meaning and morality in a universe devoid of transcendent purpose. I’m getting tired just thinking about it.

From another perspective, the book is an explication of, and argument for, naturalism — and in particular, a flavor I label Poetic Naturalism. The “Poetic” simply means that there are many ways of talking about the world, and any one that is both (1) useful, and (2) compatible with the underlying fundamental reality, deserves a place at the table. Some of those ways of talking will simply be emergent descriptions of physics and higher levels, but some will also be matters of judgment and meaning.

As of right now the book is organized into seven parts, each with several short chapters. All that is subject to change, of course. But this will give you the general idea.

* Part One: Being and Stories

How we think about the fundamental nature of reality. Poetic Naturalism: there is only one world, but there are many ways of talking about it. Suggestions of naturalism: the world moves by itself, time progresses by moments rather than toward a goal. What really exists.

* Part Two: Knowledge and Belief

Telling different stories about the same underlying truth. Acquiring and updating reliable beliefs. Knowledge of our actual world is never perfect. Constructing consistent planets of belief, guarding against our biases.

* Part Three: Time and Cosmos

The structure and development of our universe. Time’s arrow and cosmic history. The emergence of memories, causes, and reasons. Why is there a universe at all, and is it best explained by something outside itself?

* Part Four: Essence and Possibility

Drawing the boundary between known and unknown. The quantum nature of deep reality: observation, entanglement, uncertainty. Vibrating fields and the Core Theory underlying everyday life. What we can say with confidence about life and the soul.

* Part Five: Complexity and Evolution

Why complex structures naturally arise as the universe moves from order to disorder. Self-organization and incremental progress. The origin of life, and its physical purpose. The anthropic principle, environmental selection, and our role in the universe.

* Part Six: Thinking and Feeling

The mind, the brain, and the body. What consciousness is, and how it might have come to be. Contemplating other times and possible worlds. The emergence of inner experiences from non-conscious matter. How free will is compatible with physics.

* Part Seven: Caring and Mattering

Why we can’t derive ought from is, even if “is” is all there is. And why we nevertheless care about ourselves and others, and why that matters. Constructing meaning and morality in our universe. Confronting the finitude of life, deciding what stories we want to tell along the way.

Hope that whets the appetite a bit. Now back to work with me.

### astrobites - astro-ph reader's digest

Missing: Several Large Planets

Title: Hunting for planets in the HL Tau disk
Authors: L. Testi, A. Skemer, Th. Henning et al.
First author’s institution: ESO, Karl Schwarzschild str. 2, D-85748 Garching bei Muenchen, Germany
Status: Accepted for publication in ApJ Letters

ALMA image of a disc of gas and dust around the young star HL Tau. The dark rings in the disc are thought to be gaps, carved out by giant planets. Image Credit: ALMA (ESO/NAOJ/NRAO)

Nearly a year ago, the ALMA collaboration released this stunning image of the young star HL Tau. The sub-millimeter wavelengths of light that ALMA detects revealed a vast disc of gas and dust, several times larger than Neptune’s orbit. Intriguingly, the disc was divided up into a series of well-defined, concentric rings.

The cause of the rings seemed clear: There must be planets around HL Tau, their gravity sculpting the gas and sweeping out the dark gaps in the disc.

But there was an issue with this hypothesis. HL Tau is a very young star, less than a million years old. Many planetary formation models assume that planets take much longer to grow to the kind of sizes needed to shape the disc like that. If the gaps are being made by planets, then those models will need a serious rethink.

The authors of today’s paper decided to take a closer look, and see if they could spot the hypothetical planets. This isn’t as easy as it may appear. In the part of the electromagnetic spectrum probed by ALMA, the star is relatively dim, allowing the light from the disc to be discerned. However, any planets present would shine in infrared light, with a much shorter wavelength. In infrared, the blinding light from HL Tau would easily outshine that from a planet.

Two techniques were used to overcome this problem. The first was simple: Use the biggest telescope that they could get their hands on, in this case the unique Large Binocular Telescope Interferometer (LBTI).

The second trick was to use adaptive optics. This technique uses a light source, such as a laser or, in the case of the LBTI, a well-known star, to correct for the distortions in light caused by the Earth’s atmosphere. As the telescope’s computers know what the guide star “should” look like, they can continuously flex a small mirror to counteract the effects of the atmosphere. This makes the images much clearer, enough to directly image planets around some stars.
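The correction loop can be caricatured as simple proportional feedback on the measured wavefront error. This is a toy model only; real AO systems reconstruct the wavefront from many sensor measurements and update the mirror thousands of times per second:

```python
def ao_loop(disturbances, gain=0.5):
    """Toy closed-loop adaptive optics: at each step, measure the residual
    error and move the deformable mirror a fraction `gain` toward cancelling it."""
    mirror = 0.0
    residuals = []
    for atm in disturbances:      # atmospheric phase error at this instant
        residual = atm - mirror   # what the sensor sees after correction
        mirror += gain * residual # flex the mirror to chase the error
        residuals.append(residual)
    return residuals

# For a constant disturbance the residual shrinks geometrically toward zero.
res = ao_loop([1.0] * 10)
print(res[0], res[-1])  # starts at 1.0, ends near 0.002 after ten iterations
```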

But even adaptive optics wasn’t enough to reveal planets around HL Tau. The last obstacle was the disc itself. The very reason for looking for planets had become a hindrance to spotting them, scattering the light from the star out to much greater distances than usual.

To remove this scattered light, the authors made two infrared observations of HL Tau, one at a slightly redder wavelength than the other. In both, any signal from planets was drowned out by the scattered light.

But the exoplanets were predicted to be much redder than the scattered light. This meant that they wouldn’t show up at all in the less-red image, regardless of the scattered light. However, they should have been somewhere in the second image, with the scattered light roughly the same in both. Subtract the first image from the second, and the scattered light would disappear, leaving just the planets.
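In array terms, that double-band trick is just a subtraction of aligned images. A toy sketch with NumPy follows; the image sizes, fluxes, and planet position are synthetic, not the actual LBTI reduction:

```python
import numpy as np

rng = np.random.default_rng(1)
# Scattered disc light: assumed identical in both bands for this toy model.
scattered = rng.uniform(10, 20, size=(64, 64))

planet = np.zeros((64, 64))
planet[40, 25] = 5.0           # a red point source, present only in the redder band

k_band = scattered.copy()      # less-red image: scattered light only
l_band = scattered + planet    # redder image: scattered light plus planet

difference = l_band - k_band   # scattered light cancels; the planet (if any) remains
y, x = np.unravel_index(np.argmax(difference), difference.shape)
print(y, x)  # brightest residual pixel: the injected planet's position
```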

Left: K-band infrared image, with scattered light only. Right: Slightly redder L’ band image, showing both scattered light and (if they are there) planets. Subtract one from the other and…

…No planets. Oh well. The subtracted image, with blue lines showing the most prominent gaps in the disc, the red star the position of HL Tau, and the green circle the position of a candidate planet from an older observation. Planets should show up as white dots near the rings, of which there are none to be seen.

When the authors did this, they spotted…nothing. Based on the precision of their data, they conclude that there are no planets larger than 10-15 times the mass of Jupiter near the gaps in HL Tau’s disc.

At first glance that doesn’t seem to be a problem. Planets that large aren’t all that common, and there could easily be planets too small for the LBTI to detect hiding in the gaps.

But planets any smaller than 10 Jupiter masses wouldn’t have enough gravity to shape the disc in the way seen in the ALMA image. Planets or no planets, a new explanation for the complex structure of HL Tau’s disc may be needed.

The authors point out one possible way to solve this problem. ALMA is most sensitive to dust grains around a millimeter across, whilst the disc is probably made of a range of particle sizes. Smaller planets may have just enough gravity to move only the millimeter-scale particles into the observed rings, leaving the rest of the disc relatively untouched.

So are the gaps in the ring really caused by planets, or something else that we haven’t thought of yet? The paper ends by charting out the ways that astronomers can explore this system in the future. Longer observations by ALMA could broaden the range of dust sizes seen, allowing a more complete image of the disc structure to be made. And searches for smaller planets could be carried out, although such precise measurements will probably need to wait for the next generation of truly giant telescopes.

### Peter Coles - In the Dark

September’s Baccalaureate

September’s Baccalaureate
A combination is
Of Crickets – Crows – and Retrospects
And a dissembling Breeze

That hints without assuming –
An Innuendo sear
That makes the Heart put up its Fun
And turn Philosopher.

by Emily Dickinson (1830-1886)

### arXiv blog

Moon-Landing Equivalent for Robots: Assembling an IKEA Chair

Robots are poor at many activities that humans find simple. Now roboticists are making progress on a task that exemplifies them all: the automated assembly of an IKEA chair.

Humans have long feared that robots are taking over the world. The truth, however, is more prosaic. It’s certainly the case that robots have revolutionized certain tasks, such as car manufacturing.

### Clifford V. Johnson - Asymptotia

Moon Line

This was a heartening reminder that people still care about what's going on in the sky far above. This is a snap I took of a very long line of people (along the block and then around the corner and then some more) waiting for the shuttle bus to the Griffith Observatory to take part in the moon viewing activities up there tonight. (I took it at about 6:00pm, so I hope they all made it up in time!) The full moon is at close approach, and there was a total lunar eclipse as well. Knowing the people at the Observatory, I imagine they had arranged for lots of telescopes to be out on the lawn in front of the Observatory itself, as well as plenty of people on hand to explain things to curious visitors.

I hope you got to see some of the eclipse! (It is just coming off peak now as I type...)

The post Moon Line appeared first on Asymptotia.

## September 27, 2015

### Peter Coles - In the Dark

The Meaning of Cosmology

I know it’s Sunday, and it’s also sunny, but I’m in the office catching up with my ever-increasing backlog of work, so I hope you’ll forgive me for posting one from the vaults, a rehash of an old piece that dates from 2008.

–o–

When asked what I do for a living, I’ve always avoided describing myself as an astronomer, because most people seem to think that involves star signs and horoscopes. Anyone can tell I’m not an astrologer anyway, because I’m not rich. Astrophysicist sounds more impressive, but perhaps a little scary. That’s why I usually settle for “Cosmologist”. Grandiose, but at the same time somehow cuddly.

I had an inkling that this choice was going to be a mistake at the start of my first ever visit to the United States, which was to attend a conference in memory of the great physicist Yacov Borisovich Zel’dovich, who died in 1987. The meeting was held in Lawrence, Kansas, home of the University of Kansas, in May 1990. This event was notable for many reasons, including the fact that the effective ban on Russian physicists visiting the USA had been lifted after the arrival of glasnost to the Soviet Union. Many prominent scientists from there were going to be attending. I had also been invited to give a talk; the only connection with Zel’dovich that I could figure out was that the very first paper I wrote was cited in the very last paper written by the great man.

I think I flew in to Detroit from London and had to clear customs there in order to transfer to an internal flight to Kansas. On arriving at the customs area in the airport, the guy at the desk peered at my passport and asked me what was the purpose of my visit. I said “I’m attending a Conference”. He eyed me suspiciously and asked me my line of work. “Cosmologist,” I proudly announced. He frowned and asked me to open my bags. He looked in my suitcase, and his frown deepened. He looked at me accusingly and said “Where are your samples?”

I thought about pointing out that there was indeed a sample of the Universe in my bag but that it was way too small to be regarded as representative. Fortunately, I thought better of it. Eventually I realised he thought cosmologist was something to do with cosmetics, and was expecting me to be carrying little bottles of shampoo or make-up to a sales conference or something like that. I explained that I was a scientist, and showed him the poster for the conference I was going to attend. He seemed satisfied. As I gathered up my possessions thinking the formalities were over, he carried on looking through my passport. As I moved off he suddenly spoke again. “Is this your first visit to the States, son?”. My passport had no other entry stamps to the USA in it. “Yes,” I said. He was incredulous. “And you’re going to Kansas?”

This little confrontation turned out to be a forerunner of a more dramatic incident involving the same lexicographical confusion. One evening during the Zel’dovich meeting there was a reception held by the University of Kansas, to which the conference participants, local celebrities (including the famous writer William Burroughs, who lived nearby) and various (small) TV companies were invited. Clearly this meeting was big news for Lawrence. It was all organized by the University of Kansas and there was a charming lady called Eunice who was largely running the show. I got talking to her near the end of the party. As we chatted, the proceedings were clearly winding down and she suggested we go into Kansas City to go dancing. I’ve always been up for a boogie, Lawrence didn’t seem to be offering much in the way of nightlife, and my attempts to talk to William Burroughs were repelled by the bevy of handsome young men who formed his entourage, so off we went in her car.

Before I go on I’ll just point out that Eunice – full name Eunice H. Stallworth – passed away suddenly in 2009. I spent quite a lot of time with her during this and other trips to Lawrence, including a memorable day out at a pow wow at Haskell Indian Nations University where there was some amazing dancing.

Anyway, back to the story. It takes over an hour to drive into Kansas City from Lawrence but we got there safely enough. We went to several fun places and had a good time until well after midnight. We were about to drive back when Eunice suddenly remembered there was another nightclub she had heard of that had just opened. However, she didn’t really know where it was and we spent quite a while looking for it. We ended up on the State Line, a freeway that separates Kansas City, Kansas, from Kansas City, Missouri, the main downtown area of Kansas City actually being for some reason in the state of Missouri. After only a few moments on the freeway a police car appeared behind us with its lights blazing and siren screeching, and ushered us off the road into a kind of parking lot.

Eunice stopped the car and we waited while a young cop got out of his car and approached us. I was surprised to see he was on his own. I had always thought the police went around in pairs, like low comedians. He asked for Eunice’s driver’s license, which she gave him. He then asked for mine. I don’t drive and don’t have a driver’s license, and explained this to the policeman. He found it difficult to comprehend. I then realised I hadn’t brought my passport along, so I had no ID at all.

I forgot to mention that Eunice was black and that her car had Alabama license plates.

I don’t know what particular thing caused this young cop to panic, but he dashed back to his car and got onto his radio to call for backup. Soon, another squad car arrived, drove part way into the entrance of the parking lot and stopped there, presumably so as to block any attempted escape. The doors of the second car opened and two policemen got out, knelt down and aimed pump-action shotguns at us as they hid behind the car doors, which partly shielded them from view and presumably from gunfire. The rookie who had stopped us did the same thing from his car, but he only had a handgun.

“Put your hands on your heads. Get out of the car. Slowly. No sudden movements.” This was just like the movies.

We did as we were told. Eventually we both ended up with our hands on the roof of Eunice’s car being frisked by a large cop sporting an impressive walrus moustache. He reminded me of one of the Village People, although his uniform was not made of leather. I thought it unwise to point out the resemblance to him. Declaring us “clean”, he signalled to the other policemen to put their guns away. They had been covering him as he searched us.

I suddenly realised how terrified I was. It’s not nice having guns pointed at you.

Mr Walrus had found a packet of French cigarettes (Gauloises) in my coat pocket. I clearly looked scared so he handed them to me and suggested I have a smoke. I lit up, and offered him one (which he declined). Meanwhile the first cop was running the details of Eunice’s car through the vehicle check system, clearly thinking it must have been stolen. As he did this, the moustachioed policeman, who was by now very relaxed about the situation, started a conversation which I’ll never forget.

Policeman: “You’re not from around these parts, are you?” (Honestly, that’s exactly what he said.)

Me: “No, I’m from England.”

Policeman: “I see. What are you doing in Kansas?”

Me: “I’m attending a conference, in Lawrence.”

Policeman: “Oh yes? What kind of conference?”

At this point, Mr Walrus nodded and walked slowly to the first car where the much younger cop was still fiddling with the computer.

“Son,” he said, “there’s no need to call for backup when all you got to deal with is a Limey hairdresser…”.

### Tommaso Dorigo - Scientificblogging

One Dollar On 5.3 TeV
This is just a short post to mention one thing I recently learned from a colleague - the ATLAS experiment also seems to have collected a 5.3 TeV dijet event, as CMS recently did (the way the communication took place indicates that this is public information; if it is not, might you ATLAS folks let me know, so that I'll remove this short post?). If any reader here from ATLAS can point me to the event display, I would be grateful. These events are spectacular to look at: the CMS 5 TeV dijet event display was posted here a month ago if you'd like to have a look.

## September 26, 2015

### Jester - Resonaances

Weekend Plot: celebration of a femtobarn
The LHC run-2 has reached the psychologically important point where the amount of integrated luminosity exceeds one inverse femtobarn. To celebrate this event, here is a plot showing the ratio of the number of hypothetical resonances produced so far in run-2 and in run-1 collisions as a function of the resonance mass:
In run-1 at 8 TeV, ATLAS and CMS collected around 20 fb-1. For 13 TeV collisions the amount of data is currently 1/20 of that; however, the cross section for producing hypothetical TeV-scale particles is much larger. For heavy enough particles the gain in cross section exceeds a factor of 20, which means that run-2 now probes previously unexplored parameter space (this simplistic argument ignores the fact that backgrounds are also larger at 13 TeV, but it's approximately correct at very high masses where backgrounds are small). Currently, the turning point is about 2.7 TeV for resonances produced, at the fundamental level, in quark-antiquark collisions, and even below that for those produced in gluon-gluon collisions. The current plan is to continue the physics run till early November which, at this pace, should give us around 3 fb-1 to brood upon during the winter break. This means that the 2015 run will stop just short of sorting out the existence of the 2 TeV di-boson resonance indicated by run-1 data. Unless, of course, the physics run is extended at the expense of heavy-ion collisions scheduled for November ;)

## September 25, 2015

### arXiv blog

How Good Are You at Detecting Digital Forgeries?

Image-based forgery is becoming more common not least because humans seem to be particularly vulnerable even to obvious fakes.

Back in 2010, the Australian public was enthralled by a case of fraud in which the fraudster convinced people of his credentials by producing pictures of himself with Pope John Paul II, Bill Clinton, Bill Gates, and others. In this way, the fraudster raised £7 million from investors who were taken in. The pictures, of course, were forgeries.

### ZapperZ - Physics and Physicists

Why Do We Put Telescopes In Space?
Here's the Minute Physics explanation:

Zz.

## September 24, 2015

### Symmetrybreaking - Fermilab/SLAC

Citizen scientists published

Amateurs and professionals share the credit in the newest publications from the Space Warps project.

When amateur astronomer Julianne Wilcox moved from Petervale, South Africa, to London, trading a star-covered firmament for a light-cluttered sky, she feared that she would no longer be able to indulge in her passion for astronomy.

Then she discovered a new way of doing what she loves: online citizen science projects that engage amateurs like her in the analysis of real astronomical data.

Wilcox is one of 37,000 citizen scientists involved in two papers accepted for publication in the journal Monthly Notices of the Royal Astronomical Society. The papers report the discovery of 29 potential new gravitational lenses—objects such as massive galaxies and galaxy clusters that distort light from faraway galaxies behind them. An additional 30 promising objects may turn out to be lenses, too.

Amateur scientists from all walks of life identified the new objects using Space Warps, a web-based gravitational lens discovery platform. They did so by marking lens-like features in some 430,000 images of the Canada-France-Hawaii Telescope Legacy Survey.

Since gravitational lenses act like cosmic magnifying glasses, they help researchers look at very distant light sources. They also provide information about invisible dark matter, because dark matter affects the way gravitational lenses bend light.

Researchers can now point their telescopes at the newly identified objects and study them in more detail.

“In addition to its immediate scientific output, Space Warps is also a great platform to figure out how to get citizen scientists involved in future large-scale astronomical surveys,” says Phil Marshall, Space Warps principal investigator for the Kavli Institute for Particle Astrophysics and Cosmology, a joint institute of SLAC National Accelerator Laboratory and Stanford University.

The Large Synoptic Survey Telescope, for instance, will begin capturing images of the entire southern night sky in unprecedented detail in the early 2020s. In the process, it’ll generate about 6 million gigabytes of data per year. Researchers hope that the public can help with processing these gigantic streams of information.

Apart from distributing a lot of work among a large number of people, crowdsourcing also appears to be well suited for the analysis of complex data.

“In our experience, humans are doing much better than computer algorithms in identifying faint and complex objects such as gravitational lenses that are not that obvious,” says Anupreeta More, Space Warps principal investigator for the Kavli Institute for the Physics and Mathematics of the Universe in Tokyo. “We can use what we’ve learned about how volunteers identify new objects to develop smarter algorithms.”

Citizen scientists also excel at spotting unexpected things. For example, when asked to look for typically bluish lens-like features in images of another survey, Space Warps users spotted an object with strong red-colored arcs—a gravitational lens bending light from a particularly interesting star-forming galaxy behind it.

“Our users have identified several stunning objects like this,” says Aprajita Verma, Space Warps principal investigator for the University of Oxford. “It shows that citizen scientists are very flexible and understand the larger context of the images they’re shown.”

But crowdsourced science benefits more than just the researchers, says Wilcox, who avidly participates in a variety of astronomy-focused projects.

“Citizen science is a two-way process,” she says. “Getting astronomical objects classified is one aspect, but it also sparks off an interest in research in people without a science background.”

As one of Space Warps’ expert users, Wilcox not only looks for gravitational lenses but also moderates the project’s community discussions and helps further analyze identified objects—contributions that have earned her and her fellow moderators Elisabeth Baeten, Claude Cornen and Christine Macmillan a spot on the author lists of the two Space Warps papers.

“It’s great to be on the papers,” she says. “It really shows the amazing opportunities that are available to citizen scientists.” Wilcox hopes that her example will help get even more volunteers interested in people-powered research.

The sky’s the limit; try it yourself at spacewarps.org, or get involved in the Zooniverse, a citizen science platform currently hosting 33 projects covering various scientific disciplines.

Like what you see? Sign up for a free subscription to symmetry!

### ATLAS Experiment

Top 2015 – Mass, Momentum, and the Conga

The top quark conference normally follows the same basic structure. The first few days are devoted to reports on the general status of the field and inclusive measurements; non-objectionable stuff that doesn’t cause controversy. The final few days are given over to more focused analyses; the sort of results that professors really enjoy arguing about. We got a taste of this earlier than usual this year as discussion on top transverse momenta (pT) broke out at least three times before we even managed to get to the session on Thursday! As a postdoc, I do love this sort of debate at a workshop, almost as much as I enjoy watching the students arrive at 9am, desperately hungover and probably assuming they were quiet as they crept back into the hotel at 3am (no Joffrey, we definitely didn’t hear you knock over that sun lounger).

The CMS combination of measurements of top-quark mass, currently the most sensitive in the world.

DAY 3:

Top Mass is always a great topic at this conference. This year the theorists started by reminding us, for what feels like the millionth time, of the difference between various interpretations of “mass” in perturbative QCD, telling us which are well-defined and safe to use. The LHC and Tevatron experiments then showed staggeringly precise measurements using our ill-defined definition of “Monte Carlo mass” that theorists have been complaining about for decades. This year we’ve really outdone ourselves and CMS have combined their results to produce a measurement with an uncertainty of less than 0.5 GeV! Fine, we’re not sure ‘exactly’ what the Monte Carlo mass really is theoretically, but we did also provide well-interpreted pole-mass results (at the cost of having larger uncertainties), so let’s hope that’s enough to keep the theorists happy.

CONFERENCE DINNER:

While it cannot yet be said that starting a conga line qualifies as a tradition at the Top conference, it does seem to occur with increasing frequency. I have my own theories about how and why this occurs (and evidence of a certain ATLAS top convenor who seems to be close to the front of the line each time it happens…) and I find that there are few things as surreal as your bosses and ex-bosses dancing around in a semi-orderly line with their hands on your hips screaming “go faster” in your ear. Though this has little to nothing to do with top physics, I enjoy mentioning it.

Predictions at leading order (LO), next-to leading order (NLO), and next-to-next-to leading order (NNLO) of the top quark transverse momentum.

DAY 4:

Once upon a time, ATLAS and CMS measured the top quark’s pT distribution in data. At first, ATLAS and CMS simulations appeared to disagree with each other, and neither agreed well with the observed data. Though most of the differences between ATLAS and CMS were eventually explained (…sort of) the data itself remained stubbornly different from the simulation. Czakon et al. and their STRIPPER program to the rescue! David Haymes presented a differential top pT distribution at full next-to-next-to leading order (NNLO), calculated using STRIPPER, that agrees nicely with all of the data, proving that next-to-leading-order doesn’t go nearly far enough when it comes to the top quark.

You’ll notice that I didn’t explain what STRIPPER actually is. In short, it is a combination of an NNLO computational algorithm, capable of providing predictions of the top quark’s kinematics, and a touch of theorist humour, in the form of an extremely contrived acronym. One can only hope that STRIPPER is meant to describe the stripping away of the complexities of NNLO calculations, but I suspect that would be generous to the point of naivety. At least the speaker wasn’t wearing a horrendous anime shirt. The result itself, however, is very impressive and desperately needed in order to understand the LHC data.

DAY 5:

Well, it’s been a very successful conference. We’ve seen the first 13 TeV results, some of the most precise results to come out of LHC Run1, and even a few Tevatron highlights! Next year we’ll be near Prague, in keeping with the tradition of the conference being held in places famous for either alcohol or beaches. See you in the conga line!

 James Howarth is a postdoctoral research fellow at DESY, working on top quark cross-sections and properties for ATLAS. He joined the ATLAS experiment in 2009 as a PhD student with the University of Manchester, before moving to DESY, Hamburg in 2013. In his spare time he enjoys drinking, arguing, and generally being difficult.

### Lubos Motl - string vacua and pheno

Naturalness is fuzzy, subjective, model-dependent, and uncertain, too

In an ordinary non-supersymmetric model of particle physics such as the Standard Model, the masses of (especially) scalar particles are "unprotected" which is why they "love" to be corrected by pretty much any corrections that offer their services.

For example, if you interpret the Standard Model as an effective theory approximating a better but non-supersymmetric theory that works up to the GUT scale or Planck scale, fifteen orders of magnitude above the Higgs mass, there will be assorted loop diagrams that contribute to the observable mass of the Higgs boson, $m_h^2 = \dots + 3.5\,m_{Pl}^2 - 2.7\,m_{Pl}^2 + 1.9\,m_{GUT}^2 - \dots,$ and when you add the terms up, you had better obtain the observed value $m_h^2 = [(125.1\pm 0.3)\GeV]^2$ or so. It seems that we have been insanely lucky to get this small result. Note that the lightness of all other known massive elementary particles is derived from the lightness of the Higgs. Terms that were $$10^{30}$$ times larger than the final observed Higgs mass came with both signs and (almost) cancelled each other to a huge relative accuracy.

A curious scientist simply has to ask: Why? Why does he have to ask? Because the high-energy parameters that the individual terms depended upon had to be carefully adjusted, or fine-tuned, to obtain the very tiny final result. This means that the "qualitative hypothesis", the Standard Model (or its completion relevant near the GUT scale or the Planck scale) with arbitrary values of the parameters only predicts the outcome qualitatively similar to the observed one – a world with a light Higgs – with a very low probability.

If you assume that the probabilistic distribution on the parameter space is "reasonably quasi-uniform" in some sense, most of the points of the parameter space predict totally wrong outcomes. So the conditional probability $$P(LHC|SM)$$ where LHC indicates the masses approximately observed at the LHC and SM is the Standard Model with arbitrary parameters distributed according to some sensible, quasi-uniform distribution, is tiny, perhaps of order $$10^{-30}$$, because only a very tiny portion of the parameter space gives reasonable results.

By Bayes' theorem, we may also argue that $$P(SM|LHC)$$, the probability of the Standard Model given the qualitative observations at the LHC, is extremely tiny, perhaps $$10^{-30}$$, as well. Because the probability of the Standard Model is so small, we may say that in some sense, the Standard Model – with the extra statistical assumptions above – has been falsified. It's falsified just like any theory that predicts that an actually observed effect should be extremely unlikely. For example, a theory claiming that there is no Sun – the dot on the sky is just composed of photons that randomly arrive from that direction – becomes increasingly indefensible as you see additional photons coming from the same direction. ;-)

Before you throw the Lagrangian of the Standard Model to the trash bin, you should realize that what is actually wrong isn't the Lagrangian of the Standard Model, an effective field theory, itself. What's wrong are the statistical assumptions about the values of the parameters. Aside from the Standard Model Lagrangian, there exist additional laws in a more complete theory that actually guarantee that the value of the parameters is such that the terms contributing to the squared Higgs mass simply have to cancel each other almost exactly.

Supersymmetry is the main system of ideas that is able to achieve such a thing. The contributions of a particle, like the top quark, and its superpartner, the stop, to $$m_h^2$$ are exactly the same, up to the sign, so they cancel. More precisely, this cancellation holds for unbroken supersymmetry, in which the top and stop are equally heavy. We know this not to be the case. The top and the stop have different masses – or at least, we know this for other particle species than the stop.

But even when the top and the stop have different masses and supersymmetry is spontaneously broken, it makes the fine-tuning problem much less severe. You may clump the contributions from the particles with the contributions from their superpartners into "packages". And these "couples" almost exactly cancel, up to terms comparable to the "superpartner scale". This may be around $$(1\TeV)^2$$, about 100 times higher than $$m_h^2$$. So as long as the superpartner masses are close enough to the Higgs mass, the fine-tuning problem of the Standard Model becomes much less severe once supersymmetry is added.

Realistically, $$m_h^2 \approx (125\GeV)^2$$ is obtained from the sum of terms that are about 100 times higher than the final result. About 99% of the largest term is cancelled by the remaining terms and 1% of it survives. Such a cancellation may still be viewed as "somewhat unlikely" but should you lose sleep over it? Or should you discard the supersymmetric model? I don't think so. You have still improved the problem with the probability from $$10^{-30}$$ in the non-supersymmetric model to something like $$10^{-2}$$ here. After all, supersymmetry is not the last insight about Nature that we will make and the following insights may reduce the degree of fine-tuning so that the number $$10^{-2}$$ will be raised to something even closer to one. When and if the complete theory of everything is understood and all the parameters are calculated, the probability that the right theory predicts the observed values of the parameters will reach 100%, of course.
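The arithmetic of the two scenarios can be sketched numerically. The coefficients below are invented purely to illustrate the scales (nothing here is a real loop computation), and integer arithmetic is used so the huge cancellation is not spoiled by floating-point rounding:

```python
# Quantify fine-tuning as the ratio of the largest individual contribution
# to the final sum it has to reproduce.

def fine_tuning(terms):
    """max |term| divided by |sum of terms|."""
    return max(abs(t) for t in terms) / abs(sum(terms))

m_h2 = 125**2                        # observed m_h^2 in GeV^2, roughly

# Non-SUSY: individual contributions of order the Planck scale squared
# (made-up integer coefficients, exact cancellation by construction)
m_pl2 = 15 * 10**37                  # ~ (1.2e19 GeV)^2
sm_terms = [4 * m_pl2, -2 * m_pl2, -2 * m_pl2 + m_h2]
print(f"SM-like:   Delta ~ {fine_tuning(sm_terms):.0e}")

# SUSY: top/stop "packages" already cancel down to the ~TeV superpartner scale
susy_terms = [10**6, -10**6 + m_h2]  # (1 TeV)^2 packages, in GeV^2
print(f"SUSY-like: Delta ~ {fine_tuning(susy_terms):.0f}")
```

The same measure drops from a Planck-scale-sized number to order $$10^2$$ once the large pieces come pre-paired, which is the whole point of the paragraph above.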

I think that the text above makes it pretty clear that the "naturalness", the absence of unexplained excessively accurate cancellations, has some logic behind it. But at the same moment, it is a heuristic rule that depends on many ill-defined and fuzzy words such as "sensible", "tolerable", and many others. When you try to quantify the degree of fine-tuning, you may write the expressions in many different ways. They will yield slightly different results.

At the end, all these measures that quantify "how unnatural" a model is may turn out to be completely wrong, just like the non-supersymmetric result $$10^{-30}$$ was shown to be wrong once SUSY was added. When you add new principles or extract the effective field theory from a more constrained ultraviolet starting point, you are effectively choosing an extremely special subclass of the effective field theories that were possible before you embraced the new principle (such as SUSY, but it may also be grand unification, various integer-valued relationships that may follow from string theory and tons of related things).

So one can simply never assume that a calculation of the "degree of naturalness" is the final answer that may falsify a theory. It's fuzzy because none of the expressions is clearly better than the others. It's subjective because people will disagree about what "feels good". And it's model-dependent because qualitatively new models produce totally different probability distributions on the parameter spaces.

Moreover, the naturalness as a principle – even when we admit it is fuzzy, subjective, and model-dependent – is still uncertain. It may actually contradict principles such as the Weak Gravity Conjecture – which is arguably supported by a more nontrivial, non-prejudiced body of evidence. And the smallness of the Higgs boson mass or the cosmological constant may be viewed as indications that something is wrong with the naturalness assumption, if not disproofs of it.

Today, rather well-known model builders Baer, Barger, and Savoy published a preprint that totally and entirely disagrees with the basic lore I wrote above. The paper is called
Upper bounds on sparticle masses from naturalness or how to disprove weak scale supersymmetry
They say that the "principle of naturalness" isn't fuzzy, subjective, model-dependent, and uncertain. Instead, it is objective, model-independent, demanding clear values of the bounds, and predictive. Wow. I honestly spent some time reading more or less the whole paper. Can there be some actual evidence supporting these self-evidently wrong claims?

Unfortunately, what I got from the paper was just laughter, not insights. These folks can't possibly be serious!

They want to conclude that a measure of fine-tuning $$\Delta$$ has to obey $$\Delta\lt 30$$ and derive upper limits on the SUSY Higgs mixing parameter $$\mu$$ (it shall be below $$350\GeV$$) or the gluino mass (they raise the limit to $$4\TeV$$). But what are the exact equations or inequalities from which they deduce such conclusions and, especially, what is their evidence in favor of these inequalities?

I was searching hard and the only thing I found was a comment that some figure is "visually striking" (meaning that it wants you to conclude that someone had to fudge it). Are you serious? Will particle physicists calculate particle masses by measuring how twisted their stomachs become when they look at a picture?

The "visually striking" pictures obviously show columns that are not of the same order. But does it mean that the model is wrong? When stocks fell by almost 5% a day, the graphs of the stock prices were surely "visually striking", and many people couldn't believe it; many of them couldn't sleep, either. But the drop was real. And it wasn't the only one. Moreover, it's obvious that different investors have different tolerance levels. In the same way, particle physicists have different tolerance levels when it comes to the degree of acceptable fine-tuning.

They want to impose $$\Delta\lt 30$$ on everyone, as a "principle", but it's silly. No finite number on the right hand side would define a good "robust law of physics" because there are no numbers that are so high that they couldn't occur naturally. :-) But if you want me to become sure about the falsification of a model – with some ideas about the distribution of parameters – you would need $$\Delta\geq 10^6$$ for me to feel the same "certainty" as I feel when a particle is discovered at 5 sigma, or $$10^3$$ to feel the 3-sigma-like certainty.
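The translation between $$\Delta$$ and "sigmas" can be made concrete with the standard library. Treating $$1/\Delta$$ as a two-sided Gaussian p-value is my own rough reading of the analogy, not a formula from the paper:

```python
from statistics import NormalDist

def delta_to_sigma(delta):
    """Gaussian z-score whose two-sided tail probability equals 1/delta."""
    p = 1.0 / delta
    return NormalDist().inv_cdf(1.0 - p / 2.0)

for delta in (30, 10**3, 10**6):
    print(f"Delta = {delta:>7}  ->  {delta_to_sigma(delta):.1f} sigma")
```

Running this gives roughly 2 sigma for $$\Delta = 30$$, 3.3 sigma for $$\Delta = 10^3$$, and about 5 sigma for $$\Delta = 10^6$$, matching the correspondence claimed above.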

Their proposal to view $$\Delta\gt 30$$ as a nearly rigorous argument falsifying a theory is totally equivalent to using 2-sigma bumps to claim discoveries. The isomorphism is self-evident. When you add many terms of both signs, the probability that the absolute value of the result is less than 3 percent of the absolute value of the largest term is comparable to 3 percent. It's close to 5 percent, the probability that you get a 2-sigma or greater bump by chance!

And be sure that because we have only measured one Higgs mass, it's one bump. To say that $$m_h^2$$ isn't allowed to be more than 30 times smaller than the absolute value of the largest contribution is just like saying a 2-sigma bump is enough to settle any big Yes/No question in physics. Even more precisely, it's like saying that there won't ever be more than 2-sigma deviations from a correct theory. Sorry, it's simply not true. You may prefer a world in which the naturalness could be used to make similarly sharp conclusions and falsify theories. But it is not our world. In our world with the real laws of mathematics, this is simply not possible. Even $$\Delta\approx 300$$ is as possible as the emergence of a 3-sigma bump anywhere by chance. Such things simply may happen.
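The claimed correspondence is easy to check by brute force. This is a quick Monte Carlo sketch with an arbitrary toy distribution for the terms (uniform and symmetric about zero), so only the rough $$1/\Delta$$ scaling should be trusted, not the O(1) prefactor:

```python
import random

def cancellation_prob(n_terms=10, delta=30, trials=100_000, seed=1):
    """Estimate P(|sum of terms| < max|term| / delta) for random symmetric terms."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        terms = [rng.uniform(-1, 1) for _ in range(n_terms)]
        if abs(sum(terms)) < max(abs(t) for t in terms) / delta:
            hits += 1
    return hits / trials

for delta in (30, 300):
    p = cancellation_prob(delta=delta)
    print(f"Delta = {delta}: P(cancellation) ~ {p:.4f}  (compare 1/Delta = {1/delta:.4f})")
```

Tightening the required cancellation by a factor of ten makes it roughly ten times less probable, which is exactly the sense in which $$\Delta \approx 300$$ is "as possible as" a 3-sigma bump.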

Obviously, once we start to embrace the anthropic reasoning or multiverse bias, much higher values of $$\Delta$$ may become totally tolerable. I don't want to slide into the anthropic wars here, however. My point is that even if we reject all forms of anthropic reasoning, much higher values of $$\Delta$$ than thirty may be OK.

But what I also find incredible is their degree of worshiping of random, arbitrary formulae. For example, Barbieri and Giudice introduced this naturalness measure: $\Delta_{BG}=\max_i \left| \frac{ \partial\log m_Z^2 }{ \partial \log p_i } \right|.$ You calculate the squared Z-boson mass out of many parameters $$p_i$$ of the theory at the high scale. The mass depends on each of them, you measure the slope of the dependence in the logarithmic fashion, and pick the parameter which leads to the steepest dependence. This steepest slope is then interpreted as the degree of fine-tuning.
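As a concrete illustration, the BG measure can be evaluated numerically; the toy model below (two high-scale parameters nearly cancelling against each other) and the finite-difference implementation are both my own hypothetical sketch, not anything from the papers under discussion:

```python
import math

def delta_bg(m_z2, params, eps=1e-6):
    """Barbieri-Giudice measure: max_i |d log m_Z^2 / d log p_i|,
    estimated by symmetric finite differences in log space."""
    slopes = []
    for i, p in enumerate(params):
        up, down = list(params), list(params)
        up[i] = p * (1 + eps)
        down[i] = p * (1 - eps)
        dlog_m = math.log(m_z2(up)) - math.log(m_z2(down))
        dlog_p = math.log1p(eps) - math.log1p(-eps)
        slopes.append(abs(dlog_m / dlog_p))
    return max(slopes)

# Toy model: m_Z^2 = p1 - p2, two large parameters almost cancelling (GeV^2)
toy_m_z2 = lambda p: p[0] - p[1]
print(delta_bg(toy_m_z2, [1.00e6, 0.99e6]))   # ~ 100: a "1% tuned" model point
```

A 1% cancellation between the two parameters comes out as $$\Delta_{BG} \approx 100$$, as expected from the definition.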

Now, this BG formula – built from previous observations by Ellis and others – is more quantitative than the "almost purely aesthetic" appraisals of naturalness and fine-tuning that dominated the literature before that paper. But while this expression may be said to be well-defined, it's extremely arbitrary. The degree of arbitrariness almost exactly mimics the degree of vagueness that existed before. So even though we have a formula, it's not real quantitative progress.

When I say that the formula is arbitrary, I have dozens of particular complaints in mind. And there surely exist hundreds of other complaints you could invent.

First, for example, the formula depends on a particular parameterization of the parameter space in terms of $$p_i$$. Any coordinate redefinition – a diffeomorphism on the parameter space – should be allowed, shouldn't it? But a coordinate transformation will yield a different value of $$\Delta$$. Why did you parameterize the space in one way and not another way? Even if you banned general diffeomorphisms, there surely exist transformations that are more innocent, right? Like the replacement of $$p_i$$ by their linear combinations. Or products and ratios, and so on.

Second, and it is related, why are there really the logarithms? Shouldn't the expression depend on the parameters themselves, if they are small? Shouldn't one define a natural metric on the parameter space and use this metric to define the naturalness measure?

Third, why are we picking the maximum? $\max_i K_i$ may be fine for picking a representative value of many values $$K_i$$. But we may also pick $\sum_i |K_i|$ or, perhaps more naturally, $\sqrt{ \sum_i |K_i|^2 }.$ For a "rough understanding", such changes usually don't change the picture dramatically. But if you wanted exact bounds, it's clearly important which of those expressions is picked and why. You would need some evidence that favors one formula or another. There is no theoretical evidence and there is no empirical evidence, either.
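To see that the choice matters in practice, here is a throwaway comparison of the three aggregations on one made-up set of sensitivities $$K_i$$; the numbers mean nothing, only the disagreement between the verdicts does:

```python
import math

def agg_max(K):  return max(abs(k) for k in K)     # the BG choice
def agg_sum(K):  return sum(abs(k) for k in K)
def agg_quad(K): return math.sqrt(sum(k * k for k in K))

K = [25.0, -18.0, 7.0, 3.0]   # hypothetical per-parameter sensitivities
for name, agg in (("max", agg_max), ("sum", agg_sum), ("quadrature", agg_quad)):
    delta = agg(K)
    verdict = "passes" if delta < 30 else "fails"
    print(f"{name:>10}: Delta = {delta:5.1f} -> {verdict} a Delta < 30 cut")
```

The very same model point passes the $$\Delta\lt 30$$ cut under one aggregation and fails it under the other two, so any sharp bound inherits the arbitrariness of the formula.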

The bounds defined by the three alternative expressions above may be called a "cube", a "diamond", and a "ball". The corresponding limits on the superpartner masses may have similar shapes. The authors end up claiming things like $$\mu\lt 350\GeV$$ out of random assumptions of the form $$\Delta\lt 30$$ – even though the $$\Delta$$ could be replaced by a different formula and $$30$$ could be replaced by a different number. Why is their starting point better than the "product"? Why don't they directly postulate an inequality on $$\mu$$?

Fourth, shouldn't the formula for the naturalness measure get an explicit dependence on the number of parameters? If your theory has many soft parameters, you may view it as an unnatural theory regardless of the degree of fine-tuning in each parameter because it becomes easier to make things look natural if there are many moving parts that may conspire (or because there are many theories with many parameters which should make you reduce their priors). However, you could also present the arguments going exactly in the opposite direction. When there are many parameters $$p_i$$ leading to many slopes $$K_i$$, it's statistically guaranteed that at least one of them will turn out to be large, by chance, right? So perhaps, for a large number of parameters, you should tolerate higher values of $$\Delta$$, too.

One can invent infinitely many arguments and counter-arguments that will elevate or reduce the tolerable values of $$\Delta$$ for one class of theory differently than for another class of theories, arguments and counter-arguments that may take not just the number of parameters but any qualitative (and quantitative) property of the class of theories into account! The uncertainty and flexibility has virtually no limits, and for a simple reason: we are basically talking about the "right priors" for all hypotheses in physics. Well, quite generally in Bayesian inference, there can't be any universally "right priors". Priors are unavoidably subjective and fuzzy. Only the posterior or "final" answers or probabilities (close to zero or one) after a sufficient body of relevant scientific evidence is collected may be objective.

There are lots of technical complications like that. And even if you forgot about those and treated the formula $$\Delta_{BG}$$ as a canonical gift from the Heaven, which it's obviously not, what should be the upper bound that you find acceptable? Arguing whether it's $$\Delta\lt 20$$ or $$\Delta \lt 300$$ is exactly equivalent to arguments whether 2-sigma or 3-sigma bumps are enough to settle a qualitative question about the validity of a theory. You know, none of them is enough. But even if you ask "which of them is a really strong hint", the answer can't be sharp. The bound is unavoidably fuzzy.

They also discuss a similar naturalness measure: $m_Z^2 = \sum_i K_i m_Z^2, \quad \Delta_{EW}=\max_i |K_i|.$ You write the squared Z-boson mass as the sum of many terms. I wrote them as $$K_i$$ times $$m_Z^2$$ so that the sum $$\sum_i K_i = 1$$, and the degree of naturalness is the greatest among the absolute values $$|K_i|$$. If there is a cancellation of large numbers, the theory is said to be highly EW-fine-tuned.

Again, when you write the squared Z-boson mass according to a particular template, the naturalness measure above becomes completely well-defined. But this well-definedness doesn't help you at all to answer the question Why. Why is it exactly this formula and not another one? Why are you told to group the terms in one way and not another way?

The grouping of terms is an extremely subtle thing. An important fact about physics is that only the total result for the mass, or a scattering amplitude, is physically meaningful. The way you calculate it – how you organize the calculation or group the contributions – is clearly unphysical. There are many ways and none of them is "more correct" than all the others.

At the beginning, I told you that the contributions of the top and the stop to the squared Higgs mass may be huge but when you combine them into the top-stop contribution, this contribution is much smaller than the GUT or Planck scale: it is comparable to the much lower superpartner scale. So the "impression about fine-tuning" clearly depends on how you write the thing which is unphysical.

There are lots of numbers in physics that are much smaller than the most naive order-of-magnitude estimates. The cosmological constant is the top example and its tiny size continues to be largely unexplained. The small Higgs boson mass would be unexplained without SUSY etc. But there are many more mundane examples. In those cases, there is no contradiction because we can explain why the numbers are surprisingly small.

The neutron's decay rate is very low – the lifetime is anomalously long, some 15 minutes, vastly longer than the lifetimes of other particles decaying by a similar beta-decay. It's because the phase space of the 3 final products to which the neutron decays is tiny. It's because the neutron mass is so close to the sum of the proton mass and the electron mass (and the neutrino mass, if you don't neglect it). The suppression is by a power law.

But take an even more mundane example: the strongest spectral line emitted by the Hydrogen atom, the line between $$n=1$$ and $$n=2$$. Its energy is $$13.6\eV\cdot(1-1/4)$$, some ten electronvolts. You could say that it is insanely fine-tuned because it's obtained as the difference between two energies/masses of the Hydrogen atom in two different states. The Hydrogen atom's mass is almost $$1\GeV$$, mostly composed of the proton mass.

Why does the photon energy end up being $$10\eV$$, 100 million times lower than the latent energy of the Hydrogen atom? Well, we have lots of mundane explanations. First, we know the two masses are almost the same because the proton is doing almost nothing during the transition. (This argument is totally analogous to the aforementioned claim in SUSY that the top and the stop may be assumed to be similar.) The complicated motion in the Hydrogen atom is only due to a part of the atom, the electron, whose rest mass is just $$511\keV$$, almost 2,000 times lighter than the proton. This rest mass of the electron is still 50,000 times larger than the energy of the photon. Why?

Well, it's because the binding energy of the electron in the hydrogen atom is comparable to the kinetic energy. And the kinetic energy is much lower than the rest mass because the speed of the electron is much smaller than the speed of light. It's basically the fine-structure constant times the speed of light, as one may derive easily.
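
The whole chain of small ratios can be checked with back-of-the-envelope numbers; a sketch using standard textbook values, claiming nothing beyond order of magnitude:

```python
alpha = 1 / 137.036          # fine-structure constant
m_e_eV = 511e3               # electron rest energy, 511 keV
m_H_eV = 0.938e9 + m_e_eV    # hydrogen atom: mostly the proton mass

# The n=1 to n=2 line: difference of two Rydberg-series levels
E_photon = 13.6 * (1 - 1/4)  # about 10.2 eV

print(m_H_eV / E_photon)     # ~10^8: atom's latent energy vs photon energy
print(m_e_eV / E_photon)     # ~5e4: electron rest mass vs photon energy
print(alpha)                 # v/c of the electron, ~1/137

# The binding scale itself is (1/2) * alpha^2 * m_e c^2 -- slow motion
# (v/c ~ alpha) squared times the electron rest energy:
print(0.5 * alpha**2 * m_e_eV)  # ~13.6 eV, the Rydberg
```

The last line is the quantitative content of "the kinetic energy is much lower than the rest mass because the speed is much smaller than the speed of light": two powers of $$\alpha$$ eat five orders of magnitude.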

Now, why is the fine-structure constant, the dimensionless strength of electromagnetism, so small? It is $$1/137.036$$ or so. Well, it is just what it is. We may find excuses as well as formulae deriving the value from constants considered more fundamental these days. First, one could argue that $$4\pi / 137$$ and not $$1/137$$ is the more natural constant to consider. So a part of the smallness of $$1/137$$ comes from the factor of $$1/4\pi$$, about one twelfth, that it implicitly contains.

The remaining constant may be derived from the electroweak theory. The fine-structure constant ends up smaller than you could expect because 1) its smallness depends on the smallness of two electroweak coupling constants, 2) it's mostly a $$U(1)$$ coupling and such couplings get weaker at lower energies. So a decent value of the coupling at the GUT scale simply produces a rather small value of the fine-structure constant at low energies (below the electron mass).

We don't say that the fine-structure constant is unnaturally small because the GUT-like theories – or, better, the stringy vacua – that we have in mind, which may produce electromagnetism along with predictions for its parameters, can produce values like $$1/137$$ easily. But before we knew these calculations, we could have considered the smallness of the fine-structure constant to be a fine-tuning problem.

My broader point is that there are ways to explain the surprise away. More objectively, we can derive the energy of the photon emitted by the Hydrogen atom from a more complete theory, the Standard Model or a GUT theory, and a part of the surprise about the smallness of the photon energy goes away. We would still need some explanation why the electron Yukawa coupling is so tiny and why the electron mass ends up being beneath the proton mass, and lots of other things. But there will always be a part of the explanation (of the low photon energy) of the kind "bound states where objects move much slower than the speed of light produce small changes of the energy in the spectrum", and similar things. And there will be wisdoms such as "it's normal to get bound states with low speeds, relative to the speed of light, because couplings often want to be orders of magnitude lower than the maximum values".

The attempts to sell naturalness as some strict, sharp, and objective law are nothing else than the denial of similar explanations in physics – in the future physics but maybe even in the past and established physics. Every explanation like that – a deeper theory from which we derive the current approximate theories, but even a method to organize the concepts and find previously overlooked patterns – changes the game. It changes our perception of what is natural. To say that one already has the right and final formulae deciding how natural – or how right – a theory is, is equivalent to saying that we won't learn anything important in physics in the future, and I think that it's obviously ludicrous.

The naturalness reasoning is a special example of Bayesian inference applied to probability distributions on the parameter spaces. So we need to emphasize that the conclusions depend on the Bayesian probability distributions. But a defining feature of Bayesian probabilities is that they change – or should be changed, by Bayes' theorem – whenever we get new evidence. It follows that in the future, after new papers, our perception of the naturalness of one model or another will unavoidably change, and attempts to codify a "current" formula for the naturalness are attempts to sell self-evidently incomplete knowledge as the complete one. More complete theories will tell us more about the values of parameters in the current approximate theories – and they will be able to say whether our probability distributions on the parameter spaces were successful bookmaker's guesses. The answer may be Yes, No, or something in between. In some cases, the guess will be right. In others, it will be wrong but it will look like the bookmaker's bad luck. But there will also be cases in which the bookmakers will be seen to have missed something – making it obvious that in general, the bookmakers' odds are something other than the actual results of the matches! It's the true results of the matches, and not some bookmakers' guesses at some point, that define the truth that physics wants to find.

At the end, I believe that physicists such as the authors of the paper I criticized above are motivated by some kind of "falsifiability wishful thinking". They would like it if physics became an Olympic discipline where you may organize a straightforward race and declare the winners and losers. Pay \$10 billion for the LHC and it will tell you whether SUSY is relevant for the weak scale physics. But physics is not an Olympic discipline. Physics is the search for the laws of Nature. It is the search for the truth. And a part of the truth is that there are no extremely simple and universal solutions to problems or methods to answer difficult questions.

If a model can describe the observed physics – plus some bumps – with $$\Delta=10$$ while a similar competitor needs $$\Delta=1,000$$, I may prefer the former even though the value of $$\Delta$$ obviously won't be the only thing that matters.

But when you compare two extremely different theories or classes of theories – and supersymmetric vs non-supersymmetric models are a rather extreme example – it becomes virtually impossible to define a "calibration" of the naturalness measure that would be good for both different beasts. The more qualitatively the two theories or classes of theories differ, the more different their prior probabilities may be, and the larger is the possible multiplicative factor that you have to add to $$\Delta$$ of one theory to make the two values of $$\Delta$$ comparable.

And suggesting that people should embrace things like $$\Delta\lt 30$$ with some particular definition of $$\Delta$$ is just utterly ludicrous. It's a completely arbitrary bureaucratic restriction that someone extracted out of thin air. No scientist can take it seriously because there is zero evidence that there should be something right about such a particular choice.

If the LHC doesn't find supersymmetry during the $$13\TeV$$ run or the $$14\TeV$$ run, it won't mean that SUSY can't be hiding around the corner, accessible only to a $$42\TeV$$ or $$100\TeV$$ collider. It's spectacularly obvious that no trustworthy argument implying such a thing may exist. If nothing qualitatively changes about the available theoretical paradigms, I would even say that most of the sensible phenomenologists will keep on saying that SUSY is most likely around the corner, despite the fact that similar predictions will have failed.

At least the phenomenologists who tend to pay attention to naturalness will say so. By my assumptions, SUSY will remain the most natural explanation of the weak scale on the market. At the same moment, the naturalness-avoiding research – I also mean the anthropic considerations but not only anthropic considerations – will or should strengthen. But up to abrupt flukes, all these developments will be gradual. It can't be otherwise. When experiments aren't creating radical, game-changing new data, there can't be any game-changing abrupt shift in the theoretical opinions, either.

## September 23, 2015

### astrobites - astro-ph reader's digest

When Stars Align

Title: Lens Masses and Distances from Microlens Parallax and Flux
Authors: J. C. Yee
First Author’s Institution: Harvard-Smithsonian Center for Astrophysics, Cambridge, MA
Status: Submitted to The Astrophysical Journal Letters

When stars were divine, and their journeys across the heavens foretold the events to unfold on our humble terrestrial sphere, the alignments of stars were studiously marked.  They signaled the rise of new dynasties, a lucky windfall, ill-fated love.  With the passage of a few millennia (and the realization that the wandering stars were planets), our modern sensibilities have been honed to instinctually interpret the apparent crossing of stellar paths as just a happy but natural coincidence with no deeper significance.  But perhaps mistakenly so.  For when the stars align, you just might catch a glimpse of things otherwise invisible:  binary stars in wide orbits, isolated black hole hermits, or the abandoned (or unruly) free-floating planet far from the star around which it was born.

These invisible wanderers are quite literally brought to light through a unique sequence of events that occurs when two celestial bodies align.  The force that orchestrates the event?  Gravity.  Black holes have gotten much fame for their gravitational brawn, which grants them the abilities to warp space and time and to bend light.  Such powers, however, are actually not limited to black holes alone—they’re bestowed on anything with mass.  Your everyday celestial body—say, a star or planet—can do precisely the same.  The share of the limelight that black holes have been given in this regard is fairly earned, however, as the ease with which you can see the distortions caused by a massive object scales with its mass.  Such objects can act as a lens by virtue of their spheroidal shapes, focusing the stray light beams passing from behind into a distorted image.  This process is known as gravitational lensing.

For objects as small as stars, you can’t see these images.  The images would be tiny—about a million times smaller than the angular diameter of the Moon—hence the name of this class of gravitational lensing events, microlensing.  But these minuscule images can have a disproportionately large detectable effect.  When a star crosses paths with another, you’d observe the background star to brighten drastically—as much as a factor of 1000!—then dim.  The two stars don’t have to pass directly in front of each other, but the closer they do, the more the light from the star behind (the “source” star) will be focused. Maximal brightening occurs when the source and the lens are at exactly the same position, a special point called the caustic. Over weeks to months, a distant observer would note a single brightening, then dimming of the star.  If the lensed star had any companions—a fairly likely scenario, as stars often come in pairs, and most (if not all!) are believed to have planets—its caustic can morph from a point into a series of closed curves (see Figure 1). If the source star approached or crossed these curves, we’d observe additional brief spikes in light. Depending on how the mass of the companion compares to the lensed star, these spikes can be as short as a day (for low mass companions such as planets)—a real challenge for planet hunters searching for microlensed systems.
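
For a single (point) lens, the brightening follows the standard point-lens magnification formula; a quick sketch, where u is the source-lens separation in units of the angular Einstein radius:

```python
from math import sqrt

def magnification(u):
    """Point-lens microlensing magnification for a source at separation u
    from the lens, in units of the angular Einstein radius."""
    return (u**2 + 2) / (u * sqrt(u**2 + 4))

# Far apart: essentially no brightening. Very close: huge spikes --
# for small u, the magnification grows like 1/u.
print(magnification(3.0))     # barely above 1
print(magnification(0.1))     # roughly 10x brighter
print(magnification(0.001))   # roughly 1000x -- the drastic brightening
```

The formal divergence as u goes to 0 is the point caustic mentioned above; real source stars have finite sizes, which caps the peak.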

Figure 1. The geometry of a microlensing event.  The path of the background star (the “source” of light) is shown by the thin gray curve; the arrow shows the direction it moved along this line.  The foreground lensing object is a binary system, likely a brown dwarf (M1) with a planet (M2).  The background is shaded in different shades of gray to show how much the binary could cause the background star to brighten (see Figure 2 for what was observed).  The dark black curves denote the “caustics” of the binary lens: when the background star crosses a caustic, it would momentarily become infinitely bright if it were a point (which is unrealistic—we know stars have finite sizes!).  Figure from Han et al. 2013.

Figure 2. A microlensing lightcurve. The lightcurve (brightness over time, here in days) observed for the system shown in Figure 1. The two peaks occur when the background star crosses the caustic (which, as you can see from Figure 1, occurs twice). Figure from Han et al. 2013.

Microlensing events, however, lack one piece of information prized by astronomers—masses.  The lens mass affects how long a microlensing event lasts.  The duration of a microlensing event is easy to measure, but it also depends on three other things: how far away the lens is, how far the source is, and how fast they’re moving relative to each other.  Thus in order to derive the mass of the lens, we need to determine the other three.  The distance of the source is easy—typical sources are in the Galactic bulge, the concentration of stars at the center of our galaxy, which is a well known distance away (about 8 kpc).
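
The dependence of the event duration on the lens mass, the two distances, and the relative velocity can be written down explicitly via the Einstein-radius crossing time; a sketch with illustrative fiducial numbers (not values from today's paper):

```python
from math import sqrt

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
KPC = 3.086e19       # kiloparsec, m

def einstein_crossing_time_days(M_lens_msun, D_L_kpc, D_S_kpc, v_rel_kms):
    """t_E = R_E / v_rel: the time for the source to cross the Einstein
    radius, which sets the observed duration of a microlensing event."""
    M = M_lens_msun * M_SUN
    D_L, D_S = D_L_kpc * KPC, D_S_kpc * KPC
    # Einstein radius projected into the lens plane
    R_E = sqrt(4 * G * M / c**2 * D_L * (D_S - D_L) / D_S)
    return R_E / (v_rel_kms * 1e3) / 86400.0

# A half-solar-mass lens halfway to a bulge source (8 kpc away),
# with a typical relative velocity of 200 km/s:
print(einstein_crossing_time_days(0.5, 4.0, 8.0, 200.0))  # weeks-long event
```

One measured number (the duration) against four unknowns (mass, two distances, velocity) is exactly the degeneracy the paper's two-step method is designed to break.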

The author of today’s paper suggests a two-step process to determine the remaining three unknowns.  The first step is to make two microlensing observations of the same event, but at different places (and thus different angles).  Such a “microlens parallax” measurement—which has just recently become possible to obtain for many microlens events due to a new campaign to search for space-based microlensing events with the Spitzer Space Telescope—allows you to reduce the unknowns to the lens mass and distance.  This leaves you with the classic problem of having a single equation with two unknowns, for which there are an infinity of permitted combinations.

Figure 3. Disentangling the mass of the lensing star.  If you can observe a microlensing event from two different angles, you can derive a mass/magnitude-distance relation (black; the dashed line denotes the uncertainty).  Measuring the flux from the lensing star also produces a magnitude-distance relation (magenta).  The place where the two lines cross gives the mass and distance of the lens.  In the special case in which you have a binary lens, the size of the source star affects how bright it gets, which allows you to derive another mass-distance relation.  Figure from today’s paper.

The final key to the puzzle?  The flux from the lens, if it’s bright enough.  If the lens is a star, its flux tells us how far away it is, provided we know how luminous it is.  Since we don’t usually know the luminosity of a given object in advance, the flux instead yields a magnitude-distance relationship.  Based on our understanding of stars, the mass-distance relationship obtained from a microlens parallax measurement can be converted into a second (and very different!) magnitude-distance relationship (see Figure 3).  Whatever magnitude-distance combination is permitted by both relationships gives you the distance to the lens—which finally allows you to solve for the mass of the lens.

And there you have it.  It may seem like a long and difficult process to obtain the mass of a fleetingly visible object, but these mass measurements will help us to understand planets and stars of our galaxy that are currently unreachable by any other means.  With additional Spitzer microlensing campaigns—the first of which is already returning a treasure trove of results—as well as the revamped Kepler mission, K2, and the upcoming mission WFIRST, space-based microlensing surveys may become routine.  It’s an exciting time for microlensing—many new discoveries await!

Cover image:  A map of the amount of brightening you’d see if a distant star passed behind one of the stars in an equal-mass wide binary.  The black curves denote the lens’s caustic—if the background star crossed this curve, it would momentarily become infinitely bright if it were a point source.   The path of the distant background (source) star would appear as a line across the image.  You can predict the lightcurve of the microlensing event by plotting the brightening along the source star’s path.  Figure from Han & Gaudi 2008.

### Symmetrybreaking - Fermilab/SLAC

Muon g-2 magnet successfully cooled down and powered up

It survived a month-long journey over 3200 miles, and now the delicate and complex electromagnet is well on its way to exploring the unknown.

Two years ago, scientists on the Muon g-2 experiment successfully brought a fragile, expensive and complex 17-ton electromagnet on a 3200-mile land and sea trek from Brookhaven National Laboratory in New York to Fermilab in Illinois. But that was just the start of its journey.

Now, the magnet is one step closer to serving its purpose as the centerpiece of an experiment to probe the mysteries of the universe with subatomic particles called muons. This week, the ring—now installed in a new, specially designed building at Fermilab—was successfully cooled down to operating temperature (minus 450 degrees Fahrenheit) and powered up, proving that even after a decade of inactivity, it remains a vital and viable scientific instrument.

Getting the electromagnet to this point took a team of dedicated people more than a year, and the results have that team breathing a collective sigh of relief. The magnet was built at Brookhaven in the 1990s for a similar muon experiment, and before the move to Fermilab, it spent more than 10 years sitting in a building, inactive.

“There were some questions about whether it would still work,” says Kelly Hardin, lead technician on the Muon g-2 experiment. “We didn’t know what to expect, so to see that it actually does work is very rewarding.”

Moving the ring from New York to Illinois cost roughly one-tenth of what building a new one would have. But it was a tricky proposition—the 52-foot-wide, 17-ton magnet, essentially three rings of aluminum with superconducting coils inside, could not be taken apart, nor twisted more than a few degrees, without causing irreparable damage.

Scientists sent the ring on a fantastic voyage, using a barge to bring it south around Florida and up a series of rivers to Illinois. A specially designed truck gently drove it the rest of the way to Fermilab.

The Muon g-2 experiment plans to use the magnet to build on the Brookhaven experiment but with a much more powerful particle beam. The experiment will trap muons in the magnetic field and use them to detect theoretical phantom particles that might be present, impacting the properties of the muons. But to do that, the team had to find out whether the machine could generate the needed magnetic field.

The magnet was moved into its own building on the Fermilab site. Over the past year, workers took on the painstaking task of reassembling the steel base. Two dozen 26-ton pieces of steel—and a dozen 11-ton pieces—had to be maneuvered into place with tremendous precision.

“It was like building a 750-ton Swiss watch,” says Chris Polly, project manager for the experiment.

While that assembly was taking place, other members of the team had to completely replace the control system for the magnet, redesigning it from scratch. Del Allspach, the project engineer, and Hogan Nguyen, one of the primary managers of the ring, oversaw much of this effort, as well as the construction of the infrastructure (helium lines, power conduits) needed before the ring could be cooled and powered.

“That work was very challenging,” Nguyen says. “We had to stay within very strict tolerances for the alignment of the equipment.”

The tightest of those tolerances was 10 microns. For comparison, the width of a human hair is 75 microns. A red blood cell is about 5 microns across.

While assembling the components around the ring, the team also tracked down and sealed a significant helium leak, one that had been previously documented at Brookhaven. Hardin says that the team was relieved to discover that the leak was in an area that could be accessed and fixed. The successful cool-down proved that the leak had been plugged.

“That’s where the big relief comes in,” says Hardin. “We had a good team, and we worked together well.”

Bringing the ring down to its operating temperature of minus 450 degrees Fahrenheit required cooling it with a helium refrigeration system and liquid nitrogen for more than two weeks. Polly noted that this was a tricky process, since the magnet as a whole shrank by at least an inch as it cooled down. This could have damaged the delicate coils inside if it was not done slowly.

Once cooling was complete, the ring had to be powered with 5300 amps of current to produce the magnetic field. This was another slow process, with technicians ramping the current up at less than 2 amps per second and stopping every 1000 amps to check the system.
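
Those figures already fix a lower bound on how long the power-up took; a back-of-the-envelope sketch (the duration of each check stop is an invented placeholder, not a number from the article):

```python
target_amps = 5300
ramp_rate = 2.0          # amps per second -- the quoted upper limit
check_every = 1000       # amps between system checks
check_minutes = 15       # hypothetical pause length -- an assumption

ramp_minutes = target_amps / ramp_rate / 60   # pure ramping time
n_checks = target_amps // check_every         # full 1000-amp marks passed

total_minutes = ramp_minutes + n_checks * check_minutes
print(f"pure ramping: {ramp_minutes:.0f} min; "
      f"with {n_checks} check stops: at least {total_minutes:.0f} min")
```

Even before any pauses, ramping 5300 amps at under 2 amps per second takes the better part of an hour.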

“It proves we started with a good magnet,” Allspach says. “It had been off for more than a decade, then moved across the country, installed, cooled and powered. I’m very happy to be at this point. It’s a big success for all of us.”

The next step for the magnet is a long process of “shimming,” or adjusting the magnetic field to within extraordinarily small tolerances. Fermilab is in the process of constructing a beamline that will provide muons to the magnet, and scientists expect to start measuring those muons in 2017.

For Nguyen, that step—handing the magnet off to early-career scientists, who will help carry out the experiment—is exciting. One of the thrills of the process, he says, was watching these younger members of the team learn and grow as the experiment took shape.

“I can’t wait to see these younger people get to control this beautiful magnet,” he says.

Like what you see? Sign up for a free subscription to symmetry!

## September 22, 2015

### Lubos Motl - string vacua and pheno

A story on Nima Arkani-Hamed
LHC, vaguely related: ALICE confirms the CPT symmetry
Natalie Wolchover wrote a rather long article
Visions of Future Physics
about Nima Arkani-Hamed for the Quanta Magazine. You may read lots of stuff about Nima's life and career, his personality, what he considers to be his weaknesses etc.

There is also a big section about his plans to lead the Chinese nation – that has hired him – to build a new collider that is about as big as the Milky Way. ;-)

Some thoughts about the future of physics may be seen everywhere in the article.

TRF blog posts rarely follow the template of "linker not a thinker" but this one is surely one of those rare exceptions! ;-) I could write lots of things about Nima but the insane migration vote and other things have exhausted me for today.

### Symmetrybreaking - Fermilab/SLAC

Do protons decay?

Is it possible that these fundamental building blocks of atoms have a finite lifetime?

The stuff of daily existence is made of atoms, and all those atoms are made of the same three things: electrons, protons and neutrons.

Protons and neutrons are very similar particles in most respects. They’re made of the same quarks, which are even smaller particles, and they have almost exactly the same mass.

Yet neutrons appear to be different from protons in an important way: They aren’t stable. A neutron outside of an atomic nucleus decays in a matter of minutes into other particles.

A free proton is a pretty common sight in the cosmos. Much of the ordinary matter (as opposed to dark matter) in galaxies and beyond comes in the form of hydrogen plasma, a hot gas made of unattached protons and electrons. If protons were as unstable as neutrons, that plasma would eventually vanish.

But that isn’t happening. Protons—whether inside atoms or drifting free in space—appear to be remarkably stable. We’ve never seen one decay.

However, nothing essential in physics forbids a proton from decaying. In fact, a stable proton would be exceptional in the world of particle physics, and several theories demand that protons decay.

If protons are not immortal, what happens to them when they die, and what does that mean for the stability of atoms?

#### Following the rules

Fundamental physics relies on conservation laws: certain quantities that are preserved, such as energy, momentum and electric charge. The conservation of energy—combined with the famous equation E=mc²—means that lower-mass particles can’t change into higher-mass ones without an infusion of energy. Combining conservation of energy with conservation of electric charge tells us that electrons are probably stable forever: No lower-mass particle with a negative electric charge exists, to the best of our knowledge.

Protons aren’t constrained the same way: They are more massive than a number of other particles, and the fact that they are made of quarks allows for several possible ways for them to die.

For comparison, a neutron decays into a proton, an electron and a neutrino. Both energy and electric charge are preserved in the decay: A neutron is a wee bit heftier than a proton and electron combined, and the positively-charged proton balances out the negatively-charged electron to make sure the total electric charge is zero both before and after the decay. (The neutrino—or technically an antineutrino, the antimatter version—is necessary to balance other things, but that’s a story for another day.)

Because atoms are stable and we’ve never seen a proton die, perhaps protons are intrinsically stable. However, as Kaladi Babu of Oklahoma State University points out, there’s no “proton conservation law" like charge conservation to preserve a proton.

“You ask this question: What if the proton decays?” he says. “Does it violate any fundamental principle of physics? And the answer is no.”

#### No GUTs, no glory

So if there’s no rule against proton decay, is there a reason scientists expect to see it? Yes. Proton decay is the strongest testable prediction of several grand unified theories, or GUTs.

GUTs unify three of the four fundamental forces of nature: electromagnetism, the weak force and the strong force. (Gravity isn’t included because we don’t have a quantum theory for it yet.)

The first GUT, proposed in the 1970s, failed. Among other things, it predicted a proton lifetime short enough that experiments should have seen decays when they didn’t. However, the idea of grand unification was still valuable enough that particle physicists kept looking for it. (You might say they had a GUT feeling. Or you might not.)

“The idea of grand unification is really beautiful and explains many things that seem like bizarre coincidences,” says theorist Jonathan Feng, a physicist at the University of California, Irvine.

Feng is particularly interested in a GUT that involves Supersymmetry, a brand of particle physics that potentially could explain a wide variety of phenomena, including the invisible dark matter that binds galaxies together. Supersymmetric GUTs predict some new interactions that, as a pleasant side effect, result in a longer lifetime for protons, yet still leave proton decay within the realm of experimental detection. Because of the differences between supersymmetric and non-supersymmetric GUTs, Feng says the proton decay rate could be the first real sign of Supersymmetry in the lab.

However, Supersymmetry is not necessary for GUTs. Babu is fond of a GUT that shares many of the advantages of the supersymmetric versions. This GUT’s technical name is SO(10), named because its mathematical structure involves rotations in 10 imaginary dimensions. The theory includes important features absent from the Standard Model such as neutrino masses, and might explain why there is more matter than antimatter in the cosmos. Naturally, it predicts proton decay.

#### The search for proton decay

Much rests on the existence of proton decay, and yet we’ve never seen a proton die. The reason may simply be that protons rarely decay, a hypothesis borne out by both experiment and theory. Experiments say the proton lifetime has to be greater than about 10³⁴ years: That’s a 1 followed by 34 zeroes.

For reference, the universe is only 13.8 billion years old, which is roughly a 1 followed by 10 zeros. Protons on average will outlast every star, galaxy and planet, even the ones not yet born.

The key phrase in that last sentence is “on average.” As Feng says, it’s not like “every single proton will last for 10³⁴ years and then at 10³⁴ years they all boom! poof! in a puff of smoke, they all disappear.”

Because of quantum physics, the time any given proton decays is random, so a tiny fraction will decay long before that 10³⁴-year lifetime. So, “what you need to do is to get a whole bunch of protons together,” he says. Increasing the number of protons increases the chance that one of them will decay while you’re watching.
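
"Get a whole bunch of protons together" can be made quantitative; a sketch for a Super-Kamiokande-sized water tank (simple exponential-decay counting with rounded constants):

```python
AVOGADRO = 6.022e23
WATER_MOLAR_MASS_G = 18.0
PROTONS_PER_WATER = 10   # 2 hydrogen nuclei + 8 protons in the oxygen nucleus

def expected_decays_per_year(tank_tons, proton_lifetime_years):
    """Mean number of proton decays per year, N / tau -- an excellent
    approximation since the lifetime vastly exceeds one year."""
    grams = tank_tons * 1e6
    n_protons = grams / WATER_MOLAR_MASS_G * AVOGADRO * PROTONS_PER_WATER
    return n_protons / proton_lifetime_years

# 50,000 tons of water, with the lifetime at the current lower bound:
print(expected_decays_per_year(50_000, 1e34))  # order of one decay per year
```

So even at the experimental lower bound on the lifetime, a 50,000-ton tank would see only a handful of decays per year — which is why shielding against impostor events matters so much.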

The second essential step is to isolate the experiment from particles that could mimic proton decay, so any realistic proton decay experiment must be located deep underground to isolate it from random particle passers-by. That’s the strategy pursued by the currently operating Super-Kamiokande experiment in Japan, which consists of a huge tank with 50,000 tons of water in a mine. The upcoming Deep Underground Neutrino Experiment, to be located in a former gold mine in South Dakota, will consist of 40,000 tons of liquid argon.

Because the two experiments are based on different types of atoms, they are sensitive to different ways protons might decay, which will reveal which GUT is correct … if any of the current models is right. Both Super-Kamiokande and DUNE are neutrino experiments first, Feng says, “but we're just as interested in the proton decay possibilities of these experiments as in the neutrino aspects.”

After all, proton decay follows from profound concepts of how the cosmos fundamentally operates. If protons do decay, it’s so rare that human bodies would be unaffected, but not our understanding. The impact of that knowledge would be immense, and worth a tiny bit of instability.


### Quantum Diaries

Neutrinoless Double Beta Decay and the Quest for Majorana Neutrinos

Neutrinos have mass but are they their own antimatter partner?

The fortunate thing about international flights in and out of the US is that they are usually long enough for me to slip in a quick post. Today’s article is about the search for Majorana neutrinos.

Mexico City Airport. Credit: R. Ruiz

Neutrinos are a class of elementary particles that do not carry a color charge or electric charge, meaning that they do not interact with the strong nuclear force or electromagnetism. Though they are known to possess mass, their masses are so small that experimentalists have not yet measured them. We are certain that they have mass because of neutrino oscillation data.

Neutrinos in their mass eigenstates, which are a combination of their flavor (orange, yellow, red) eigenstates. Credit: Particle Zoo

The history of neutrinos is rich. They were first proposed as a solution to the mystery of nuclear beta (β)-decay, a type of radioactive decay. Radioactive decay is the spontaneous and random disintegration of an unstable nucleus in an atom into two or more longer-lived, or more stable, nuclei. A free neutron (which is made up of two down-type quarks, one up-type quark, and lots of gluons holding everything together) is unstable and will eventually undergo radioactive decay. Its half-life is about 10 minutes (its mean lifetime is about 15 minutes), meaning that given a pile of free neutrons, roughly half will have decayed after those 10 minutes. A neutron in a bound system, for example in a nucleus, is much more stable. When a neutron decays, a down quark will become an up-type quark by radiating a (virtual) W- boson. Two up-type quarks and a down-type quark are what make a proton, so when a neutron decays, it turns into a proton and a (virtual) W- boson. Due to conservation of energy, the boson is very restricted in what it can decay to; the only choice is an electron and an antineutrino (the antiparticle partner of a neutrino). The image below represents how neutrons decay.
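A small aside on the numbers: for an exponential decay, the half-life and the mean lifetime differ by a factor of ln 2, so the two are easy to confuse. A quick check, taking the commonly quoted free-neutron mean lifetime of roughly 880 seconds as an assumption:

```python
from math import log

tau = 880.0               # free-neutron mean lifetime in seconds (approximate)
t_half = tau * log(2)     # half-life = mean lifetime * ln(2)
print(round(t_half))      # ~610 s, i.e. about 10 minutes
```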

Since neutrinos are so light, and interact very weakly with other matter, when neutron decay was first observed, only the outgoing electron and proton (trapped inside of a nucleus) were ever observed. As electrons were historically called β-rays (β as in the Greek letter beta), this type of process is known as nuclear beta-decay (or β-decay). Observing only the outgoing electron and transmuted atom but not the neutrino caused much confusion at first. The process

Nucleus A → Nucleus B + electron

predicts, by conservation of energy and linear momentum, that the electron carries the same fixed amount of energy in each and every decay. However, outgoing electrons in β-decay do not always have the same energy: very often they come out with little energy, but other times they come out with a lot of energy. The plot below is an example distribution of how often (vertical axis) an electron in β-decay will be emitted carrying away a particular amount of energy (horizontal axis).
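The fixed-energy prediction for the two-body hypothesis follows from standard relativistic kinematics: for a nucleus A decaying at rest as A → B + electron, conservation of energy and momentum give (in natural units)

$$E_e = \frac{M_A^2 - M_B^2 + m_e^2}{2 M_A},$$

a single number fixed entirely by the masses, with no freedom to produce a continuous spectrum.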

Electron spectrum in beta decay: Number of electrons/beta-particles (vertical axis) versus energy/kinetic energy (KE) or electrons (horizontal axis). Credit: R. Church

Scientists at the time, including Wolfgang Pauli, noted that the distribution was similar to the decay process where a nucleus decays into three particles instead of two:

Nucleus A → Nucleus B + electron + a third particle.

Furthermore, if the third particle had no mass, or at least an immeasurably small mass, then the energy spectrum of nuclear β-decay could be explained. This mysterious third particle is what we now call the neutrino.

One reason for neutrinos being so interesting is that they are chargeless. This is partially why neutrinos interact very weakly with other matter. However, since they carry no charge, they are actually nearly indistinguishable from their antiparticle partners. Antiparticles carry equal but opposite charges of their partners. For example: Antielectrons (or positrons) carry a +1 electric charge whereas the electron carries a -1 electric charge. Antiprotons carry a -1 electric charge whereas protons carry a +1 electric charge. Etc. Neutrinos carry zero charge, so the charges of antineutrinos are still zero. Neutrinos and antineutrinos may in fact differ thanks to some charge that they both possess, but this has not been verified experimentally. Hence, it is possible that neutrinos and antineutrinos are actually the same particle. Such particles are called Majorana particles, named after the physicist Ettore Majorana, who first studied the possibility of neutrinos being their own antiparticles.

The Majorana nature of neutrinos is an open question in particle physics. We do not yet know the answer, but this possibility is actively being studied. One consequence of light Majorana neutrinos is the phenomenon called neutrinoless double β-decay (or 0νββ-decay). In the same spirit as nuclear β-decay (discussed above), double β-decay is when two β-decays occur simultaneously, releasing two electrons and two antineutrinos. Double β-decay proceeds through the following diagram (left):

Double beta decay (L) and neutrinoless double beta decay (R). Credit: CANDLES experiment

Neutrinoless double β-decay is a special process that can only occur if neutrinos are Majorana. In this case, neutrinos and antineutrinos are the same and we can connect the two outgoing neutrino lines in the double β-decay diagram, as shown above. In 0νββ-decay, a neutrino/antineutrino is exchanged between the two decaying neutrons instead of escaping like the electrons.

Having only four particles in the final state for 0νββ-decay (two protons and two electrons) instead of six in double β-decay (two protons, two electrons, and two neutrinos) has an important effect on the kinematics, or motion, of the electrons, i.e., the energy and momentum distributions. In double β-decay:

Nucleus A → Nucleus B + electron + electron + antineutrino + antineutrino

the two protons are so heavy compared to the energy released by the decaying neutrons that there is hardly any energy to give them a kick. So for the most part, the protons remain at rest. The neutrinos and electrons then shoot off in various directions with various energies. In neutrinoless double β-decay:

Nucleus A → Nucleus B + electron + electron

since the remnant nucleus is still roughly at rest, the electron pair takes away all the remaining energy allowed by energy conservation. There are no neutrinos to carry energy away from the electrons and broaden their distribution. This difference between ββ-decay and 0νββ-decay is stark, particularly in how often (vertical axis) the electrons will be emitted carrying away a particular amount of energy (horizontal axis). As seen below, the electron energy distribution in double β-decay is very wide and centered around smaller energies, whereas the 0νββ-decay distribution is very narrow and peaked at the maximum energy (the endpoint) of the 2νββ-decay curve.

For double beta decay (blue) and neutrinoless double beta decay (red peak), the electron spectrum in beta decay: Number of electrons/beta-particles (vertical axis) versus energy/kinetic energy (KE) or electrons (horizontal axis). Credit: COBRA experiment

Unfortunately, searches for 0νββ-decay have not yielded any evidence for Majorana neutrinos. This could be because neutrinos are not their own antiparticle, in which case we will never observe the decay. Alternatively, it could be the case that current experiments are simply not yet sensitive to how rarely 0νββ-decay occurs. The rate at which the decay occurs is proportional to the mass of the intermediate neutrino: a zero neutrino mass implies a zero 0νββ-decay rate.

Experiments such as KATRIN hope to measure the mass of neutrinos in the coming years. If a mass measurement is obtained, it would be a very impressive and impactful result. Furthermore, definitive predictions for 0νββ-decay could then be made, at which point the current generation of experiments, such as MAJORANA, CUORE, and EXO, will be in a mad dash to test whether or not neutrinos are indeed their own antiparticle.

Lower view of CUORE Cryostat. Credit: CUORE Experiment

Inside view of CUORE Cryostat. Credit: CUORE Experiment

Happy Hunting and Happy Colliding,

Richard Ruiz (@BraveLittleMuon)

PS Much gratitude to Yury Malyshkin,  Susanne Mertens, Gastón Moreno, and Martti Nirkko for discussions and inspiration for this post. Cheers!

Update 2015 September 25: Photos of the Cryogenic Underground Observatory for Rare Events (CUORE) experiment have been added. Much appreciation to QD-er Laura Gladstone.

## September 21, 2015

### Tommaso Dorigo - Scientificblogging

Statistics Lectures For Physicists In Traunkirchen
The challenge of providing Ph.D. students in Physics with an overview of statistical methods and concepts useful for data analysis in just three hours of lectures is definitely a serious one, so I decided to take it as I got invited to the "Indian Summer School" in the pleasant lakeside town of Traunkirchen, Austria.

## September 20, 2015

### The n-Category Cafe

The Free Modular Lattice on 3 Generators

The set of subspaces of a vector space, or submodules of some module of a ring, is a lattice. It’s typically not a distributive lattice. But it’s always modular, meaning that the distributive law

$$a \vee (b \wedge c) = (a \vee b) \wedge (a \vee c)$$

holds when $a \le b$ or $a \le c$. Another way to say it is that a lattice is modular iff whenever you’ve got $a \le a'$, then the existence of an element $x$ with

$$a \wedge x = a' \wedge x \quad \text{and} \quad a \vee x = a' \vee x$$

is enough to imply $a = a'$. Yet another way to say it is that there’s an order-preserving map from the interval $[a \wedge b, b]$ to the interval $[a, a \vee b]$ that sends any element $x$ to $x \vee a$, with an order-preserving inverse that sends $y$ to $y \wedge b$:

Dedekind studied modular lattices near the end of the nineteenth century, and in 1900 he published a paper showing that the free modular lattice on 3 generators has 28 elements.

One reason this is interesting is that the free modular lattice on 4 or more generators is infinite. But the other interesting thing is that the free modular lattice on 3 generators has intimate relations with 8-dimensional space. I have some questions about this stuff.

One thing Dedekind did is concretely exhibit the free modular lattice on 3 generators as a sublattice of the lattice of subspaces of $\mathbb{R}^8$. If we pick a basis of this vector space and call it $e_1, \dots, e_8$, he looked at the subspaces

$$X = \langle e_2, e_4, e_5, e_8 \rangle, \quad Y = \langle e_2, e_3, e_6, e_7 \rangle, \quad Z = \langle e_1, e_4, e_6, e_7 + e_8 \rangle$$

By repeatedly taking intersections and unions, he built 28 subspaces starting from these three.
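Dedekind’s construction can be replayed on a computer. The sketch below is my own from-scratch illustration (the helpers `rref`, `join`, `meet`, and `e` are names invented here, not anything from Dedekind or this post): it represents each subspace of $\mathbb{R}^8$ by the reduced row echelon form of a spanning matrix over the rationals, implements join as the span of the union and meet as the intersection, and closes up starting from $X, Y, Z$:

```python
from fractions import Fraction

DIM = 8  # we work with subspaces of R^8, as in Dedekind's construction

def rref(rows, ncols=DIM):
    """Reduced row echelon form over the rationals; the nonzero rows are a
    canonical spanning set, so equal subspaces give equal tuples."""
    rows = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for c in range(ncols):
        piv = next((i for i in range(r, len(rows)) if rows[i][c]), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        pv = rows[r][c]
        rows[r] = [x / pv for x in rows[r]]
        for i in range(len(rows)):
            if i != r and rows[i][c]:
                f = rows[i][c]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return tuple(tuple(row) for row in rows[:r])

def join(A, B):
    """Lattice join: span of the two subspaces together."""
    return rref(list(A) + list(B))

def meet(A, B):
    """Lattice meet: intersection, via the solutions of a.A - b.B = 0."""
    if not A or not B:
        return ()
    k, m = len(A), len(B)
    eqs = [[A[i][c] for i in range(k)] + [-B[j][c] for j in range(m)]
           for c in range(DIM)]
    R = rref(eqs, k + m)
    pivots = [next(j for j, x in enumerate(row) if x) for row in R]
    vecs = []
    for free in range(k + m):
        if free in pivots:
            continue
        coef = [Fraction(0)] * (k + m)
        coef[free] = Fraction(1)
        for row, p in zip(R, pivots):
            coef[p] = -row[free]
        v = [sum(coef[i] * A[i][c] for i in range(k)) for c in range(DIM)]
        if any(v):
            vecs.append(v)
    return rref(vecs) if vecs else ()

def e(*idx):
    """Sum of the 1-based standard basis vectors with the given indices."""
    v = [0] * DIM
    for i in idx:
        v[i - 1] = 1
    return v

X = rref([e(2), e(4), e(5), e(8)])
Y = rref([e(2), e(3), e(6), e(7)])
Z = rref([e(1), e(4), e(6), e(7, 8)])  # e(7, 8) is the vector e_7 + e_8

# Close {X, Y, Z} under join and meet until nothing new appears.
lattice = {X, Y, Z}
while True:
    items = list(lattice)
    new = {op(A, B) for A in items for B in items for op in (join, meet)}
    if new <= lattice:
        break
    lattice |= new

print(len(lattice))  # 28, matching Dedekind's count
```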

This proves the free modular lattice on 3 generators has at least 28 elements. In fact it has exactly 28 elements. I think Dedekind showed this by working out the free modular lattice ‘by hand’ and noting that it, too, has 28 elements. It looks like this:

This picture makes it a bit hard to see the $S_3$ symmetry of the lattice, but if you look you can see it. (Can someone please draw a nice 3d picture that makes the symmetry manifest?)

If you look carefully here, as Hugh Thomas did, you will see 30 elements! That’s because the person who drew this picture, like me, defines a lattice to be a poset with upper bounds and lower bounds for all finite subsets. Dedekind defined it to be a poset with upper bounds and lower bounds for all nonempty finite subsets. In other words, Dedekind’s kind of lattice has operations $\vee$ and $\wedge$, while mine also has a top and bottom element. So, Dedekind’s ‘free lattice on 3 generators’ did not include the top and bottom element of the picture here. So, it had just 28 elements.

Now, there’s something funny about how 8-dimensional space and the number 28 are showing up here. After all, the dimension of $\mathrm{SO}(8)$ is 28. This could be just a coincidence, but maybe not. Let me explain why.

The 3 subspace problem asks us to classify triples of subspaces of a finite-dimensional vector space $V$, up to invertible linear transformations of $V$. There are finitely many possibilities, unlike the situation for the 4 subspace problem. One way to see this is to note that 3 subspaces $X, Y, Z \subseteq V$ give a representation of the $D_4$ quiver, which is this little category here:

This fact is trivial: a representation of the $D_4$ quiver is just 3 linear maps $X \to V$, $Y \to V$, $Z \to V$, and here we are taking those to be inclusions. The nontrivial part is that indecomposable representations of any Dynkin quiver correspond in a natural one-to-one way with positive roots of the corresponding Lie algebra. The Lie algebra corresponding to $D_4$ is $\mathfrak{so}(8)$, the Lie algebra of the group of rotations in 8 dimensions. This Lie algebra has 12 positive roots. So, the $D_4$ quiver has 12 indecomposable representations. The representation coming from any triple of subspaces $X, Y, Z \subseteq V$ must be a direct sum of these indecomposable representations, so we can classify the possibilities and solve the 3 subspace problem!

What’s going on here? On the one hand, the free modular lattice on 3 generators shows up, as Dedekind showed, as a lattice of subspaces generated by 3 subspaces of $\mathbb{R}^8$. On the other hand, the 3 subspace problem is closely connected to classifying representations of the $D_4$ quiver, whose corresponding Lie algebra happens to be $\mathfrak{so}(8)$. But what’s the relation between these two facts, if any?

Another way to put the question is this: what’s the relation between the 12 indecomposable representations of the $D_4$ quiver and the 28 elements of the free modular lattice on 3 generators? Or, more numerologically speaking: what relationship between the numbers 12 and 28 is at work in this business?

Here’s one somewhat wacky guess. The Lie algebra $\mathfrak{so}(8)$ has 12 positive roots, and its Cartan subalgebra has dimension 4. As usual, the Lie algebra is spanned by positive roots, an equal number of negative roots, and the Cartan subalgebra, so we get

$$28 = 12 + 12 + 4$$
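The bookkeeping in this guess is easy to verify from standard Lie theory facts ($D_n$ has $n(n-1)$ positive roots, and $\dim \mathfrak{so}(m) = m(m-1)/2$):

```python
rank = 4                            # rank of D4 = dimension of the Cartan subalgebra
positive_roots = rank * (rank - 1)  # D_n has n(n-1) positive roots
dim_so8 = 8 * 7 // 2                # dim so(m) = m(m-1)/2
print(dim_so8, positive_roots)      # 28 12
assert dim_so8 == positive_roots + positive_roots + rank
```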

But I don’t really see how this is connected to anything I’d said previously. In particular, I don’t see why 24 of the 28 elements of the lattice of subspaces generated by

$$X = \langle e_2, e_4, e_5, e_8 \rangle, \quad Y = \langle e_2, e_3, e_6, e_7 \rangle, \quad Z = \langle e_1, e_4, e_6, e_7 + e_8 \rangle$$

should be related to roots of $D_4$.

I think a more sane, non-numerological approach to this network of issues is to take the $D_4$ quiver representation corresponding to Dedekind’s choice of $X, Y, Z \subseteq \mathbb{R}^8$, decompose it into indecomposables, and see which positive roots those correspond to. I may try my hand at that in the comments, but I’m really looking for some help here.

## September 18, 2015

### Clifford V. Johnson - Asymptotia

Rearrangements

Just thought I'd share with you a snapshot (click for larger view) of my thinking process concerning my office move. I've been in the same tiny box of an office for 12 years, and quite happy too. For various reasons (mostly to do with one large window with lots of light), over the years I've turned down offers to move to nicer digs... but recently I've decided to make a change (giving up some of the light) and so after much to-ing and fro-ing, it seems that we've settled on where I'm going to go.

Part of the process involved me walking over there (it's an old lab space from several decades ago, hence the sink, which I want to stay) with a tape measure one day and making some notes in my notebook about the basic dimensions of some of the key things, including some of the existing [...] Click to continue reading this post

The post Rearrangements appeared first on Asymptotia.

### ZapperZ - Physics and Physicists

Quantum Cognition?
A lot of researchers and experts in other fields have tried to apply principles from physics in their own fields. Economists have tried to invent something called econophysics, with varying degrees of success. And certainly many aspects of biology are starting to incorporate quantum effects.

Quantum mechanics has notoriously been invoked in many areas, including crackpottish applications by the likes of Deepak Chopra etc., without any real understanding of the underlying physics. I don't know if this falls under the same category, but the news report out of The Atlantic doesn't do it any favors. I'm reading this article on quantum cognition, in which human behavior, and certain unpredictability and irrationality of human behavior, may be attributed to quantum effects!

Take, for example, the classic prisoner’s dilemma. Two criminals are offered the opportunity to rat each other out. If one rats, and the other doesn’t, the snitch goes free while the other serves a three-year sentence. If they both rat, they each get two years. If neither rats, they each get one year. If players always behaved in their own self-interest, they’d always rat. But research has shown that people often choose to cooperate.

Classical probability can’t explain this. If the first player knew for sure that the second was cooperating, it would make most sense to defect. If the first knew for sure that the second was defecting, it would also make most sense to defect. Since no matter what the other player is doing, it’s best to defect, then the first player should logically defect no matter what.
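The dominance argument in the quoted passage can be spelled out mechanically (sentences in years, as given in the article; shorter is better):

```python
# My sentence (in years) as a function of (my move, the other player's move),
# using the payoffs quoted in the article.
sentence = {
    ("rat", "rat"): 2,
    ("rat", "silent"): 0,
    ("silent", "rat"): 3,
    ("silent", "silent"): 1,
}

# Whatever the other player does, ratting yields the shorter sentence,
# so "rat" is the dominant strategy of classical game theory.
for other in ("rat", "silent"):
    best = min(("rat", "silent"), key=lambda me: sentence[(me, other)])
    assert best == "rat"
print("defection dominates")
```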

A quantum explanation for why player one might cooperate anyway would be that when one player is uncertain about what the other is doing, it’s like a Schrödinger’s cat situation. The other player has the potential to be cooperating and the potential to be defecting, at the same time, in the first player’s mind. Each of these possibilities is like a thought wave, Wang says. And as waves of all kinds (light, sound, water) are wont to do, they can interfere with each other. Depending on how they line up, they can cancel each other out to make a smaller wave, or build on each other to make a bigger one. If “the other guy’s going to cooperate” thought wave gets strengthened in a player’s mind, he might choose to cooperate too.

So you tell me if that made any sense or if this person has actually understood QM beyond what he read in a pop-science book. First of all, when wave cancellation occurs, it doesn't "make a smaller wave". It makes NO wave at that point in space and time. Secondly, this person is espousing the existence of some kind of "thought wave" that hasn't been verified, and somehow the thought waves from the two different prisoners overlap each other (this, BTW, can be described via classical wave pictures, so why is a quantum picture invoked here?).

But the fallacy comes in the claim that there is no other way to explain why different people act differently here without invoking quantum effects. Unlike physics systems, where we can prepare two systems identically, we can find no such thing in human beings (even with twins!). Two different people have different backgrounds and "baggage". We have different ethics, moral standards, etc. You'll never find two identical systems to test this out. That's why we have 9 judges on the US Supreme Court, and they can have wildly differing opinions on the identical issue! So why can't they use this to explain why people react differently under the same situation? Why can't they find the answer via human psychology rather than invoking QM?

But it gets worse...

The act of answering a question can move people from wave to particle, from uncertainty to certainty. In quantum physics, the “observer effect” refers to how measuring the state of a particle can change the very state you’re trying to measure. In a similar way, asking someone a question about the state of her mind could very well change it. For example, if I’m telling a friend about a performance review I have coming up, and I’m not sure how I feel about it, if she asks me “Are you nervous?” that might get me thinking about all the reasons I should be nervous. I might not have been nervous before she asked me, but after the question, my answer might become, “Well, I am now!”

Of course, this smacks of the crackpottery done in "The Secret". Let's get this straight first of all, especially for those who do not have a formal education in QM: there is no such thing as "wave-particle duality" in QM! QM/QFT etc. describe the system via a single, consistent formulation. We don't switch gears going from "wave" to "particle" and back to "wave" to describe things. So the system doesn't move "from wave to particle", etc. It is the nature of the outcome that most people consider to be "wave-like" or "particle-like", but these are ALL produced by the same, single, consistent description!

The problem I have with this, and many other areas that have tried to incorporate QM, is that they often start with the effects and then say something like "Oh, it looks very much like a quantum effect". This is fine if there is an underlying, rigorous mathematical description, but often there isn't! You cannot say that an idea is "complementary" to another idea in the same way that the position and momentum observables are non-commuting. The latter has a very rigorous set of mathematical rules and descriptions. To argue that "... quantum models were able to predict order effects shown in 70 different national surveys ..." is not very convincing; in physics, that sort of after-the-fact agreement alone would be considered weak evidence. It means that there are other factors that come in that are not predictable and can't be accounted for. What is there to rule out that these other factors are responsible for the outcome?

Again, the inability to test this out using identical systems makes it very difficult to be convincing. Human behavior can be irrational and unpredictable. That is known. Rather than considering this to be the result of quantum effects, why not consider it to be the result of chaotic behavior over time, i.e. all of the various life experiences that an individual has had conspire to trigger the decision that he/she makes at a particular time. The "butterfly effect" in an individual's timeline can easily cause a particular behavior at another time. To me, this is as valid an explanation as any.

And that explanation is purely classical!

Zz.

### Axel Maas - Looking Inside the Standard Model

Something dark on the move
If you browse either through popular science physics or through the most recent publications on the particle physics' preprint server arxiv.org then there is one topic which you cannot miss: Dark matter.

What is dark matter? Well, we do not know. So why do we care? Because we know something is out there, something dark, and it's moving. Or, more precisely, it moves stuff around. When we look to the skies and measure how stars and galaxies move, then we find something interesting. We think we know how these objects interact, and how they therefore influence each other's movement. But what we observe does not agree with our expectations. We think we have excluded any possibility that we are overlooking something known, such as many more black holes, intergalactic gas and dust, or any of the other known particles filling up the cosmos. No, it seems there is more out there than we can currently detect directly and have a theory for.

Of course, it can be that we miss something about how stars and galaxies influence each other, and this possibility is also pursued. But actually the simplest explanation is that out there is a new type of matter. A type of matter which does not interact either by electromagnetism or the strong force, because otherwise we would have seen it in experiment. Since there is no interaction with electromagnetism, it does not reflect or emit light, and therefore we cannot see it using optics. Hence the name dark matter. Because it is dark.

It certainly acts gravitationally, since this is how stars and galaxies are influenced. It may still be that it either interacts by the weak interaction or with the Higgs. That is something which is currently investigated in many experiments around the world. Of course, it could also interact with the standard model particles by some unknown force we have yet to discover. This would make it even more mysterious.

Because it is so popular there are many resources on the web which discuss what we already know (or do not know) about dark matter. Rather than repeating that, I will here write why I start to be interested in it. Or at least in some possible types of it. Because dark matter which only interacts by gravitation is not particularly interesting right now, as we will likely not learn much about it in the foreseeable future. So I am more interested in such types of dark matter which couple by some other means to the standard model. Until they are excluded by experiments.

If it should interact with the standard model by some new force, then this new force will likely look at first just like a modification of the weak interactions and/or of the Higgs. This would be an effective description of it. Given time, we would also figure out the details, but we are not there yet.

Thus, as a first shot, I will concentrate on the cases where it could interact with the weak force or just with the Higgs. Interacting with the weak force is actually quite complicated if it should fulfill all the experimental constraints we have. Modifications there, though possible, are thus unlikely. That leaves the Higgs.

Therefore, I would like to see how dark matter could interact with the Higgs. Such models are called Higgs portal models, because the Higgs acts as the portal through which we see dark matter. So far, this is also pretty standard.

Now comes the new thing. I have written several times that I work on questions of what the Higgs really is. That it could have an interesting self-similar structure. And here is the big deal for me: The presence of dark matter interacting with the Higgs could actually influence this structure. This is similar to what happens with other bound states: The constituents can change their identity, as we investigate in another project.

My aim is now to bring all these three things together: Dark matter, Higgs, and the structure of the Higgs. I want to know whether such a type of dark matter influences the structure of the Higgs, and if yes how. And whether this could have a measurable influence. The other way around is that I would like to know whether the Higgs influences the dark matter in some way. Combining these three things is a rather new idea, and it will be very fascinating to explore it. The best course of action will be to do this by simulating the Higgs together with dark matter. This will be neither simple nor cheap, so this may take a lot of time. I will keep you posted.

## September 17, 2015

### Lubos Motl - string vacua and pheno

What confirms a physical theory?
Guest blog by Richard Dawid, LMU Munich,
Munich Center for Mathematical Philosophy

Thanks, Lubos, for your kind invitation to write a guest blog on non-empirical theory confirmation (which I recently discussed in the book "String Theory and the Scientific Method", CUP 2013). As a long-time follower of this blog – who, I may add, fervently disagrees with much of its non-physical content – I am very glad to do so.

Fundamental physics today faces an unusual situation. Virtually all fundamental theories that have been developed during the last four decades still lack conclusive empirical confirmation. While the details with respect to empirical support and prospects for conclusive empirical testing vary from case to case, this general verdict applies to theories like low energy supersymmetry, grand unified theories, cosmic inflation or string theory.

The fact that physics is characterised by decades of continuous work on empirically unconfirmed theories turns the non-empirical assessment of those theories' chances of being viable into an important element of the scientific process. Despite the scarcity of empirical support, many physicists working on the above-mentioned theories have developed substantial trust in their theories' viability based on an overall assessment of the physical context and the theories' qualities.

In particular in the cases of string theory and cosmic inflation, that trust has been harshly criticised by others as unjustified and incompatible with basic principles of scientific reasoning. The critics argue that empirical confirmation is the only possible scientific basis for holding a theory viable. Relying on other considerations in their eyes amounts to abandoning necessary scientific restraint and leads to a relapse into pre-scientific modes of reasoning.

The critics' wholesale condemnation of non-empirical reasons for having trust in a theory's viability is based on an understanding of scientific confirmation that has dominated the philosophy of science throughout the 20th century. It can be found for example in classical hypothetico-deductivism and in most presentations of Bayesian confirmation theory. It consists of two basic ideas. First, theory confirmation is taken to be the only scientific method of generating trust in a theory's viability. Second, it is assumed that a theory can be confirmed only by empirical data that is predicted by that theory.

In my recent book, I argue that this understanding is inadequate. Not only does it prevent an adequate understanding of theory assessment in contemporary high energy physics and cosmology. It does not give an accurate understanding of the research process in 20th century physics either.

I propose an understanding of theory confirmation that is broader than the canonical understanding. My position is in agreement with the canonical understanding in assuming that our concept of confirmation should cover all observation-based scientifically supported reasons for believing in a theory's viability. I argue, however, that it is misguided and overly restrictive to assume that observations that instil trust in a theory must always be predicted by that theory. In fact, we can find cases of scientific reasoning where this assumption is quite obviously false.

A striking example is the Higgs hypothesis. High energy physicists were highly confident that some kind of Higgs particle (be it SM, SUSY, constituent or else) existed long before a Higgs particle was discovered in 2012. Their confidence was based on an assessment of the scientific context and their overall experience with predictive success in physics. Even before 2012, it would have been difficult to deny the scientific legitimacy of their assessment. It would be even more implausible today, after their assessment has been vindicated at the LHC.

Clearly, there is an important difference between the status of the Higgs hypothesis before and after its successful empirical testing in 2011/2012. That difference can be upheld by distinguishing two different kinds of confirmation. Empirical confirmation is based on the empirical testing of the theory's predictions. Non-empirical confirmation is based on observations that are not of the kind that can be predicted by the confirmed theory. Conclusive empirical confirmation is more powerful than non-empirical confirmation. But non-empirical confirmation can on its own provide fairly strong reasons for believing in a theory's viability.

At this point, I should explain why I use the term viability rather than truth, and what I mean by it. The truth of a theory is a difficult concept. Often, physicists know that a given theory is not strictly speaking true (for example because it is not consistent beyond a certain regime), but that does not take anything away from the theory's value within the regime where it works. What is more important than truth is a theory's capability of making correct predictions in a given regime. Roughly speaking, I call a theory viable at a given scale if it can reproduce the empirical data at that scale.

What are the observations that generate non-empirical confirmation in physics today? Three main kinds of argument, each relying on one type of observation, can be found when looking at the research process. They don't work in isolation but only acquire strength in conjunction.

The first and most straightforward argument is the no alternatives argument (NAA). Physicists have trust in the viability of a theory that solves a specific physical problem based on the observation that, despite extensive efforts to do so, no alternative theory that solves this problem has been found.

Trust in the Higgs hypothesis before empirical confirmation was crucially based on the fact that the Higgs hypothesis was the only known convincing theory for generating the observed mass spectrum of elementary particles within the empirically well-confirmed framework of gauge field theory. In the same vein, trust in string theory is based on the understanding that there is no other known approach for a coherent theory of all fundamental interactions.

On its own, NAA has one obvious weakness: scientists might just not have been clever enough to find the alternatives that do exist. In order to take NAA seriously, one therefore needs a method of assessing whether or not scientists in the field are typically capable of finding the viable theories. The argument of meta-inductive inference from predictive success in the research field (MIA) can provide that assessment. Scientists observe that, in similar contexts, theories without known alternatives turned out to be successful once empirically tested.

Both the pre-discovery trust in the Higgs hypothesis and today's trust in string theory gain strength from the observation that standard model predictions were empirically highly successful. One important caveat remains, however. It often seems questionable whether previous examples of predictive success and the new theory under scrutiny are sufficiently similar to justify the use of MIA. In some cases, for example in the Higgs case, the concept under scrutiny and previous examples of predictive success are so closely related to each other that the deployment of MIA looks fairly unproblematic. NAA and MIA in conjunction thus were sufficient in the Higgs case for generating a high degree of trust in the theory. In other cases, like string theory, the comparison with earlier cases of predictive success is more contentious.

In many respects, string theory does constitute a direct continuation of the high energy physics research program that was so successful in the case of the standard model. But its evolution differs substantially from that of its predecessors. The far higher level of complexity of the mathematical problems involved makes it far more difficult to approach a complete theory. This higher level of complexity can throw the justification for a deployment of MIA into doubt. Therefore, it is important to provide a third argument indicating that, despite the high complexity of the theory in question, scientists are still capable of finding their way through the 'conceptual labyrinth' they face. The argument that can be used to that end is the argument from unexpected explanatory interconnections (UEA).

The observation on which UEA is based is the following: scientists develop a theory in order to solve a specific problem. Later it turns out that this theory also solves other conceptual problems it was not developed to solve. This is taken as an indicator of the theory's viability. UEA is the theory-based 'cousin' of the well-known data-based argument of novel predictive success. The latter relies on the observation that a theory that was developed based on a given set of empirical data correctly predicts new data that had not entered the process of theory construction. UEA replaces novel empirical prediction with unexpected explanation.

The most well-known example of UEA in the context of string theory is based on the theory's role in understanding black hole entropy. String theory was proposed as a universal theory of all interactions because it was understood to imply the existence of a graviton and suspected to be capable of avoiding the problem of non-renormalizability faced by field theoretical approaches to quantum gravity. Closer investigations of the theory's structure later revealed that - at least in special cases - it allowed for the full derivation of the macro-physical black hole entropy law from micro-physical stringy structure. Considerations about black hole entropy, however, had not entered the construction of string theory. String physics offers a considerable number of unexpected explanatory interconnections that allow for the deployment of UEA. Arguably, many string physicists consider UEA type arguments the most important reason for having trust in their theory.

NAA, MIA and UEA are applicable in a wide range of cases in physics. Their deployment is by no means confined to empirically unconfirmed theories. NAA and MIA play a very important role in understanding the significance of empirical theory confirmation. The continuity between non-empirical confirmation and the assessment of empirical confirmation based on NAA and MIA can be seen nicely by having another look at the example of the Higgs discovery.

As argued above, the Higgs hypothesis was believed before 2012 based on NAA and MIA. But only the empirical discovery of a Higgs particle implied that all calculations of the background for future scattering experiments had to account for Higgs contributions. That implication is based on the fact that the discovery of a particle in a specific experimental context is taken to be a reliable basis for having trust in that particle's further empirical implications. But why is that so? It relies on the very same types of consideration that had generated trust in the Higgs hypothesis already prior to discovery. First, no alternative theoretical conception is available that can account for the measured signal without having those further empirical implications (NAA). And second, in comparable cases of particle discoveries in the past, trust in the particle's further empirical implications was mostly vindicated by further experimentation (MIA).

Non-empirical confirmation in this light is no new mode of reasoning in physics. Very similar lines of reasoning have played a perfectly respectable role in the assessment of the conceptual significance of empirical confirmation throughout the 20th century. What has changed is the perceived power of non-empirical considerations already prior to empirical testing of the theory.

While NAA, MIA and UEA are firmly rooted in the history of physical reasoning, string theory does add one entirely new kind of argument that can contribute to the strength of non-empirical confirmation. String theory contains a final theory claim, i.e. the claim that, if string theory is a viable theory at its own characteristic scale, it won't ever have to be superseded by an empirically distinguishable new theory. Future theoretical conceptualization in that case would be devoted to fully developing the theory from the basic posits that are already known rather than to searching for new basic posits that are empirically more adequate. Though the character of string theory's final theory claim is not easy to understand from a philosophical perspective, it constitutes an interesting new twist to the question of non-empirical confirmation and may shed new light on the epistemic status of string theory.

For the remainder of this text, though, I want to confine my analysis to the role of the three 'classical' arguments NAA, MIA and UEA. Let me first address an important general point. In order to be convincing, theory confirmation must not be a one-way street. If a certain type of observation has the potential to confirm a theory, it must also have the potential to dis-confirm it. Empirical confirmation trivially fulfils that condition: for any set of empirical data that agrees with a theory's prediction, there are others that disagree with it and therefore, if actually measured, would dis-confirm the theory.

NAA, MIA and UEA fulfil the stated condition as well. The observation that no alternatives to a theory have been found might be overridden by future observations that scientists do find alternatives. That later observation would reduce the trust in the initial theory and therefore amount to that theory's non-empirical dis-confirmation. Likewise, an observed trend of predictive success in a research field could later be overridden by a series of instances where a theory that was well trusted on non-empirical grounds turned out to disagree with empirical tests once they became possible. In the case of UEA, the observation that no unexpected explanatory interconnections show up would be taken to speak against a theory's viability. And once unexpected interconnections have been found, it could still happen that a more careful conceptual analysis reveals them to be the result of elementary structural characteristics of theory building in the given context that are not confined to the specific theory in question. To conclude, the three non-empirical arguments are not structurally biased in favour of confirmation but may just as well provide indications against a theory's viability.

Next, I briefly want to touch on a more philosophical level of analysis. Empirical confirmation is based on a prediction of the confirmed theory that agrees with an observation. In the case of non-empirical confirmation, to the contrary, the confirming observations are not predicted by the theory. How can one understand the mechanism that makes those observations confirm the theory?

It turns out that an element of successful prediction is involved in non-empirical confirmation as well. That element, however, is placed at the meta-level of understanding the context of theory building. More specifically, the claim that is tested at the meta-level is a claim about the spectrum of possible scientific alternatives to the known theory. NAA, MIA and UEA all support the meta-level hypothesis that the spectrum of unconceived scientific alternatives to the theory in question is very limited. That implication can indeed be directly inferred from the fact that the meta-level hypothesis increases the probability of the observations on which NAA, MIA and UEA are based. So, at the meta-level we find the same argumentative structure that can be found at the ground level in the case of empirical confirmation.

Let us, for the sake of simplicity, just consider the most extreme form of the meta-level hypothesis, namely the hypothesis that, in all research contexts in the scientific field, there are no possible alternatives to the viable theory at all. This radical hypothesis predicts 1: that no alternatives will be found, because there aren't any (NAA); 2: that, given that there exists a predictively successful theory at all, a theory that has been developed in agreement with the available data will always be predictively successful (MIA); and 3: that a theory that has been developed for one specific reason will explain all other aspects of the given research context as well, because there are no alternatives that could do so (UEA).

A more careful formulation of non-empirical confirmation based on the concept of limitations to the spectrum of possible alternative theories would need to say more on the criteria for accepting a theory as scientific, on how to individuate theories, etc. In this short presentation, it shall suffice to give the general flavour of the line of reasoning: non-empirical confirmation is a natural extension of empirical confirmation that places the agreement between observation and the prediction of a hypothesis at the meta-level of theory dynamics rather than at the ground level of the theory's predictions.

An instructive way of clarifying the mechanism of non-empirical confirmation and its close relation to empirical confirmation consists in formalizing the arguments within the framework of Bayesian confirmation theory. An analysis of this kind has been carried out for NAA (which is the simplest case) in "The No Alternatives Argument", Dawid, Hartmann and Sprenger BJPS 66(1), 213-34, 2015.
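
To convey the flavour of such a formalization, here is a toy numerical sketch (not the actual model of the cited paper; all probabilities are illustrative assumptions). A latent variable Y encodes whether the spectrum of possible alternatives is small or large; observing F, "no alternative has been found despite extensive searches", shifts belief towards a small spectrum and thereby raises the probability that the known theory T is viable:

```python
# Toy Bayesian sketch of the No Alternatives Argument (NAA).
# Y: size of the spectrum of possible alternative theories ("few" or "many").
# F: observation "despite extensive searches, no alternative has been found".
# T: "the known theory is viable".
# All numbers below are illustrative assumptions, not values from the paper.

priors = {"few": 0.5, "many": 0.5}        # P(Y)
p_F_given_Y = {"few": 0.9, "many": 0.2}   # few alternatives -> F is likely
p_T_given_Y = {"few": 0.8, "many": 0.3}   # few alternatives -> T is likely viable

# Prior probability that T is viable, before observing F:
p_T = sum(priors[y] * p_T_given_Y[y] for y in priors)

# Posterior over Y after observing F (Bayes' theorem):
p_F = sum(priors[y] * p_F_given_Y[y] for y in priors)
post_Y = {y: priors[y] * p_F_given_Y[y] / p_F for y in priors}

# Posterior probability that T is viable, after observing F:
p_T_given_F = sum(post_Y[y] * p_T_given_Y[y] for y in post_Y)

print(f"P(T) = {p_T:.3f}, P(T|F) = {p_T_given_F:.3f}")  # F raises the probability
```

The point of the sketch is structural: F confirms T not because T predicts F, but because F is evidence for the meta-level hypothesis of a limited spectrum of alternatives, which in turn makes T's viability more probable.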

A number of worries have been raised with respect to the concept of non-empirical confirmation. Let me, in the last part of this text, address a few of them.

It has been argued (e.g. by Sabine Hossenfelder) that arguments of non-empirical confirmation are sociological and therefore don't constitute proper scientific reasoning. This claim may be read in two different ways. In its radical form, it would amount to the statement that there is no factual scientific basis to non-empirical confirmation at all. Confidence in a theory on that account would be driven entirely by sociological mechanisms in the physics community and only be camouflaged ex post by fake rational reasoning. The present text in its entirety aims at demonstrating that such an understanding of non-empirical confirmation is highly inadequate.

A more moderate reading of the sociology claim is the following: there may be a factual core to non-empirical confirmation, but it is so difficult to disentangle from sociological factors that science is better off if non-empirical confirmation is discarded. I concede that the role of sociology is trickier with respect to deployments of non-empirical confirmation than in cases where conclusive empirical confirmation is to be had. But I would argue that it must always be the aim of good science to extract all the factual information provided by an investigation. If the existence of a sociological element in scientific analysis justified discarding that analysis, a good deal of empirical data analysis would have to be discarded as well.

To give a recent example, the year 2015 witnessed considerable differences of opinion among physicists interpreting the empirical data collected by BICEP2. Those differences of opinion may be explained to a certain degree by the sociological factors involved. No one would have suggested discarding the debate on the interpretation of the BICEP2 data as scientifically worthless on those grounds. I suggest that the very same point of view should also be taken with respect to non-empirical confirmation.

It has also been suggested (e.g. by George Ellis and Joseph Silk) that non-empirical confirmation may lead to a disregard for empirical data and therefore to the abandonment of a pivotal principle of scientific reasoning.

This worry is based on a misreading of non-empirical confirmation. Accepting the importance of non-empirical confirmation by no means devalues the search for empirical confirmation. To the contrary, empirical confirmation is crucial for the functioning of non-empirical confirmation in two ways. Firstly, non-empirical confirmation indicates the viability of a theory. But, as I said earlier, a theory's viability means that its empirical predictions would turn out correct if they could be specified and empirically tested. Conclusive empirical confirmation therefore remains the ultimate judge of a theory's viability - and thus the ultimate goal of science.

Secondly, MIA, which is one cornerstone of non-empirical confirmation, relies on empirical confirmation elsewhere in the research field. Therefore, if empirical confirmation was terminated in the entire research field, that would remove the possibility of testing non-empirical confirmation strategies and, in the long run, make them dysfunctional. Non-empirical confirmation itself thus highlights the importance of testing theories empirically whenever possible. It implies, though, that the absence of empirical confirmation must not be equated with knowing nothing about the theory's chances of being viable.

Finally, it has been argued (e.g. by Lee Smolin) that non-empirical confirmation further strengthens the dominant research programs and therefore in an unhealthy way contributes to thinning out the search for alternative perspectives that may turn out productive later on.

To a certain extent, that is correct. Taking non-empirical confirmation seriously does support the focus on those research strategies that generate theories with a considerable degree of non-empirical confirmation. I would argue, however, that this is, by and large, a positive effect. It is an important element of successful science to understand which approaches merit further investigation and which don't.

But a very important second point must be added. As discussed above, non-empirical confirmation is a technique for understanding the spectrum of possible alternatives to the theory one knows. One crucial test in that respect is to check whether serious and extensive searches for alternatives have produced any coherent alternative theories (this is the basis for NAA). Therefore, the search for alternatives is a crucial element of non-empirical confirmation. Far from denying the value of the search for alternatives, non-empirical confirmation adds a new reason why it is important: even if the alternative strands of research fail to produce coherent theories, the observation that none of those approaches has succeeded makes an important contribution to the non-empirical confirmation of the theory that is available.

So what is the status of non-empirical confirmation? The arguments I present support the general relevance of non-empirical confirmation in physics. In the absence of empirical confirmation, non-empirical confirmation can provide a strong case for taking a theory to be viable. This by no means renders empirical confirmation obsolete. Conclusive empirical testing will always trump non-empirical confirmation and therefore remains the ultimate goal in science. Arguments of non-empirical confirmation can in some cases lead to a nearly consensual assessment in the physics community (see the trust in the Higgs particle before 2012). In other cases, they can be more controversial.

As in all contexts of scientific inquiry, argumentation stressing non-empirical confirmation can be balanced and well founded but may, in some cases, also be exaggerated and unsound. The actual strength of each specific case of non-empirical confirmation has to be assessed and discussed by the physicists concerned with the given theory based on a careful scientific analysis of the particular case. Criticism of cases of non-empirical confirmation at that level constitutes an integral and important part of theory assessment. I suggest, however, that the wholesale verdict that non-empirical theory confirmation is unscientific and should not be taken seriously does not do justice to the research process in physics and obscures the actual state of contemporary physics by disregarding an important element of scientific analysis.

Richard Dawid, LMU Munich

### Symmetrybreaking - Fermilab/SLAC

Hitting the neutrino floor

Dark matter experiments are becoming so sensitive, even the ghostliest of particles will soon get in the way.

The scientist who first detected the neutrino called the strange new particle “the most tiny quantity of reality ever imagined by a human being.” Neutrinos are so absurdly small and interact with other matter so weakly that about 100 trillion of them pass unnoticed through your body every second, most of them streaming down on us from the sun.

And yet, new experiments to hunt for dark matter are becoming so sensitive that these ephemeral particles will soon show up as background. It’s a phenomenon some physicists are calling the “neutrino floor,” and we may reach it in as little as five years.

The neutrino floor applies only to direct detection experiments, which search for the scattering of a dark matter particle off a nucleus. Many of these experiments look for WIMPs, or weakly interacting massive particles. If dark matter is indeed made of WIMPs, it will interact in the detector in nearly the same way as solar neutrinos.

We don’t know what dark matter is made of. Experiments around the world are working toward detecting a wide range of particles.

“What’s amazing is now the experimenters are trying to measure dark matter interactions that are at the same strength or even smaller than the strength of neutrino interactions,” says Thomas Rizzo, a theoretical physicist at SLAC National Accelerator Laboratory. “Neutrinos hardly interact at all, and yet we’re trying to measure something even weaker than that in the hunt for dark matter.”

This isn’t the first time the hunt for dark matter has been linked to the detection of solar neutrinos. In the 1980s, physicists stumped by what appeared to be missing solar neutrinos envisioned massive detectors that could fix the discrepancy. They eventually solved the solar neutrino problem using different methods (discovering that the neutrinos weren’t missing; they were just changing as they traveled to the Earth), and instead put the technology to work hunting dark matter.

In recent years, as the dark matter program has grown in size and scope, scientists realized the neutrino floor was no longer an abstract problem for future researchers to handle. In 2009, Louis Strigari, an astrophysicist at Texas A&M University, published the first specific predictions of when detectors would reach the floor. His work was widely discussed at a 2013 planning meeting for the US particle physics community, turning the neutrino floor into an active dilemma for dark matter physicists.

“At some point these things are going to appear,” Strigari says, “and the question is, how big do these detectors have to be in order for the solar neutrinos to show up?”

Strigari predicts that the first experiment to hit the floor will be the SuperCDMS experiment, which will hunt for WIMPs from SNOLAB in the Vale Inco Mine in Canada.

While hitting the floor complicates some aspects of the dark matter hunt, Rupak Mahapatra, a principal investigator for SuperCDMS at Texas A&M, says he hopes they reach it sooner rather than later—a know-thy-enemy kind of thing.

“It is extremely important to know the neutrino floor very precisely,” Mahapatra says. “Once you hit it first, that’s a benchmark. You understand what exactly that number should be, and it helps you build a next-generation experiment.”

Much of the work of untangling a dark matter signal from neutrino background will come during data analysis. One strategy involves taking advantage of the natural ebbs and flows in the amount of dark matter and neutrinos hitting Earth. Dark matter’s natural flux, which arises from the motion of the sun through the Milky Way, peaks in June and reaches its lowest point in December. Solar neutrinos, on the other hand, peak in January, when the Earth is closest to the sun.

“That could help you disentangle how much is signal and how much is background,” Rizzo says.
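
The modulation handle described above can be sketched numerically. In this toy model (the amplitudes and exact peak days are illustrative assumptions, not experimental values), the dark matter rate peaks in early June and the solar neutrino rate peaks in early January, near perihelion:

```python
import math

# Toy sketch of the annual-modulation disentangling strategy.
# Rates are normalized to 1; modulation amplitudes are illustrative only.

def dm_rate(day):
    """Dark matter rate: peaks in early June (~day 152) due to the
    sun's motion through the Milky Way."""
    return 1.0 + 0.05 * math.cos(2 * math.pi * (day - 152) / 365.25)

def nu_rate(day):
    """Solar neutrino rate: peaks in early January (~day 3), when the
    Earth is closest to the sun."""
    return 1.0 + 0.07 * math.cos(2 * math.pi * (day - 3) / 365.25)

june, january = 152, 3
print(dm_rate(june) > dm_rate(january))   # dark matter signal highest in June
print(nu_rate(january) > nu_rate(june))   # neutrino background highest in January
```

Because the two rates are roughly half a year out of phase, fitting the time dependence of the event rate gives an extra lever for separating signal from background.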

There’s also the possibility that dark matter is not, in fact, a WIMP. Another potentially viable candidate is the axion, a hypothetical particle that solves a lingering mystery of the strong nuclear force. While WIMP and neutrino interactions look very similar, axion interactions would appear differently in a detector, making the neutrino floor a non-issue.

But that doesn’t mean physicists can abandon the WIMP search in favor of axions, says JoAnne Hewett, a theoretical physicist at SLAC. “WIMPs are still favored for many reasons. The neutrino floor just makes it more difficult to detect. It doesn’t make it less likely to exist.”

Physicists are confident that they’ll eventually be able to separate a dark matter signal from neutrino noise. Next-generation experiments might even be able to distinguish the direction a particle is coming from when it hits the detector, something the detectors being built today just can’t do. If an interaction seemed to come from the direction of the sun, that would be a clear indication that it was likely a solar neutrino.

“There’s certainly avenues to go here,” Strigari says. “It’s not game over, we don’t think, for dark matter direct detection.”

Like what you see? Sign up for a free subscription to symmetry!

## September 16, 2015

### Jester - Resonaances

What can we learn from LHC Higgs combination
Recently, ATLAS and CMS released the first combination of their Higgs results. Of course, one should not expect any big news here: combination of two datasets that agree very well with the Standard Model predictions has to agree very well with the Standard Model predictions...  However, it is interesting to ask what the new results change at the quantitative level concerning our constraints on Higgs boson couplings to matter.

First, experiments quote the overall signal strength μ, which measures how many Higgs events were detected at the LHC in all possible production and decay channels compared to the expectations in the Standard Model. The latter, by definition, is μ=1.  Now, if you had been too impatient to wait for the official combination, you could have made a naive one using the previous ATLAS (μ=1.18±0.14) and CMS (μ=1±0.14) results. Assuming the errors are Gaussian and uncorrelated, one would obtain the combined μ=1.09±0.10. Instead, the true number is (drum roll)
So, the official and naive numbers are practically the same.  This result puts important constraints on certain models of new physics. One important corollary is that the Higgs boson branching fraction to invisible (or any undetected exotic) decays is limited as  Br(h → invisible) ≤ 13% at 95% confidence level, assuming the Higgs production is not affected by new physics.
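
The naive combination quoted above is just the standard inverse-variance weighted average. A short script (the helper name is mine) reproduces the μ=1.09±0.10 figure:

```python
import math

def combine(measurements):
    """Inverse-variance weighted average of (value, sigma) pairs,
    assuming Gaussian, uncorrelated errors."""
    weights = [1.0 / sigma**2 for _, sigma in measurements]
    mean = sum(w * v for (v, _), w in zip(measurements, weights)) / sum(weights)
    sigma = 1.0 / math.sqrt(sum(weights))
    return mean, sigma

# Previous ATLAS and CMS overall signal strengths:
mu, err = combine([(1.18, 0.14), (1.00, 0.14)])
print(f"mu = {mu:.2f} +- {err:.2f}")  # mu = 1.09 +- 0.10
```

Note that this shortcut is only trustworthy when correlations between the inputs are negligible, which is exactly the caveat discussed below.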

From the fact that, for the overall signal strength, the naive and official combinations coincide, one should not conclude that the work ATLAS and CMS have done together is useless. As one can see above, the statistical and systematic errors are comparable for that measurement, so a naive combination is not guaranteed to work. It happens in this particular case that the multiple nuisance parameters considered in the analysis pull essentially in random directions. But it could well have been different. Indeed, the deeper one goes into the details, the more relevant the impact of the official combination becomes.  For the signal strengths measured in particular final states of the Higgs decay, the differences are more pronounced:
One can see that the naive combination somewhat underestimates the errors. Moreover, for the WW final state the central value is shifted by half a sigma (this is mainly because, in this channel, the individual ATLAS and CMS measurements that go into the combination seem to differ from the previously published ones). The difference is even more clearly visible for 2-dimensional fits, where the Higgs production cross sections via gluon fusion (ggf) and vector boson fusion (vbf) are treated as free parameters. This plot compares the regions preferred at 68% confidence level by the official and naive combinations:
There is a significant shift of the WW and also of the ττ ellipse. All in all, the LHC Higgs combination brings no revolution, but it allows one to obtain more precise and more reliable constraints on some new physics models.  The more detailed information is released, the more useful the combined results become.

### Symmetrybreaking - Fermilab/SLAC

A light in the dark

The MiniCLEAN dark matter experiment prepares for its debut.

Getting to an experimental cavern 6800 feet below the surface in Sudbury, Ontario, requires an unusual commute. The Cage, an elevator that takes people into the SNOLAB facility, descends twice every morning at 6 a.m. and 8 a.m. Before entering the lab, individuals shower and change so they don’t contaminate the experimental areas.

A thick layer of natural rock shields the clean laboratory where air quality, humidity and temperature are highly regulated. These conditions allow scientists to carry out extremely sensitive searches for elusive particles such as dark matter and neutrinos.

The Cage returns to the surface at 3:45 p.m. each day. During the winter months, researchers go underground before the sun rises and emerge as it sets. Steve Linden, a postdoctoral researcher from Boston University, makes the trek every morning to work on MiniCLEAN, which scientists will use to test a novel technique for directly detecting dark matter.

“It’s a long day,” Linden says.

Scientists and engineers have spent the past eight years designing and building the MiniCLEAN detector. Today that task is complete; they have begun commissioning and cooling the detector to fill it with liquid argon to start its search for dark matter.

Though dark matter is much more abundant than the visible matter that makes up planets, stars and everything we can see, no one has ever identified it. Dark matter particles are chargeless, don’t absorb or emit light, and interact very weakly with matter, making them incredibly difficult to detect.

#### Spotting the WIMPs

MiniCLEAN (CLEAN stands for Cryogenic Low-Energy Astrophysics with Nobles) aims to detect weakly interacting massive particles, or WIMPs, the current favorite dark matter candidate. Scientists will search for these rare particles by observing their interactions with atoms in the detector.

To make this possible, the detector will be filled with over 500 kilograms of very cold, dense, ultra-pure materials—argon at first, and later neon. If a WIMP passes through and collides with an atom’s nucleus, it will produce a pulse of light with a unique signature. Scientists can collect and analyze this light to determine whether what they saw was a dark matter particle or some other background event.

The use of both argon and neon will allow MiniCLEAN to double-check any possible signals. Argon is more sensitive than neon, so a true dark matter signal would disappear when liquid argon is replaced with liquid neon. Only an intrinsic background signal from the detector would persist. Scientists would like to eventually scale this experiment up to a larger version called CLEAN.

#### Overcoming obstacles

MiniCLEAN is a small experiment, with about 15 members in the collaboration and the project lead at Pacific Northwest National Laboratory. While working on this experiment underground with few hands to spare, the team has run into some unexpected roadblocks.

One such obstacle appeared while transporting the inner vessel, a detector component that will contain the liquid argon or neon.

“Last November, as we finished assembling the inner vessel and were getting ready to move it to where it needed to end up, we realized it wouldn’t fit between the doors into the hallway we had to wheel it down,” Linden explains.

When this happened, the team was faced with two options: somehow reduce the size of the vessel, or cut away a part of the door—not a simple thing to do in a clean lab. Fortunately, temporarily replacing some of the vessel’s parts reduced the size enough to make it fit. They got it through the doorway with about an eighth of an inch clearance on each side.

“What gives me the energy to persist on this project is that the CLEAN approach is unique, and there isn’t another approach to dark matter that is like it,” says Pacific Northwest National Laboratory scientist Andrew Hime, MiniCLEAN spokesperson and principal investigator. “It’s been eight years since we started pushing hard on this program, and finally getting real data from the detector will be a breath of fresh air.”

Like what you see? Sign up for a free subscription to symmetry!

### ATLAS Experiment

TOP 2015 – Top quarks come to Italy!

The annual top conference! This year we’re in Ischia, Italy. The hotel is nice, the pool is tropical and heated, but you don’t want to hear about that, you want to hear about the latest news in the Standard Model’s heaviest and coolest particle, the top quark! You won’t be disappointed.

DAY 1:

Our keynote speaker is Michael Peskin. For those of you who have a PhD in particle physics, you already know Peskin. He wrote that textbook you fear. His talk is very good and accessible, even for an experimentalist like myself, and he gives us a very nice overview of the status of theory calculations in top physics, highlighting a few areas he’d like to see more work on.

The highlights of my day though are the ATLAS and CMS physics objects talks. Normally, these can be a little dull. However this year we have performance plots for the first time at 13 TeV, and most people are closely scrutinising the performance of both experiments. All except a guy who looks suspiciously like Game of Thrones character Joffrey Baratheon, who is sitting completely upright, eyes closed and snoring lightly.

POSTER SESSION:

The poster session, two hours in (photo from @JoshMcfayden)

If you’ve never been to a poster session then this is how they work: a group of students and young postdocs, eager to present their own work (a rare treat in collaborations as large as ATLAS and CMS) stand around, proudly showcasing how they managed to make powerpoint do something that it really wasn’t designed to do.

My poster (approved only hours before) gets a fair bit of attention, but not as much as I expected. Suddenly I regret not slapping a huge “New 13 TeV Results!” banner on the top of it.

After 3 hours (yes, 3 hours!) of standing by my poster I decide that everyone who wants to see it will have done by now, grab 3 (or 10) canapés and head to the laptop in the corner to cast my vote. For a brief moment I consider not voting for myself, but the moment passes and I type in my own name.

DAY 2:

I sit down next to Joffrey Baratheon and smile at him politely. It’s not his fault he’s an evil king after all. We start the morning with some theory, because we’re mostly experimentalists and everyone knows our attention spans are limited if they give us wine with lunch. As with last year, the hot topic is ever more precise calculations.

Next we have a very professional talk from a very professional-looking CMS experimentalist. A speaker who wears a shirt and sensible shoes to give a plenary talk either means serious business or is a terrified student giving their first conference talk. From the polished introduction on the top cross-section, you can tell it’s the former.

CMS have clearly put a lot of effort into these results (and I’m secretly relieved that I already know our results are equally impressive), and despite a spine-chillingly large luminosity uncertainty of 12%, they have achieved remarkable precision.

Finally, we’ve arrived at the talk that I’ve been waiting for: the ATLAS Run 2 cross-section results.

A summary of the latest top anti-top cross-section measurements from ATLAS.

The speaker starts by flashing our already released cross-section in the eµ channel at 13 TeV. Even with an integrated luminosity uncertainty of 9%, it’s still a fantastic early result. We show an updated eµ result in which we measure the ratio with the Z-boson cross-section (effectively cancelling the luminosity uncertainty). People seem pretty impressed by that, as they should. Getting the top group to release results this early is hard enough, getting the standard model group to release an inclusive Z cross section is nothing short of a miracle.

Now the speaker moves on to the precision 8 TeV results. Wait a minute? What’s going on? There are other 13 TeV results to show? What is he DOING?! Months of working on the ee and µµ cross section results and we’ve skipped past them? I turn to my colleague, who led the also-skipped lepton+jets cross section analysis. His face is stoic, as is his way, but inside I know he’s ready to storm the stage with me. I begin to whisper to my boss, sat one seat ahead of me, about the injustice of it all. Somehow it’s coming out as a childish tantrum, despite sounding perfectly reasonable in my head.

… and then the speaker shows the result. My boss rolls her eyes at me and returns to her laptop, possibly rethinking my contract extension. Joffrey Baratheon scowls at the disturbance I’ve caused and I consider strangling him with his pullover.

Stay tuned for part 2, where we learn about new single-top results, new mass measurements, and ttH!

 James Howarth is a postdoctoral research fellow at DESY, working on top quark cross-sections and properties for ATLAS. He joined the ATLAS experiment in 2009 as a PhD student with the University of Manchester, before moving to DESY, Hamburg in 2013. In his spare time he enjoys drinking, arguing, and generally being difficult.

## September 15, 2015

### Symmetrybreaking - Fermilab/SLAC

Where the Higgs belongs

The Higgs doesn’t quite fit in with the other particles of the Standard Model of particle physics.

If you were Luke Skywalker in Star Wars, and you carried a tiny green Jedi master on your back through the jungles of Dagobah for long enough, you could eventually raise your submerged X-wing out of the swamp just by using the Force.

But if you were a boson in the Standard Model of particle physics, you could skip the training—you would be the force.

Bosons are particles that carry the four fundamental forces. These forces push and pull what would otherwise have been an unwieldy soup of particles into the beautiful mosaic of stars and galaxies that permeate the visible universe.

The fundamental forces keep protons incredibly stable (the strong force holds them together), cause compasses to point north (the electromagnetic force attracts the needle), make apples fall off trees (gravity attracts the fruit to the ground), and keep the sun shining (the weak force allows nuclear fusion to occur).

In 2012, the Higgs boson became an officially recognized member of this family of fundamental bosons.

The Higgs is called a boson because of a quantum mechanical property called spin—which represents a particle’s intrinsic angular momentum and characterizes how a particle plays with its Standard Model friends.

Bosons have an integer spin (0, 1, 2), which makes them the touchy-feely types. They have no need for personal space. Fermions, on the other hand, have a half-integer spin (1/2, 3/2, etc.), which makes them a bit more isolated; they prefer to keep their distance from other particles.

The Higgs has a spin of 0, making it officially a boson.

“Every boson is associated with one of the four fundamental forces,” says Kyle Cranmer, an associate professor of physics at New York University. “So if we discover a new boson, it seems natural that we should find a new force.”

Scientists think that a Higgs force does exist. But it’s the Higgs boson’s relationship to that force that makes it a bit of a black sheep. It’s the reason that, when the Higgs is added to the Standard Model of particle physics, it’s often pictured apart from the rest of the boson family.

#### What the Higgs is for

The Higgs boson is an excitation of the Higgs field, which interacts with some of the fundamental particles to give them mass.

“The way the Higgs field gives masses to particles is its own unique feature, which is different from all other known fields in the universe,” says Matt Strassler, a Harvard University theoretical physicist. “When the Higgs field turns on, it changes the environment for all particles; it changes the nature of empty space itself. The way particles interact with this field is based on their intrinsic properties.”

There are three inherent qualifications required for a field to generate a force: The field must be able to switch on and off. It must have a preferred direction. And it must be able to attract or repel.

Normally the Higgs field fails the first two requirements—it’s always on, with no preferred direction. But in the presence of a Higgs boson, the field is distorted, theoretically allowing it to generate a force.

“We think that two particles can pull on each other using the Higgs field,” Strassler says. “The same equations we used to predict that the Higgs particle should exist, and how it should decay to other particles, also predict this force will exist.”

Just what role that force might play in our greater understanding of the universe is still a mystery.

“We know the Higgs field is essential in the formation of stable matter,” Strassler says. “But the Higgs force—as far as we know—is not.”

The Higgs force could be important in some other way, Strassler says. It could be related to how much dark matter exists in the universe or the huge imbalance between matter and antimatter. “It’s too early to write it off,” he says.

During this run of the Large Hadron Collider, physicists expect to produce roughly 10 times as many Higgs bosons as they did during the first run. This will enable scientists to examine the properties of this peculiar particle more deeply.

Like what you see? Sign up for a free subscription to symmetry!

### Georg von Hippel - Life on the lattice

Fundamental Parameters from Lattice QCD, Day Seven
Today's programme featured two talks about the interplay between the strong and the electroweak interactions. The first speaker was Gregorio Herdoíza, who reviewed the determination of hadronic corrections to electroweak observables. In essence these determinations are all very similar to the determination of the leading hadronic correction to (g-2)μ, since they involve the lattice calculation of the hadronic vacuum polarisation. In the case of the electromagnetic coupling α, its low-energy value is known to a precision of 0.3 ppb, but the value of α(m_Z²) is known only to 0.1 ‰, and a large part of the difference in uncertainty is due to the hadronic contribution to the running of α, i.e. the hadronic vacuum polarisation. Phenomenologically this can be estimated through the R-ratio, but this results in relatively large errors at low Q². On the lattice, the hadronic vacuum polarisation can be measured through the correlator of vector currents, and currently a determination of the running of α in agreement with phenomenology and with similar errors can be achieved, so that in the future lattice results are likely to take the lead here. In the case of the electroweak mixing angle, sin²θ_W is known well at the Z pole, but only poorly at low energy, although a number of experiments (including the P2 experiment at Mainz) are aiming to reduce the uncertainty at lower energies. Again, the running can be determined from the Z-γ mixing through the associated current-current correlator, and current efforts are under way, including an estimation of the systematic error caused by the omission of quark-disconnected diagrams.

The second speaker was Vittorio Lubicz, who looked at the opposite problem, i.e. the electroweak corrections to hadronic observables. Since α ≈ 1/137, electromagnetic corrections at the one-loop level will become important once the 1% level of precision is being aimed for, and since the up and down quarks have different electric charges, this is an isospin-breaking effect, which also necessitates simultaneously considering the strong isospin breaking caused by the difference in the up and down quark masses. There are two main methods to include QED effects in lattice simulations; the first is direct simulation of QCD+QED, and the second is the method of incorporating isospin-breaking effects in a systematic expansion pioneered by Vittorio and colleagues in Rome. Either method requires a systematic treatment of the IR divergences arising from the lack of a mass gap in QED. In the Rome approach this is done by splitting the Bloch-Nordsieck treatment of IR divergences and soft bremsstrahlung into two pieces, whose large-volume limits can be taken separately. There are many other technical issues to be dealt with, but first physical results from this method should be forthcoming soon.

In the afternoon there was a discussion about QED effects and the range of approaches used to treat them.

## September 14, 2015

### The n-Category Cafe

Where Does The Spectrum Come From?

Perhaps you, like me, are going to spend some of this semester teaching students about eigenvalues. At some point in our lives, we absorbed the lesson that eigenvalues are important, and we came to appreciate that the invariant par excellence of a linear operator on a finite-dimensional vector space is its spectrum: the set-with-multiplicities of eigenvalues. We duly transmit this to our students.

There are lots of good ways to motivate the concept of eigenvalue, from lots of points of view (geometric, algebraic, etc). But one might also seek a categorical explanation. In this post, I’ll address the following two related questions:

1. If you’d never heard of eigenvalues and knew no linear algebra, and someone handed you the category $\mathbf{FDVect}$ of finite-dimensional vector spaces, what would lead you to identify the spectrum as an interesting invariant of endomorphisms in $\mathbf{FDVect}$?

2. What is the analogue of the spectrum in other categories?

I’ll give a fairly complete answer to question 1, and, with the help of that answer, speculate on question 2.

(New, simplified version posted at 22:55 UTC, 2015-09-14.)

Famously, trace has a kind of cyclicity property: given maps

$X \stackrel{f}{\to} Y \stackrel{g}{\to} X$

in $\mathbf{FDVect}$, we have

$\mathrm{tr}(g \circ f) = \mathrm{tr}(f \circ g).$

I call this “cyclicity” because it implies the more general property that for any cycle

$X_0 \stackrel{f_1}{\to} X_1 \stackrel{f_2}{\to} \cdots \stackrel{f_{n-1}}{\to} X_{n-1} \stackrel{f_n}{\to} X_0$

of linear maps, the scalar

$\mathrm{tr}(f_i \circ \cdots \circ f_1 \circ f_n \circ \cdots \circ f_{i+1})$

is independent of $i$.

A slightly less famous fact is that the same cyclicity property is enjoyed by a finer invariant than trace: the set-with-multiplicities of nonzero eigenvalues. In other words, the operators $g \circ f$ and $f \circ g$ have the same nonzero eigenvalues, with the same (algebraic) multiplicities. Zero has to be excluded to make this true: for instance, if we take $f$ and $g$ to be the projection and inclusion associated with a direct sum decomposition, then one composite operator has $0$ as an eigenvalue and the other does not.
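The projection/inclusion example can be checked numerically. Here is a minimal pure-Python sketch (the helpers `matmul` and `eigenvalues` are mine, not from the post) verifying that $g \circ f$ and $f \circ g$ share the same nonzero eigenvalues, while only $g \circ f$ picks up the eigenvalue $0$:

```python
# Check the cyclicity of the nonzero spectrum on the simplest example:
# f projects k^2 onto its first coordinate, g includes k back into k^2.

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

def eigenvalues(M):
    """Eigenvalues of a 1x1 or 2x2 matrix via the characteristic polynomial."""
    if len(M) == 1:
        return [M[0][0]]
    (a, b), (c, d) = M
    tr, det = a + d, a * d - b * c
    disc = (tr * tr - 4 * det) ** 0.5   # assumes a real discriminant
    return sorted([(tr - disc) / 2, (tr + disc) / 2])

f = [[1, 0]]            # projection k^2 -> k
g = [[1], [0]]          # inclusion  k  -> k^2

gf = matmul(g, f)       # 2x2 operator on k^2
fg = matmul(f, g)       # 1x1 operator on k

# The nonzero eigenvalues (the invertible spectra) agree,
# but 0 appears only for g∘f.
print(eigenvalues(gf))  # [0.0, 1.0]
print(eigenvalues(fg))  # [1]
```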

I’ll write $\mathrm{Spec}(T)$ for the set-with-multiplicities of eigenvalues of a linear operator $T$, and $\mathrm{Spec}'(T)$ for the set-with-multiplicities of nonzero eigenvalues. Everything we’ll do is on finite-dimensional vector spaces over an algebraically closed field $k$. Thus, $\mathrm{Spec}(T)$ is a finite subset-with-multiplicity of $k$ and $\mathrm{Spec}'(T)$ is a finite subset-with-multiplicity of $k^\times = k \setminus \{0\}$.

I’ll call $\mathrm{Spec}'(T)$ the invertible spectrum of $T$. Why? Because every operator $T$ decomposes uniquely as a direct sum of operators $T_{\mathrm{nil}} \oplus T_{\mathrm{inv}}$, where every eigenvalue of $T_{\mathrm{nil}}$ is $0$ (or equivalently, $T_{\mathrm{nil}}$ is nilpotent) and no eigenvalue of $T_{\mathrm{inv}}$ is $0$ (or equivalently, $T_{\mathrm{inv}}$ is invertible). Then the invertible spectrum of $T$ is the spectrum of its invertible part $T_{\mathrm{inv}}$.

If excluding zero seems forced or unnatural, perhaps it helps to consider the “reciprocal spectrum”

$\mathrm{RecSpec}(T) = \{\lambda \in k : \ker(\lambda T - I) \,\,\text{is nontrivial}\}.$

There’s a canonical bijection between $\mathrm{Spec}'(T)$ and $\mathrm{RecSpec}(T)$ given by $\lambda \leftrightarrow 1/\lambda$. So the invariants $\mathrm{Spec}'$ and $\mathrm{RecSpec}$ carry the same information, and if $\mathrm{RecSpec}$ seems natural to you then $\mathrm{Spec}'$ should too.

Moreover, if you know the space $X$ that your operator $T$ is acting on, then to know the invertible spectrum $\mathrm{Spec}'(T)$ is to know the full spectrum $\mathrm{Spec}(T)$. That’s because the multiplicities of the eigenvalues of $T$ sum to $\dim(X)$, and so the multiplicity of $0$ in $\mathrm{Spec}(T)$ is $\dim(X)$ minus the sum of the multiplicities of the nonzero eigenvalues.

The cyclicity equation

$\mathrm{Spec}'(g \circ f) = \mathrm{Spec}'(f \circ g)$

is a very strong property of $\mathrm{Spec}'$. A second, seemingly more mundane, property is that for any operators $T_1$ and $T_2$ on the same space, and any scalar $\lambda$,

$\mathrm{Spec}'(T_1) = \mathrm{Spec}'(T_2) \implies \mathrm{Spec}'(T_1 + \lambda I) = \mathrm{Spec}'(T_2 + \lambda I).$

In other words, for an operator $T$, if you know $\mathrm{Spec}'(T)$ and you know the space that $T$ acts on, then you know $\mathrm{Spec}'(T + \lambda I)$ for each scalar $\lambda$. Why? Well, we noted above that if you know the invertible spectrum of an operator and you know the space it acts on, then you know the full spectrum. So $\mathrm{Spec}'(T)$ determines $\mathrm{Spec}(T)$, which determines $\mathrm{Spec}(T + \lambda I)$ (as $\mathrm{Spec}(T) + \lambda$), which in turn determines $\mathrm{Spec}'(T + \lambda I)$.
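The shift rule used in this argument, $\mathrm{Spec}(T + \lambda I) = \mathrm{Spec}(T) + \lambda$, is easy to check numerically. A minimal pure-Python sketch (the helper `eig2` and the example matrices are mine, not from the post):

```python
# Verify on a 2x2 example that adding lam*I shifts every eigenvalue by lam.

def eig2(M):
    """Eigenvalues of a 2x2 matrix via trace and determinant
    (assumes a real discriminant)."""
    (a, b), (c, d) = M
    tr, det = a + d, a * d - b * c
    disc = (tr * tr - 4 * det) ** 0.5
    return sorted([(tr - disc) / 2, (tr + disc) / 2])

T = [[2.0, 1.0], [0.0, 5.0]]       # upper triangular: eigenvalues 2 and 5
lam = 3.0
T_shift = [[T[0][0] + lam, T[0][1]],
           [T[1][0], T[1][1] + lam]]

print(eig2(T))        # [2.0, 5.0]
print(eig2(T_shift))  # [5.0, 8.0], each eigenvalue shifted by lam = 3
```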

I claim that the invariant $\mathrm{Spec}'$ is universal with these two properties, in the following sense.

Theorem   Let $\Omega$ be a set and let $\Phi : \{\text{linear operators}\} \to \Omega$ be a function satisfying:

1. $\Phi(g \circ f) = \Phi(f \circ g)$ for all $X \stackrel{f}{\to} Y \stackrel{g}{\to} X$

2. $\Phi(T_1) = \Phi(T_2)$ $\implies$ $\Phi(T_1 + \lambda I) = \Phi(T_2 + \lambda I)$ for all operators $T_1, T_2$ on the same space, and all scalars $\lambda$.

Then $\Phi$ is a specialization of $\mathrm{Spec}'$; that is, $\mathrm{Spec}'(T_1) = \mathrm{Spec}'(T_2) \implies \Phi(T_1) = \Phi(T_2)$ for all $T_1, T_2$. Equivalently, there is a unique function $\bar{\Phi} : \{\text{finite subsets-with-multiplicity of } k^\times\} \to \Omega$ such that $\Phi(T) = \bar{\Phi}(\mathrm{Spec}'(T))$ for all operators $T$.

For example, take $\Phi$ to be trace. Then conditions 1 and 2 are satisfied, so the theorem implies that trace is a specialization of $\mathrm{Spec}'$. That’s clear anyway, since the trace of an operator is the sum-with-multiplicities of the nonzero eigenvalues.

I’ll say just a little about the proof.

The invertible spectrum of a nilpotent operator is empty. Now, the Jordan normal form theorem invites us to pay special attention to the special nilpotent operators $P_n$ on $k^n$ defined as follows: writing $e_1, \ldots, e_n$ for the standard basis of $k^n$, the operator $P_n$ is given by

$e_n \mapsto e_{n-1} \mapsto \cdots \mapsto e_1 \mapsto 0.$

So if the theorem is to be true then, in particular, $\Phi(P_n)$ must be independent of $n$.

But it’s not hard to cook up maps $f : k^n \to k^{n-1}$ and $g : k^{n-1} \to k^n$ such that $g \circ f = P_n$ and $f \circ g = P_{n-1}$. Thus, condition 1 implies that $\Phi(P_n) = \Phi(P_{n-1})$. It follows that $\Phi(P_n)$ is independent of $n$, as claimed.
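For concreteness, here is one such pair of maps at $n = 3$, sketched in pure Python (the matrix representations and helper names are mine): $f$ drops $e_1$ and shifts the remaining basis vectors down, $g$ is the standard inclusion, and together they realize $g \circ f = P_3$ and $f \circ g = P_2$.

```python
# One concrete choice of f: k^3 -> k^2 and g: k^2 -> k^3
# with g∘f = P_3 and f∘g = P_2.

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

# f drops e_1 and shifts: f(e_1) = 0, f(e_j) = e_{j-1} for j >= 2.
f = [[0, 1, 0],
     [0, 0, 1]]
# g is the inclusion k^2 -> k^3: g(e_i) = e_i.
g = [[1, 0],
     [0, 1],
     [0, 0]]

P3 = [[0, 1, 0], [0, 0, 1], [0, 0, 0]]   # e_3 -> e_2 -> e_1 -> 0 on k^3
P2 = [[0, 1], [0, 0]]                     # e_2 -> e_1 -> 0 on k^2

print(matmul(g, f) == P3)  # True
print(matmul(f, g) == P2)  # True
```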

Of course, that doesn’t prove the theorem. But the rest of the proof is straightforward, given the Jordan normal form theorem and condition 2, and in this way, we arrive at the conclusion of the theorem:

$\mathrm{Spec}'(T_1) = \mathrm{Spec}'(T_2) \implies \Phi(T_1) = \Phi(T_2)$

for any operators $T_1$ and $T_2$.

One way to interpret the theorem is as follows. Let $\sim$ be the smallest equivalence relation on $\{\text{linear operators}\}$ such that:

1. $g \circ f \sim f \circ g$

2. $T_1 \sim T_2$ $\implies$ $T_1 + \lambda I \sim T_2 + \lambda I$

(where $f$, $g$, etc. are quantified as in the theorem). Then the natural surjection

$\{\text{linear operators}\} \longrightarrow \{\text{linear operators}\}/\sim$

is isomorphic to

$\mathrm{Spec}' : \{\text{linear operators}\} \longrightarrow \{\text{finite subsets-with-multiplicity of } k^\times\}.$

That is, there is a bijection between $\{\text{linear operators}\}/\sim$ and $\{\text{finite subsets-with-multiplicity of } k^\times\}$ making the evident triangle commute.

So, we’ve characterized the invariant $\mathrm{Spec}'$ in terms of conditions 1 and 2. These conditions seem reasonably natural, and don’t depend on any prior concepts such as “eigenvalue”.

Condition 2 does appear to refer to some special features of the category $\mathbf{FDVect}$ of finite-dimensional vector spaces. But let’s now think about how it could be interpreted in other categories. That is, for a category $\mathcal{E}$ (in place of $\mathbf{FDVect}$) and a function

$\Phi : \{\text{endomorphisms in } \mathcal{E}\} \to \Omega$

into some set $\Omega$, how can we make sense of condition 2?

Write $\mathbf{Endo}(\mathcal{E})$ for the category of endomorphisms in $\mathcal{E}$, with maps preserving those endomorphisms in the sense that the evident square commutes. (It’s the category of functors from the additive monoid $\mathbb{N}$, seen as a one-object category, into $\mathcal{E}$.)

For any scalars $\kappa \neq 0$ and $\lambda$, there’s an automorphism $F_{\kappa, \lambda}$ of the category $\mathbf{Endo}(\mathbf{FDVect})$ given by

$F_{\kappa, \lambda}(T) = \kappa T + \lambda I.$

I guess, but haven’t proved, that these are the only automorphisms of $\mathbf{Endo}(\mathbf{FDVect})$ that leave the underlying vector space unchanged. In what follows, I’ll assume this guess is right.

Now, condition 2 says that $\Phi(T)$ determines $\Phi(T + \lambda I)$ for each $\lambda$, for operators $T$ on a known space. That’s weaker than the statement that $\Phi(T)$ determines $\Phi(\kappa T + \lambda I)$ for each $\kappa \neq 0$ and $\lambda$ — but $\mathrm{Spec}'(T)$ does determine $\mathrm{Spec}'(\kappa T + \lambda I)$. So the theorem remains true if we replace condition 2 with the statement that $\Phi(T)$ determines $\Phi(F(T))$ for each automorphism $F$ of $\mathbf{Endo}(\mathbf{FDVect})$ “over $\mathbf{FDVect}$” (that is, leaving the underlying vector space unchanged).

This suggests the following definition:

Definition   Let $\mathcal{E}$ be a category. Let $\sim$ be the equivalence relation on $\{\text{endomorphisms in } \mathcal{E}\}$ generated by:

1. $g \circ f \sim f \circ g$ for all $X \stackrel{f}{\to} Y \stackrel{g}{\to} X$ in $\mathcal{E}$

2. $T_1 \sim T_2$ $\implies$ $F(T_1) \sim F(T_2)$ for all endomorphisms $T_1, T_2$ on the same object of $\mathcal{E}$ and all automorphisms $F$ of $\mathbf{Endo}(\mathcal{E})$ over $\mathcal{E}$.

Call $\{\text{endomorphisms in } \mathcal{E}\}/\sim$ the set of invertible spectral values of $\mathcal{E}$. Write $\mathrm{Spec}' : \{\text{endomorphisms in } \mathcal{E}\} \to \{\text{invertible spectral values of } \mathcal{E}\}$ for the natural surjection. The invertible spectrum of an endomorphism $T$ in $\mathcal{E}$ is $\mathrm{Spec}'(T)$.

In the case $\mathcal{E} = \mathbf{FDVect}$, the invertible spectral values are the finite subsets-with-multiplicity of $k^\times$, and the invertible spectrum $\mathrm{Spec}'(T)$ is as defined at the start of this post — namely, the set of nonzero eigenvalues with their algebraic multiplicities.

Aside   At least, that’s the case up to isomorphism. You might feel that we’ve lost something, though. After all, the spectrum of a linear operator is a subset-with-multiplicities of the base field, not just an element of some abstract set.

But the theorem does give us some structure on the set of invertible spectral values. This remark of mine below (written after I wrote a first version of this post, but before I wrote the revised version you’re now reading) shows that if $\mathcal\left\{E\right\}$ has finite coproducts then $\sim \sim$ is a congruence for them; that is, if ${S}_{1}\sim {S}_{2}S_1 \sim S_2$ and ${T}_{1}\sim {T}_{2}T_1 \sim T_2$ then ${S}_{1}+{T}_{1}\sim {S}_{2}+{T}_{2}S_1 + T_1 \sim S_2 + T_2$. (Here $++$ is the coproduct in $\mathrm{Endo}\left(ℰ\right)\mathbf\left\{Endo\right\}\left(\mathcal\left\{E\right\}\right)$, which comes from the coproduct in $\mathcal\left\{E\right\}$ in the obvious way.) So the coproduct structure on endomorphisms induces a binary operation $\vee \vee$ on the set of invertible spectral values, satisfying

$\mathrm{Spec}'(S \oplus T) = \mathrm{Spec}'(S) \vee \mathrm{Spec}'(T).$

In the case $\mathcal{E} = \mathbf{FDVect}$, this is the union of finite subsets-with-multiplicity of $k^\times$ (adding multiplicities). And in general, the algebraic properties of coproduct imply that $\vee$ gives the set of invertible spectral values the structure of a commutative monoid.
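The $\mathbf{FDVect}$ case above can be checked numerically. Here is a small sketch (my own illustrative code, not from the post, assuming NumPy; the example matrices and the rounding to integer eigenvalues are my choices, made so the spectra are exact): the invertible spectrum is the multiset of nonzero eigenvalues, the direct sum is a block-diagonal matrix, and $\vee$ is addition of multisets, which `collections.Counter` implements directly.

```python
import numpy as np
from collections import Counter

def invertible_spectrum(T):
    """Spec'(T) in FDVect: the nonzero eigenvalues of T with their
    algebraic multiplicities, as a multiset (Counter)."""
    eigs = np.linalg.eigvals(T)
    # rounding is safe here: the example matrices have exact integer eigenvalues
    return Counter(int(round(e.real)) for e in eigs if abs(e) > 1e-9)

# S has eigenvalues {2, 0}; T has eigenvalue 3 with algebraic multiplicity 2
S = np.array([[2.0, 0.0],
              [0.0, 0.0]])
T = np.array([[3.0, 1.0],
              [0.0, 3.0]])

# the direct sum S ⊕ T, realized as a block-diagonal matrix
ST = np.block([[S, np.zeros((2, 2))],
               [np.zeros((2, 2)), T]])

# Spec'(S ⊕ T) = Spec'(S) ∨ Spec'(T), where ∨ adds multiplicities;
# Counter addition is exactly that commutative-monoid operation
assert invertible_spectrum(ST) == invertible_spectrum(S) + invertible_spectrum(T)
```

Note that the zero eigenvalue of $S$ is discarded on both sides, which is what makes this the *invertible* spectrum rather than the full one.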

Similarly, condition 2 implies that the automorphism group of $\mathbf{Endo}(\mathcal{E})$ acts on the set of invertible spectral values; and since automorphisms preserve coproducts (if they exist), it acts by monoid homomorphisms.

We can now ask what this general definition produces for other categories. I’ve only just begun to think about this, and only in one particular case: when $\mathcal{E}$ is $\mathbf{FinSet}$, the category of finite sets.

I believe the category of endomorphisms in $\mathbf{FinSet}$ has no nontrivial automorphisms over $\mathbf{FinSet}$. After all, given an endomorphism $T$ of a finite set $X$, what natural ways are there of producing another endomorphism of $X$? There are only the powers $T^n$, I think, and the process $T \mapsto T^n$ is only invertible when $n = 1$.

So, condition 2 is trivial. We’re therefore looking for the smallest equivalence relation on $\{\text{endomorphisms of finite sets}\}$ such that $g \circ f \sim f \circ g$ for all maps $f$ and $g$ pointing in opposite directions. I believe, but haven’t proved, that $T_1 \sim T_2$ if and only if $T_1$ and $T_2$ have the same number of cycles

$x_1 \mapsto x_2 \mapsto \cdots \mapsto x_p \mapsto x_1$

of each period $p$. Thus, the invertible spectral values of $\mathbf{FinSet}$ are the finite sets-with-multiplicity of positive integers, and if $T$ is an endomorphism of a finite set then $\mathrm{Spec}'(T)$ is the set-with-multiplicities of periods of cycles of $T$.
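This conjectured description is easy to probe computationally. The following sketch (my own illustrative code, not from the post) computes the multiset of cycle periods of an endomorphism of a finite set, encoding an endomorphism of $\{0, \dots, n-1\}$ as a list, and checks the generating relation $g \circ f \sim f \circ g$ on an example with sets of different sizes.

```python
from collections import Counter

def cycle_spectrum(T):
    """The conjectured Spec'(T) in FinSet: the multiset of periods of the
    cycles of an endomorphism T of {0, ..., n-1}, given as a list with
    T[x] the image of x."""
    n = len(T)
    spec = Counter()
    seen = set()
    for x in range(n):
        # after n iterations, the orbit of x is guaranteed to have reached a cycle
        y = x
        for _ in range(n):
            y = T[y]
        # walk the cycle through y to measure its period
        cycle = [y]
        z = T[y]
        while z != y:
            cycle.append(z)
            z = T[z]
        key = frozenset(cycle)
        if key not in seen:   # count each cycle once
            seen.add(key)
            spec[len(cycle)] += 1
    return spec

# T: 0→1→2→0 (a 3-cycle), 3→3 (a fixed point), 4↔5 (a 2-cycle)
T = [1, 2, 0, 3, 5, 4]
assert cycle_spectrum(T) == Counter({3: 1, 1: 1, 2: 1})

# the generating relation: g∘f and f∘g have the same cycle spectrum,
# even when f and g run between sets of different sizes
f = [1, 0, 1]                       # f: {0,1,2} → {0,1}
g = [2, 0]                          # g: {0,1} → {0,1,2}
gf = [g[f[x]] for x in range(3)]    # endomorphism of {0,1,2}
fg = [f[g[y]] for y in range(2)]    # endomorphism of {0,1}
assert cycle_spectrum(gf) == cycle_spectrum(fg)
```

Elements not on any cycle (the "tails" that eventually feed into a cycle) contribute nothing, which matches the intuition that the invertible spectrum sees only the part of $T$ on which $T$ is invertible.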

All of the above is a record of thoughts I had in spare moments at this workshop I just attended in Louvain-la-Neuve, so I haven’t had much time to reflect. I’ve noted where I’m not sure of the facts, but I’m also not sure of the aesthetics:

In other words, do the theorem and definition above represent the best approach? Here are two quite specific reservations:

1. I’m not altogether satisfied with the fact that it’s the invertible spectrum, rather than the full spectrum, that comes out. Perhaps there’s something to be done with the observation that if you know the invertible spectrum, then knowing the full spectrum is equivalent to knowing (the dimension of) the space that your operator acts on.

2. Condition 2 of the theorem states that $\mathrm{Spec}'(T)$ determines $\mathrm{Spec}'(T + \lambda I)$ for an operator $T$ on a known space (and, of course, for known $\lambda$). That was enough to prove the theorem. But there’s also a much stronger true statement: $\mathrm{Spec}'(T)$ determines $\mathrm{Spec}'(p(T))$ for any polynomial $p$ over $k$ (again, for an operator $T$ on a known space). Any polynomial $p$ gives an endomorphism $T \mapsto p(T)$ of $\mathbf{Endo}(\mathbf{FDVect})$ over $\mathbf{FDVect}$, and I guess these are the only endomorphisms. So, we could generalize condition 2 by using endomorphisms rather than automorphisms of $\mathbf{Endo}(\mathcal{E})$. Should we?
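The stronger statement in reservation 2 is a spectral mapping property: the eigenvalues of $p(T)$, with algebraic multiplicity, are exactly $p(\lambda)$ for $\lambda$ ranging over the eigenvalues of $T$. A quick numerical sanity check (a sketch assuming NumPy; the matrix and the polynomial $p(x) = x^2 + 1$ are my own illustrative choices) uses a triangular $T$, so that all eigenvalues can be read off the diagonal exactly:

```python
import numpy as np
from collections import Counter

# T upper triangular, so its eigenvalues 2, 3, 0 sit on the diagonal
T = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 0.0]])

def p(M):
    """The polynomial p(x) = x^2 + 1, applied to a scalar or a square matrix."""
    if np.isscalar(M):
        return M * M + 1
    return M @ M + np.eye(M.shape[0])

def spectrum(M):
    """Full spectrum: all eigenvalues with algebraic multiplicity."""
    return Counter(int(round(e.real)) for e in np.linalg.eigvals(M))

# spectral mapping: the eigenvalues of p(T) are p(2)=5, p(3)=10, p(0)=1,
# so Spec'(p(T)) is determined by Spec'(T) together with the dimension
assert spectrum(p(T)) == Counter(p(lam) for lam in [2, 3, 0])
```

The dimension matters because $p$ may send $0$ to a nonzero value (here $p(0) = 1$): knowing only the nonzero eigenvalues of $T$ is not enough without knowing how many zero eigenvalues to feed through $p$.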

### Tommaso Dorigo - Scientificblogging

Kick-Off Meeting Of AMVA4NewPhysics
The European network I am coordinating will have its kick-off meeting at CERN on September 16th. This will be a short event where we give a sort of "orientation" to the participants, in terms of who we are, what we have to deliver, and how we plan to do it. It is not a redundant proposition, as the AMVA4NewPhysics programme is quite varied: it includes two big experiments (ATLAS and CMS), plus two Statistics institutes, and several industrial partners; it will organize workshops in statistics, outreach, soft skills, and software tools such as MATLAB, RooStats, and MadGraph; and it will send our 10 students flying around like businessmen.

### ZapperZ - Physics and Physicists

A Physics App To Teach Physics
A group of educational researchers has created an app for iOS, Android, PCs, and Macs that teaches physics to 9th-graders.

The app, Exploring Physics, is meant to take particular physics curriculum already being taught in a number of public school districts, including Columbia's, and make it available digitally. The Exploring Physics curriculum app is designed to replace traditional lecture-based learning with discussions and hands-on experiments.
“The idea in the app is to have students learn by doing stuff,” said Meera Chandrasekhar, the co-creator of the app and a curators' teaching professor in the MU Department of Physics and Astronomy. “Even though it’s a digital app, it actually involves using quite a lot of hands-on materials.”

I haven't looked at it. If any of you have, or better still, are using it, I would very much like to hear your opinion.

Zz.

## September 13, 2015

### John Baez - Azimuth

Biology, Networks and Control Theory

The Institute for Mathematics and its Applications (or IMA, in Minneapolis, Minnesota), is teaming up with the Mathematical Biosciences Institute (or MBI, in Columbus, Ohio). They’re having a big program on control theory and networks:

### At the IMA

Here’s what’s happening at the Institute for Mathematics and its Applications:

Concepts and techniques from control theory are becoming increasingly interdisciplinary. At the same time, trends in modern control theory are influenced and inspired by other disciplines. As a result, the systems and control community is rapidly broadening its scope in a variety of directions. The IMA program is designed to encourage true interdisciplinary research and the cross fertilization of ideas. An important element for success is that ideas flow across disciplines in a timely manner and that the cross-fertilization takes place in unison.

Due to the usefulness of control, talent from control theory is drawn and often migrates to other important areas, such as biology, computer science, and biomedical research, to apply its mathematical tools and expertise. It is vital that while the links are strong, we bring together researchers who have successfully bridged into other disciplines to promote the role of control theory and to focus on the efforts of the controls community. An IMA investment in this area will be a catalyst for many advances and will provide the controls community with a cohesive research agenda.

In all topics of the program the need for research is pressing. For instance, viable implementations of control algorithms for smart grids are an urgent and clearly recognized need with considerable implications for the environment and quality of life. The mathematics of control will undoubtedly influence technology and vice-versa. The urgency for these new technologies suggests that the greatest impact of the program is to have it sooner rather than later.

First trimester (Fall 2015): Networks, whether social, biological, swarms of animals or vehicles, the Internet, etc., constitute an increasingly important subject in science and engineering. Their connectivity and feedback pathways affect robustness and functionality. Such concepts are at the core of a new and rapidly evolving frontier in the theory of dynamical systems and control. Embedded systems and networks are already pervasive in automotive, biological, aerospace, and telecommunications technologies and soon are expected to impact the power infrastructure (smart grids). In this new technological and scientific realm, the modeling and representation of systems, the role of feedback, and the value and cost of information need to be re-evaluated and understood. Traditional thinking that is relevant to a limited number of feedback loops with practically unlimited bandwidth is no longer applicable. Feedback control and stability of network dynamics is a relatively new endeavor. Analysis and control of network dynamics will occupy mostly the first trimester while applications to power networks will be a separate theme during the third trimester. The first trimester will be divided into three workshops on the topics of analysis of network dynamics and regulation, communication and cooperative control over networks, and a separate one on biological systems and networks.

The second trimester (Winter 2016) will have two workshops. The first will be on modeling and estimation (Workshop 4) and the second one on distributed parameter systems and partial differential equations (Workshop 5). The theme of Workshop 4 will be on structure and parsimony in models. The goal is to explore recent relevant theories and techniques that allow sparse representations, rank constrained optimization, and structural constraints in models and control designs. Our intent is to blend a group of researchers in the aforementioned topics with a select group of researchers with interests in a statistical viewpoint. Workshop 5 will focus on distributed systems and related computational issues. One of our aims is to bring control theorists with an interest in optimal control of distributed parameter systems together with mathematicians working on optimal transport theory (in essence an optimal control problem). The subject of optimal transport is rapidly developing with ramifications in probability and statistics (of essence in system modeling and hence of interest to participants in Workshop 4 as well) and in distributed control of PDE’s. Emphasis will also be placed on new tools and new mathematical developments (in PDE’s, computational methods, optimization). The workshops will be closely spaced to facilitate participation in more than one.

The third trimester (Spring 2016) will target applications where the mathematics of systems and control may soon prove to have a timely impact. From the invention of atomic force microscopy at the nanoscale to micro-mirror arrays for a next generation of telescopes, control has played a critical role in sensing and imaging of challenging new realms. At present, thanks to recent technological advances of AFM and optical tweezers, great strides are taking place making it possible to manipulate the biological transport of protein molecules as well as the control of individual atoms. Two intertwined subject areas, quantum and nano control and scientific instrumentation, are seen to blend together (Workshop 6) with partial focus on the role of feedback control and optimal filtering in achieving resolution and performance at such scales. A second theme (Workshop 7) will aim at control issues in distributed hybrid systems, at a macro scale, with a specific focus the “smart grid” and energy applications.

• Workshop 1, Distributed Control and Decision Making Over Networks, 28 September – 2 October 2015.

• Workshop 2, Analysis and Control of Network Dynamics, 19-23 October 2015.

• Workshop 3, Biological Systems and Networks, 11-16 November 2015.

• Workshop 4, Optimization and Parsimonious Modeling, 25-29 January 2016.

• Workshop 5, Computational Methods for Control of Infinite-dimensional Systems, 14-18 March 2016.

• Workshop 6, Quantum and Nano Control, 11-15 April 2016.

• Workshop 7, Control at Large Scales: Energy Markets and Responsive Grids, 9-13 May 2016.

### At the MBI

Here’s what’s going on at the Mathematical Biology Institute:

The MBI network program is part of a yearlong cooperative program with IMA.

Networks and deterministic and stochastic dynamical systems on networks are used as models in many areas of biology. This underscores the importance of developing tools to understand the interplay between network structures and dynamical processes, as well as how network dynamics can be controlled. The dynamics associated with such models are often different from what one might traditionally expect from a large system of equations, and these differences present the opportunity to develop exciting new theories and methods that should facilitate the analysis of specific models. Moreover, a nascent area of research is the dynamics of networks in which the networks themselves change in time, which occurs, for example, in plasticity in neuroscience and in up regulation and down regulation of enzymes in biochemical systems.

There are many areas in biology (including neuroscience, gene networks, and epidemiology) in which network analysis is now standard. Techniques from network science have yielded many biological insights in these fields, and their study has produced many theorems. Moreover, these areas continue to be exciting areas that contain both concrete and general mathematical problems. Workshop 1 explores the mathematics behind the applications in which restrictions on general coupled systems are important. Examples of such restrictions include symmetry, Boolean dynamics, and mass-action kinetics; each of these special properties permits the proof of theorems about dynamics on these special networks.

Workshop 2 focuses on the interplay between stochastic and deterministic behavior in biological networks. An important related problem is to understand how stochasticity affects parameter estimation. Analyzing the relationship between stochastic changes, network structure, and network dynamics poses mathematical questions that are new, difficult, and fascinating.

In recent years, an increasing number of biological systems have been modeled using networks whose structure changes in time or which use multiple kinds of couplings between the same nodes or couplings that are not just pairwise. General theories such as groupoids and hypergraphs have been developed to handle the structure in some of these more general coupled systems, and specific application models have been studied by simulation. Workshop 3 will bring together theorists, modelers, and experimentalists to address the modeling of biological systems using new network structures and the analysis of such structures.

Biological systems use control to achieve desired dynamics and prevent undesirable behaviors. Consequently, the study of network control is important both to reveal naturally evolved control mechanisms that underlie the functioning of biological systems and to develop human-designed control interventions to recover lost function, mitigate failures, or repurpose biological networks. Workshop 4 will address the challenging subjects of control and observability of network dynamics.

#### Events

Workshop 1: Dynamics in Networks with Special Properties, 25-29 January, 2016.

Workshop 2: The Interplay of Stochastic and Deterministic Dynamics in Networks, 22-26 February, 2016.

Workshop 3: Generalized Network Structures and Dynamics, 21-25 March, 2016.

Workshop 4: Control and Observability of Network Dynamics, 11-15 April, 2016.

You can get more schedule information on the posters for these programs.

### Clifford V. Johnson - Asymptotia

Face the Morning…

With the new semester and a return to the routine of campus life comes taking the subway train regularly in the morning again, which I'm pleased to return to. It means odd characters, snippets of all sorts of conversations, and - if I get a seat and a good look - the opportunity to practice a bit of quick sketching of faces. I'm slow and rusty from no recent regular practice, so I imagine that it was mostly luck that helped me get a reasonable likeness [...] Click to continue reading this post

The post Face the Morning… appeared first on Asymptotia.