# Particle Physics Planet

## May 04, 2016

### Peter Coles - In the Dark

Farewell to Whitchurch…

One of the things that happened over the Bank Holiday Weekend was the closure of Whitchurch Hospital on April 30th 2016. I read about this here, from which source I also took the photograph below:

Whitchurch Hospital was built in 1908 and was originally known as Cardiff City Asylum. After over a hundred years of providing care for the mentally ill – including soldiers treated for shell shock in two world wars – the remaining inpatients have now been transferred to a brand new psychiatric care unit at Llandough, and the site will be redeveloped for residential housing.

It was strange reading about the closure of Whitchurch Hospital. Having spent more time there myself than I wish I had, including an extended period on an acute ward, I never thought I would feel nostalgic about the place. Quite apart from the fact that it looked like something out of a Gothic novel, it was in dire need of refurbishment and modernisation. Looking back, however, I have the greatest admiration for the staff who worked there and deep gratitude for the patience and kindness they showed me while I was there.

The first extended period I spent in a psychiatric institution, back in the 1980s, was in Hellingly Hospital in Sussex. That place also had something of the Hammer House of Horror about it. I was completely terrified from the moment I arrived there to the moment I was discharged and don’t feel any nostalgia for it. However, when I recently looked at what it is like now – abandoned and decaying – it gave me more than a shudder.

### Christian P. Robert - xi'an's og

CRiSM workshop on estimating constants [slides]

A short announcement that the slides of almost all talks at the CRiSM workshop on estimating constants last April 20-22 are now available. Enjoy (and discuss)!


### CERN Bulletin

CERN Bulletin Issue No. 17-18/2016
Link to e-Bulletin Issue No. 17-18/2016
Link to all articles in this issue

## May 03, 2016

### Christian P. Robert - xi'an's og

global-local mixtures

Anindya Bhadra, Jyotishka Datta, Nick Polson and Brandon Willard have arXived this morning a short paper on global-local mixtures. Although the definition given in the paper (p.1) is rather unclear, those mixtures are distributions of a sample that are marginals over component-wise (local) and common (global) parameters. The observations of the sample are (marginally) exchangeable if not independent.

“The Cauchy-Schlömilch transformation not only guarantees an ‘astonishingly simple’ normalizing constant for f(·), it also establishes the wide class of unimodal densities as global-local scale mixtures.”

The paper relies on the Cauchy-Schlömilch identity

$\int_0^\infty f(\{x-g(x)\}^2)\text{d}x=\int_0^\infty f(y^2)\text{d}y\qquad \text{with}\quad g(x)=g^{-1}(x)$

a self-inverse function. This generic result proves helpful in deriving demarginalisations of a Gaussian distribution for densities outside the exponential family, like Laplace’s, and for logistic regression. (This is getting very local for me as Cauchy‘s house is up the hill, while Laplace lived two train stations away. Before trains were invented, of course.)

The paper also briefly mentions Étienne Halphen for his introduction of generalised inverse Gaussian distributions. Halphen was one of the rare French Bayesians; he worked for the State Electricity Company (EDF) and briefly with Lucien Le Cam (before the latter left for the USA). Halphen introduced several families of distributions during the early 1940s, including the generalised inverse Gaussian family, which were first presented to the Académie des Sciences by his friend Daniel Dugué, maybe because of the Vichy racial laws…

A second result of interest in the paper is that, given a density g and a transform s on the positive real numbers that is decreasing and self-inverse, the function f(x)=2g(x-s(x)) is again a density, which can itself be represented as a global-local mixture. [I wonder if these representations could be useful in studying the Cauchy conjecture solved last year by Natesh and Xiao-Li.]
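A quick numerical sanity check of the identity (my own sketch, not from the paper): take f(u)=e^{-u} and the self-inverse map g(x)=a/x, for which g(g(x))=x and both sides reduce to the Gaussian integral √π/2.

```python
import math

def simpson(f, a, b, n=100_000):
    # Composite Simpson's rule; n must be even.
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# f(u) = exp(-u) and the self-inverse map g(x) = a/x.
f = lambda u: math.exp(-u)
a_const = 2.0
g = lambda x: a_const / x

# Both sides of the Cauchy-Schlomilch identity; each should equal sqrt(pi)/2.
lhs = simpson(lambda x: f((x - g(x)) ** 2), 1e-6, 60.0)
rhs = simpson(lambda y: f(y ** 2), 0.0, 60.0)
print(lhs, rhs, math.sqrt(math.pi) / 2)
```

The "astonishingly simple" normalising constant quoted above is exactly this: the left-hand integral inherits the known Gaussian value whatever self-inverse g one picks.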


### Symmetrybreaking - Fermilab/SLAC

EXO-200 resumes its underground quest

The upgraded experiment aims to discover if neutrinos are their own antiparticles.

Science is often about serendipity: being open to new results, looking for the unexpected.

The dark side of serendipity is sheer bad luck, which is what put the Enriched Xenon Observatory experiment, or EXO-200, on hiatus for almost two years.

Accidents at the Department of Energy’s underground Waste Isolation Pilot Plant (WIPP) facility near Carlsbad, New Mexico, kept researchers from continuing their search for signs of neutrinos and their antimatter pairs. Designed as storage for nuclear waste, the site suffered both a fire and a release of radiation in early 2014, in a part of the facility distant from where the experiment is housed. No one at the site was injured. Nonetheless, the accidents, and the subsequent repair and remediation efforts, resulted in a nearly two-year suspension of the EXO-200 effort.

Things are looking up now, though: Repairs to the affected area of the site are complete, new safety measures are in place, and scientists are back at work in their separate area of the site, where the experiment is once again collecting data. That’s good news, since EXO-200 is one of a handful of projects looking to answer a fundamental question in particle physics: Are neutrinos and antineutrinos the same thing?

### The neutrino that wasn't there

Each type of particle has its own nemesis: its antimatter partner. Electrons have positrons—which have the same mass but opposite electric charge—quarks have antiquarks and protons have antiprotons. When a particle meets its antimatter version, the result is often mutual annihilation. Neutrinos may also have antimatter counterparts, known as antineutrinos. However, unlike electrons and quarks, neutrinos are electrically neutral, so antineutrinos look a lot like neutrinos in many circumstances.

In fact, one hypothesis is that they are one and the same. To test this, EXO-200 uses 110 kilograms of liquid xenon (of its 200 kg total) as both a particle source and particle detector. The experiment hinges on a process called double beta decay, in which an isotope of xenon undergoes two simultaneous beta decays, spitting out two electrons and two antineutrinos. (“Beta particle” is a nuclear physics term for electrons and positrons.)

If neutrinos and antineutrinos are the same thing, sometimes the result will be neutrinoless double beta decay. In that case, the antineutrino from one decay is absorbed by the second decay, canceling out what would normally be another antineutrino emission. The challenge is to determine if neutrinos are there or not, without being able to detect them directly.

“Neutrinoless double beta decay is kind of a nuclear physics trick to answer a particle physics problem,” says Michelle Dolinski, one of the spokespeople for EXO-200 and a physicist at Drexel University. It’s not an easy experiment to do.

EXO-200 and similar experiments look for indirect signs of neutrinoless double beta decay. Most of the xenon atoms in EXO-200 are a special isotope containing 82 neutrons, four more than the most common version found in nature. The isotope decays by emitting two electrons, changing the atom from xenon into barium. Detectors in the EXO-200 experiment collect the electrons and measure the light produced when the beta particles are stopped in the xenon. These measurements together are what determine whether double beta decay happened, and whether the decay was likely to be neutrinoless.

EXO-200 isn’t the only neutrinoless double beta decay experiment, but many of the others use solid detectors instead of liquid xenon. Dolinski got her start on the CUORE experiment, a large solid-state detector, but later changed directions in her research.

“I joined EXO-200 as a postdoc in 2008 because I thought that the large liquid detectors were a more scalable solution,” she says. “If you want a more sensitive liquid-state experiment, you can build a bigger tank and fill it with more xenon.”

Neutrinoless or not, double beta decay is very rare. A given xenon atom decays randomly, with an average lifetime of a quadrillion times the age of the universe. However, if you use a sufficient number of atoms, a few of them will decay while your experiment is running.

“We need to sample enough nuclei so that you would detect these putative decays before the researcher retires,” says Martin Breidenbach, one of the EXO-200 project leaders and a physicist at the Department of Energy’s SLAC National Accelerator Laboratory.
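The arithmetic behind that remark is easy to sketch with the article's own rough figures (110 kg of xenon-136 and a mean lifetime of "a quadrillion times the age of the universe"; the numbers are illustrative, not the collaboration's official values).

```python
# Back-of-envelope check: how many decays per year from 110 kg of Xe-136?
AVOGADRO = 6.022e23            # atoms per mole
AGE_OF_UNIVERSE_YR = 1.38e10   # years

xenon_mass_kg = 110            # active xenon mass quoted in the article
molar_mass_g_per_mol = 136     # xenon-136

n_atoms = xenon_mass_kg * 1000 / molar_mass_g_per_mol * AVOGADRO

# "an average lifetime of a quadrillion times the age of the universe"
mean_lifetime_yr = 1e15 * AGE_OF_UNIVERSE_YR

# Since the lifetime vastly exceeds a year, expected decays/year = N / tau.
decays_per_year = n_atoms / mean_lifetime_yr
print(f"{n_atoms:.1e} atoms -> about {decays_per_year:.0f} decays per year")
```

Roughly 10^26 atoms yield a few dozen decays a year, which is exactly the regime the quote describes: rare enough to demand patience, common enough to detect before retirement.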

But the experiment is not just detecting neutrinoless events. Heavier neutrinos mean more frequent decays, so measuring the rate reveals the neutrino mass — something very hard to measure otherwise.

Prior runs of EXO-200 and other experiments failed to see neutrinoless double beta decay, so either neutrinos and antineutrinos aren’t the same particle after all, or the neutrino mass is small enough to make decays too rare to be seen during the experiment’s lifetime. The current limit for the neutrino mass is less than 0.38 electronvolts—for comparison, electrons are about 500,000 electronvolts in mass.

SLAC National Accelerator Laboratory's Jon Davis checks the enriched xenon storage bottles before the refilling of the TPC.

Brian Dozier, Los Alamos National Laboratory

### Working in the salt mines

Cindy Lin is a Drexel University graduate student who spends part of her time working on the EXO-200 detector at the mine. Getting to work is fairly involved.

“In the morning we take the cage elevator half a mile down to the mine,” she says. Additionally, she and the other workers at WIPP have to take a 40-hour safety training to ensure their wellbeing, and wear protective gear in addition to normal lab clothes.

“As part of the effort to minimize salt dust particles in our cleanroom, EXO-200 scientists also cover our hair and wear coveralls,” Lin adds.

The sheer amount of earth over the detector shields it from electrons and other charged particles from space, which would make it too hard to spot the signal from double beta decay. WIPP is carved out of a sodium chloride deposit—the same stuff as table salt—that has very little uranium or the other radioactive minerals you find in solid rock caverns. But it has its drawbacks, too.

“Salt is very dynamic: It moves at the level of centimeters a year, so you can't build a nice concrete structure,” says Breidenbach. To compensate, the EXO-200 team has opted for a more modular design.

The inadvertent shutdown provided extra challenges. EXO-200, like most experiments, isn’t well suited for being neglected for more than a few days at a time. However, Lin and other researchers worked hard to get the equipment running for new data this year, and the downtime also allowed researchers to install some upgraded equipment.

The next phase of the experiment, nEXO, is at a conceptual stage based on what has been learned from EXO-200. Experimenters are considering the benefits of moving the project deeper underground, perhaps to a facility like SNOLAB, the laboratory at the site of the Sudbury Neutrino Observatory in Canada. Dolinski is optimistic that if there are any neutrinoless double beta decays to see, nEXO or similar experiments should see them in the next 15 years or so.

Then, maybe we’ll know if neutrinos and antineutrinos are the same and find out more about these weird low-mass particles.

### Axel Maas - Looking Inside the Standard Model

Digging into a particle
This time I would like to write about a new paper which I have just put out. In this paper, I investigate a particular class of particles.

This class of particles is actually quite similar to the Higgs boson: the particles are bosons, and they have the same spin as the Higgs boson, which is zero. Such particles are called scalars. These particular scalars also carry the same type of charge: they interact with the weak interaction.

But there are fundamental differences as well. One is that I have switched off the back reaction between these particles and the weak interactions: The scalars are affected by the weak interaction, but they do not influence the W and Z bosons. I have also switched off the interactions between the scalars. Therefore, no Brout-Englert-Higgs effect occurs. On the other hand, I have looked at them for several different masses. This set of conditions is known as quenched, because all the interactions are shut-off (quenched), and the only feature which remains to be manipulated is the mass.

Why did I do this? There are two reasons.

One is quite technical. Even in this quenched situation, the scalars are affected by quantum corrections, the radiative corrections. Due to them, the mass changes, and so does the way the particles move. These effects are quantitative, and this is precisely the reason to study them in this setting: being quenched, it is much easier to actually determine the quantitative behavior of these effects than when looking at the full theory with back reactions, which is a quite important part of our research. I have learned a lot about these quantitative effects, and am now much more confident in how they behave. This will be very valuable in studies beyond the quenched case. As expected, not many surprises were found. Hence, it was essentially a necessary but unspectacular numerical exercise.

Much more interesting was the second aspect. When quenching, this theory becomes very different from the normal standard model. Without the Brout-Englert-Higgs effect, the theory actually looks very much like the strong interaction. Especially, in this case the scalars would be confined in bound states, just like quarks are in hadrons. How this occurs is not really understood. I wanted to study this using these scalars.

Justifiably, you may ask why I would do this. Why not just have a look at the quarks themselves? There is a conceptual and a technical reason. The conceptual reason is that quarks are fermions. Fermions have non-zero spin, in contrast to scalars. This makes them mathematically more complicated, and these complications mix in with the original question about confinement. For scalars this is disentangled. Hence, by choosing scalars, these complications are avoided. This is also one of the reasons to look at the quenched case: the back-reaction, irrespective of whether it comes from quarks or scalars, obscures the interesting features. Thus, quenching and scalars together isolate the interesting feature.

The other is that the investigations were performed using simulations. Fermions are much, much more expensive than scalars in such simulations in terms of computer time. Hence, with scalars it is possible to do much more at the same expense in computing time. Thus, simplicity and cost made scalars for this purpose attractive.

Did it work? Well, no. At least not in any simple form. The original anticipation was that confinement should be imprinted into how the scalars move. This was not seen. Though the scalars are very peculiar in their properties, they in no obvious way show confinement. It may still be that there is an indirect way. But so far nobody has any idea how. Though disappointing, this is not bad. It only tells us that our simple ideas were wrong. It also requires us to think harder on the problem.

An interesting observation could be made nonetheless. As said above, the scalars were investigated for different masses. These masses are, in a sense, not the observed masses. What they really are is the mass of the particle before quantum effects are taken into account. These quantum effects change the mass. These changes were also measured. Surprisingly, the measured mass was larger than the input mass. The interactions created mass, even if the input mass was zero. The strong interaction is known to do so. However, it was believed that this feature is strongly tied to fermions. For scalars it was not expected to happen, at least not in the observed way. Actually, the mass is even of a similar size as for the quarks. This is surprising. This implies that the kind of interaction is generically introducing a mass scale.

This triggered for me the question whether the mass scale also survives once the back-coupling is switched on again. If it remains even when there is a Brout-Englert-Higgs effect, this could have interesting implications for the mass of the Higgs. But this remains to be seen. It may well be that this does not endure once the theory is no longer quenched.

### Peter Coles - In the Dark

50 Years of the Astronomy Centre at the University of Sussex

It is my pleasure to share here the announcement that there will be a  special celebration for the 50th Anniversary of the Astronomy Centre at the University of Sussex whose first students began their studies here in 1966.

Lord Martin Rees – Astronomer Royal, Fellow of Trinity College, Cambridge, Past President of the Royal Society and Sussex Honorary – will be joining alumni and other former faculty for the celebratory lunch and has kindly agreed to deliver a short speech as part of the event.

Organised by the Astronomy Centre and the Development and Alumni Office, and supported by the School of Mathematical and Physical Sciences, this celebration is open to all former students and their partners. Please make a note of the date and time:

Date: Saturday 15th October 2016
Venue: 3rd Floor, Bramber House, University of Sussex
Time: 12 – 3pm
Cost:  £20 per person, to include lunch and refreshments

You can book online here to secure your place(s).

We are very much looking forward to welcoming you back to campus to share in the celebrations. If you are in touch with other alumni or faculty from Sussex who have connections with the Astronomy Centre, please let them know!

### Emily Lakdawalla - The Planetary Society Blog

What's up in the solar system, May 2016 edition: Good news in cruise for Juno and ExoMars Trace Gas Orbiter
May 2016 will be yet another month of fairly routine operations across the solar system -- if you can ever use the word "routine" to describe autonomous robots exploring other planets. ExoMars' cruise to Mars has started smoothly, and Juno is only two months away from Jupiter orbit insertion. Earthlings will witness a Mercury transit of the Sun on May 9.

### Peter Coles - In the Dark

Afterwards, by Thomas Hardy

When the Present has latched its postern behind my tremulous stay,
And the May month flaps its glad green leaves like wings,
Delicate-filmed as new-spun silk, will the neighbours say,
“He was a man who used to notice such things”?

If it be in the dusk when, like an eyelid’s soundless blink,
The dewfall-hawk comes crossing the shades to alight
Upon the wind-warped upland thorn, a gazer may think,
“To him this must have been a familiar sight.”

If I pass during some nocturnal blackness, mothy and warm,
When the hedgehog travels furtively over the lawn,
One may say, “He strove that such innocent creatures should come to no harm,
But he could do little for them; and now he is gone.”

If, when hearing that I have been stilled at last, they stand at the door,
Watching the full-starred heavens that winter sees,
Will this thought rise on those who will meet my face no more,
“He was one who had an eye for such mysteries”?

And will any say when my bell of quittance is heard in the gloom,
And a crossing breeze cuts a pause in its outrollings,
Till they rise again, as they were a new bell’s boom,
“He hears it not now, but used to notice such things”?

by Thomas Hardy (1840-1928).

### astrobites - astro-ph reader's digest

Diaries of a Dwarf Planet: What are Those Spots on Ceres?

Lauren Sgro, PhD student at University of Georgia

Today we have a guest post from Lauren Sgro, who is a second year PhD student at the University of Georgia. Lauren studies the orbits of young, nearby binary star systems. Despite the all-consuming nature of graduate school, she enjoys doing yoga and occasionally hiking up a mountain.

Title: Sublimation in Bright Spots on (1) Ceres

Authors: A. Nathues, M. Hoffmann, M. Schaefer, L. Le Corre, V. Reddy, T. Platz, E. A. Cloutis, U. Christensen, T. Kneissl, J.-Y. Li, K. Mengel, N. Schmedemann, T. Schaefer, C. T. Russell, D. M. Applin, D. L. Buczkowski, M. R. M. Izawa, H. U. Keller, D. P. O’Brien, C. M. Pieters, C. A. Raymond, J. Ripken, P. M. Schenk, B. E. Schmidt, H. Sierks, M. V. Sykes, G. S. Thangjam, J.-B. Vincent.

First Author’s Institution: Max Planck Institute for Solar System Research, Göttingen, Germany.

Status: Accepted to Nature, 21 September 2015.

The largest body in the asteroid belt, Ceres, has been throwing humans for a loop ever since we first turned our telescopes in its direction. For instance, Herschel’s detection of water vapor on Ceres marked the first indisputable detection of water vapor on any object in the asteroid belt. Not to mention Ceres’ lone mountain, dubbed Ahuna Mons, which has scientists stumped trying to model its formation. Now, as NASA’s space probe Dawn completes its first year of observations, another mystery has materialized. What are those bright spots that pepper this dwarf planet?

The surface of Ceres is fairly dark, akin to freshly laid asphalt. However, a recent study counts 130 bright spots distributed over its surface that seem to be more similar in brightness to new concrete. Bright patches such as this hint that Ceres may have an underlying layer of ice. The relatively young Occator crater houses the brightest spot at its center, an area that is almost four times more reflective than any other feature on the alien world. Occator and the second lightest spot, designated ‘Feature A,’ are both labeled in Figure 1.

Figure 1. Enhanced color map of Ceres from the Dawn Framing Camera. Colors correspond to wavelengths as follows: red = 0.96 μm, green = 0.75 μm, blue = 0.44 μm. This composite image from today’s paper represents bright spots as more blue or white than the surrounding material. (Nathues et al. 2015)

So what is this shiny stuff?

Scientists used spectral analysis and absolute reflectance to narrow down the possibilities. Reflectance is a fractional measure of how much light that is hitting a surface is reflected back, which typically changes depending on the wavelength of light we are talking about. Most of Occator’s bright spots, referred to as secondary spots, have a wavelength of maximum reflectance (wavelength of light that is most reflected by a material) that is shorter than that for the average surface, as shown in Figure 2a. In fact this trend applies to most of the bright features found by this survey. But the central most spot inside Occator seems to be an exception to the rule. This region, the brightest source on Ceres, displays a spectrum (Figure 2b) that is entirely different than both the secondary spots and the average Ceres spectrum. Along with the confirmed presence of water vapor, this data suggests that the curious bright substance is either water ice, some sort of salt, or an iron-poor clay mineral.

A closer look at the brightest spot reveals that the most likely compound responsible for this phenomenon is really just dehydrated Epsom salt. As we move away from the central portion of Occator towards the secondary spots near the edge, the reflective material shows signs of alteration. The secondary spots could simply contain a salt that is less hydrated, indicating that the presence of water wanes moving outward from the center of the crater. Feature A’s spectrum supports this theory by matching well to a more dehydrated salt in Figure 2c.
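The "matching" described here amounts to modelling an observed spectrum as a weighted combination of endmember spectra and solving for the weights, for instance by linear least squares. A minimal sketch with invented stand-in spectra (the shapes below are hypothetical illustrations, not real Dawn data):

```python
import numpy as np

# Synthetic endmember spectra over the Framing Camera's wavelength range:
# a dark, slightly red average surface and a bright, flatter salt.
wavelengths = np.linspace(0.44, 0.96, 50)   # micrometres
ceres_avg = 0.03 + 0.01 * wavelengths
salt = 0.35 - 0.05 * wavelengths

# Pretend the observed bright-spot spectrum is 40% surface + 60% salt.
observed = 0.4 * ceres_avg + 0.6 * salt

# Recover the mixing weights by least squares.
A = np.column_stack([ceres_avg, salt])
weights, *_ = np.linalg.lstsq(A, observed, rcond=None)
print(weights)
```

Swapping in a less-hydrated salt endmember and refitting is, in spirit, how one would test the dehydration trend from Occator's centre out to Feature A.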

Figure 2. a) The spectra of different features on Ceres, not including the central brightest Occator spot. The brighter spots exhibit more reflective spectra, which indicate brighter material than the average surface. b) Spectrum of the central brightest spot in Occator matched to a combination of the Ceres average and dehydrated salt spectra. c) Spectrum of Feature A matched to a combination of the Ceres average and an even less hydrated salt spectrum. Discrepancies are likely due to differences in iron-rich minerals. (Nathues et al. 2015)

More evidence of a watery world

Occator itself is a mini mystery machine (cue Scooby Doo). Dawn caught a glimpse of a low altitude haze loitering inside the crater while looking for plumes on Ceres. In this context, a plume is a column of water vapor that erupts from the surface of a planet and suggests the existence of a subterranean ocean, much like that expected to exist beneath the crust of Europa or Enceladus. The failure to detect plumes signifies that any water hidden within Ceres is probably not liquid, ruling out various sorts of geological activity. However, the detection of haze tells its own story. Scientists speculate that this haze may be comprised of water-ice particles and dust, which is an idea supported by Herschel’s confirmation of a relatively nearby water vapor source. The looming fog exhibited diurnal patterns, being distinct around local noon but disappearing near sundown. The haze forms due to the sun’s heat, which warms the surface and causes sublimation of the present water-ice. Rising vapor brings along dust from the surface to create the observed haze, leaving behind salt deposits that manifest as bright stains. Without the presence of the sun, sublimation ceases and the haze disintegrates at dusk.

This hazy and fleeting feature isn’t found everywhere on Ceres. It is likely that in the case of Occator or even Feature A, where the same process is expected to occur, some underground reserve of ice became exposed and thus subject to sublimation. But how could the fresh ice have been revealed? Perhaps an impact was able to permeate the dark crust of Ceres and uncover the reflective ice and salt. This simple theory fits well with the observation that most of the bright spots are associated with impact craters. The less reflective spots scattered across Ceres can then be thought of as places where sublimation has exhausted the uncovered supply of briny water-ice.

With all this talk of sublimation, it seems that Ceres has more cometary characteristics than expected for an object residing in the asteroid belt. As the largest object that lies between Jupiter and Mars, Ceres gives us clues about the formation of the Solar System. This study suggests that our Solar System may have evolved with less of a distinction between asteroids and comets than previously thought, challenging the long-held belief that asteroids and comets formed entirely separately. Dawn will remain at Ceres for the rest of its mission and indefinitely afterwards, continuing to gather more exciting details about this strange and far-away world.

## May 02, 2016

### Christian P. Robert - xi'an's og

contemporary issues in hypothesis testing

Next Fall, on 15-16 September, I will take part in a CRiSM workshop on hypothesis testing. In our department in Warwick. The registration is now open [until Sept 2] with a moderate registration fee of £40 and a call for posters. Jim Berger and Joris Mulder will both deliver a plenary talk there, while Andrew Gelman will alas give a remote talk from New York. (A terrific poster by the way!)


### Emily Lakdawalla - The Planetary Society Blog

A Moon for Makemake
The solar system beyond Neptune is full of worlds hosting moons. Now we know that the dwarf planet Makemake has one of its very own.

### Peter Coles - In the Dark

Flowers in Bute Park

On  my way back to Brighton after a weekend in Cardiff. I would have lingered for more of this bank holiday Monday but the trains are running to a weird timetable and I didn’t want to get back too late.

Anyway, in lieu of a proper post here’s a picture I took of some of the spring  flowers in Bute Park on Saturday.

### ZapperZ - Physics and Physicists

Walter Kohn
Walter Kohn, who won the Nobel Prize in Chemistry, passed away on April 19.

He is considered the father of Density Functional Theory (DFT). If you have done any computational chemistry or band-structure calculations in solid state physics, you will have seen DFT in one form or another. It has become an indispensable technique for accurately arriving at a theoretical description of many systems.

Zz.

### ZapperZ - Physics and Physicists

ITER Is Getting More Expensive And More Delayed
This news report details the cost overruns and the more-than-a-decade delay of ITER.

ITER chief Bernard Bigot said the experimental fusion reactor under construction in Cadarache, France, would not see the first test of its super-heated plasma before 2025 and its first full-power fusion not before 2035.

The biggest lesson from this is how NOT to run a major international collaboration. Any more large science projects like this, and the politicians and the public will understandably be reluctant to support science projects of that scale. The rest of us will suffer for it.

Zz.

### astrobites - astro-ph reader's digest

Mapping Gravity in Stellar Nurseries
Authors: G-X Li, A. Burkert, T. Megeath, and F. Wyrowski

First Author’s Institution: University Observatory Munich

Paper Status: Submitted to Astronomy and Astrophysics

Stars form when the densest parts of molecular clouds collapse under their own gravity. Molecular clouds are shaped by the interaction of gravity, turbulence, magnetic fields, and feedback from the energy and pressure exerted by stars. Turbulence is observed by measuring the statistical properties of the gas, magnetic fields can be observed using polarized light, and feedback can be identified by searching for protostellar outflows and other features in the cloud. But ultimately, gravitational collapse forms stars. Today’s paper proposes a method to map gravitational acceleration in molecular clouds.

The Method

Gravity comes from mass (thanks Isaac). To map the gravitational acceleration, we need a map of the mass in a molecular cloud. Molecular clouds are made up of gas and dust. The gas can be mapped using emission lines. The dust can be mapped by its effect on background starlight. The more dust in a cloud the dimmer and redder a background star appears. A map of the dust can be turned into the total column density by assuming an average factor between dust and gas mass. The gravitational acceleration is calculated for each pixel of a molecular cloud by adding the contributions from the mass in all other pixels.

The gravitational acceleration is a three-dimensional property, but observations only tell us the column density on the plane of the sky. We don’t know how the mass is distributed along the line of sight. The authors avoid this problem by assuming that all clouds have a uniform thickness of 0.3 pc. How bad is this assumption? It depends on the geometry of the cloud. A spherically symmetric cloud will have equal mass in front and behind the center of the cloud, contributing equal and opposite gravitational acceleration, so the assumption of uniform thickness is not bad. However, a filamentary cloud may be oriented at different angles to our line of sight, and could be much longer than it appears projected on the sky. These clouds appear more compact than they are, causing us to overestimate the acceleration within them. The authors stress that the acceleration maps are only accurate to an order of magnitude, and should be interpreted qualitatively.
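The pixel-by-pixel summation the authors describe can be sketched directly. A toy version (my own illustration: the mass is treated as lying in the map plane, and the paper's uniform 0.3 pc thickness correction is omitted, so this only shows the bookkeeping):

```python
import numpy as np

G = 6.674e-11  # gravitational constant [m^3 kg^-1 s^-2]

def acceleration_map(column_density, pixel_size):
    """Plane-of-sky gravitational acceleration from a column-density map.

    Every pixel's acceleration is the vector sum of 1/r^2 pulls from the
    mass in all other pixels (self-contribution excluded).
    """
    pixel_size = float(pixel_size)
    ny, nx = column_density.shape
    mass = column_density * pixel_size**2          # mass per pixel [kg]
    ys, xs = np.mgrid[0:ny, 0:nx] * pixel_size     # pixel-centre coords [m]

    ax = np.zeros((ny, nx))
    ay = np.zeros((ny, nx))
    for j in range(ny):
        for i in range(nx):
            dx, dy = xs - xs[j, i], ys - ys[j, i]
            r2 = dx**2 + dy**2
            r2[j, i] = np.inf                      # exclude self-pull
            inv_r3 = r2**-1.5
            ax[j, i] = G * np.sum(mass * dx * inv_r3)
            ay[j, i] = G * np.sum(mass * dy * inv_r3)
    return ax, ay

# Toy uniform disk, in the spirit of the paper's Figure 1.
n = 41
yy, xx = np.mgrid[0:n, 0:n] - n // 2
sigma = np.where(xx**2 + yy**2 <= 15**2, 1.0, 0.0)  # column density [kg/m^2]
ax, ay = acceleration_map(sigma, pixel_size=1.0)
mag = np.hypot(ax, ay)
```

Run on the toy disk, the field is essentially zero at the centre by symmetry and largest in a ring near the rim, reproducing the edge effect the paper reports for disk-like clouds.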

Gravity On the Edge

Gravitational acceleration depends strongly on the shape of the cloud. Figure 1 shows the gravitational acceleration map of a simple disk. This cloud experiences the strongest gravitational acceleration in a ring around the edge of the disk. Figure 2 shows the acceleration in a simple filament. The strongest acceleration in this case is at the short ends of the cloud. Because of these edge effects, gravitational acceleration may enhance star formation in the edges of clouds.

Figure 1. Uniform disk model of a molecular cloud. (Left) Cloud density is indicated in blue. Red arrows indicate the direction and magnitude of the gravitational acceleration in the cloud. (Right) The magnitude of acceleration is indicated in red. The acceleration is strongest in a ring around the edge of the disk. This effect can be seen in the Perseus molecular cloud (Figure 4).

Figure 2. Filament model of a molecular cloud. The colors and arrows are the same as in Figure 1. In this case, the acceleration is strongest at the short ends of the filament. This effect can be seen in the Pipe Nebula (Figure 3).

Accelerating Star Formation

The authors make gravitational acceleration maps of several real molecular clouds. The Pipe Nebula, shown in Figure 3, is a filamentary cloud. The acceleration is highest at the ends of the filament, like the model in Figure 2. The authors also show the positions of young stars in the Pipe Nebula. These young stars are mainly found in the areas of highest gravitational acceleration. The Perseus molecular cloud is shown in Figure 4. This cloud resembles the uniform disk model in Figure 1. Again, the gravitational acceleration is highest at the edges of this cloud. Young stars are found preferentially along the edges of the cloud. Both of these examples show that large acceleration may induce star formation. This could be because gas is flowing onto the clouds in these areas, enhancing the density enough to trigger collapse into stars.

Figure 3. The Pipe Nebula. The colors and arrows are the same as in previous figures. The stars show the position of young stars in the cloud. The young stars tend to be found in areas of high acceleration, suggesting that star formation is triggered by gas accreting onto the cloud in these areas. Compare to the filament model in Figure 2.

Figure 4. Perseus molecular cloud. The colors and arrows are the same as previous figures. The red contours show the region where dense cores of gas – the precursors of stars – are found. The star formation is preferentially occurring on the edge of the cloud, where the acceleration is highest. This effect is also seen in the simple uniform disk model in Figure 1.

The effects of gravitational acceleration may enhance star formation in some areas of molecular clouds. But gravity is not the only force at work here. The effects of turbulence and magnetic fields need to be considered to judge the relative importance of gravity. The method presented here is complicated by the uncertainty in the three-dimensional structure of clouds, and by the conversion between dust mass and total mass. By mapping the acceleration in simulated clouds, future studies may be able to quantify these uncertainties. Gravity is ultimately responsible for turning gas into stars, so understanding the role that gravity plays in molecular clouds is critical to a complete picture of star formation.

## May 01, 2016

### Christian P. Robert - xi'an's og

auxiliary likelihood-based approximate Bayesian computation in state-space models

With Gael Martin, Brendan McCabe, David T. Frazier, and Worapree Maneesoonthorn, we arXived (and submitted) a strongly revised version of our earlier paper. We begin by demonstrating that reduction to a set of sufficient statistics of dimension smaller than the sample size is infeasible for most state-space models, hence calling for the use of partial posteriors in such settings. Then we give conditions [like parameter identification] under which ABC methods are Bayesian consistent, when using an auxiliary model to produce summaries, either as MLEs or [more efficiently] scores. Indeed, for the order of accuracy required by the ABC perspective, scores are equivalent to MLEs but are computed much faster than MLEs. Those conditions happen to be weaker than those found in the recent papers of Li and Fearnhead (2016) and Creel et al. (2015); in particular, we make no assumption about the limiting distributions of the summary statistics. We also tackle the dimensionality curse that plagues ABC techniques by numerically exhibiting the improved accuracy brought by matching marginal rather than joint modes, that is, by matching individual parameters via the corresponding scalar score of the integrated auxiliary likelihood rather than matching on the multi-dimensional score statistics. The approach is illustrated on realistically complex models, namely a (latent) Ornstein-Uhlenbeck process, for which a discrete-time linear Gaussian approximation and a Kalman filter auxiliary likelihood are adopted, and a square-root volatility process, with an auxiliary likelihood associated with an Euler discretisation and the augmented unscented Kalman filter. In our experiments, we compared our auxiliary-based technique to the two-step approach of Fearnhead and Prangle (in the Read Paper of 2012), exhibiting improvement for the examples analysed therein.
Somewhat predictably, an important challenge in this approach, one it shares with the related techniques of indirect inference and the efficient method of moments, is the choice of a computationally efficient and accurate auxiliary model. Most of the current ABC literature discusses the role and choice of the summary statistics, which amounts to the same challenge, while missing the regularity provided by the score functions of our auxiliary models.
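As a rough illustration of the score-matching idea (a toy normal model of our own devising, not the code or the models from the paper): the auxiliary MLE is fitted once, on the observed data only, and each simulated dataset is then only pushed through the auxiliary score, which is cheap.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: data are iid N(theta, 1) and the auxiliary model is
# N(mu, 1), so the auxiliary score is available in closed form.

def simulate(theta, n):
    return rng.normal(theta, 1.0, n)

def aux_score(x, mu):
    # derivative in mu of the N(mu, 1) log-likelihood
    return np.sum(x - mu)

n = 200
x_obs = simulate(1.5, n)      # pretend theta = 1.5 is unknown
mu_hat = x_obs.mean()         # auxiliary MLE on the observed data

# ABC step: keep prior draws whose simulated data give a score near zero
prior_draws = rng.uniform(-5.0, 5.0, 20000)
scores = np.array([aux_score(simulate(t, n), mu_hat) for t in prior_draws])
eps = np.quantile(np.abs(scores), 0.01)   # keep the closest 1%
posterior = prior_draws[np.abs(scores) <= eps]

print(posterior.mean())       # close to the true theta = 1.5
```

In this toy case the score is scalar, so the marginal-versus-joint matching issue does not arise; the sketch only shows why scores are cheaper than re-fitting the auxiliary MLE on every simulated dataset.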

Filed under: Books, pictures, Statistics, University life Tagged: ABC, auxiliary model, consistency, Kalman filter, Melbourne, Monash University, score function, summary statistics

### Peter Coles - In the Dark

R.I.P. Harry Kroto (1939-2016)

I heard earlier this afternoon of the death at the age of 76 of the distinguished chemist Sir Harry Kroto.

Along with Robert Curl and Richard Smalley,  Harry Kroto was awarded the Nobel Prize for Chemistry in 1996 for the discovery of the C60 structure that became known as Buckminsterfullerene (or the “Buckyball” for short).

Harry had a long association with the University of Sussex and was a regular visitor to the Falmer campus even after he moved to the USA.

I remember first meeting him in 1988 when, as a new postdoc fresh out of my PhD, I had just taken over organising the Friday seminars for the Astronomy Centre. One speaker called off his talk just an hour before it was due to start, so I asked if anyone could suggest someone on campus who might stand in. Someone suggested Harry, whose office was nearby in the School of Molecular Sciences (now the Chichester Building). I was very nervous as I knocked on his door – Harry was already famous then – and held out very little hope that such a busy man would agree to give a talk with less than an hour’s notice. In fact he accepted immediately and with good grace gave a fine impromptu talk about the possibility that C60 might be a major component of interstellar dust. If only all distinguished people were so approachable and helpful!

I met him on campus again a couple of years ago, when we talked about some work he had been doing on a range of things to do with widening participation in STEM subjects. I remember I had booked an hour in my calendar but we talked for at least three. He was brimming with ideas and energy then. It’s hard to believe he is no more.

Harry Kroto was a man of very strong views  and he was not shy in expressing them. He cared passionately about science and was a powerful advocate for it. He will be greatly missed.

Rest in peace, Harry Kroto (1939-2016)

## April 30, 2016

### Clifford V. Johnson - Asymptotia

Wild Thing

The wildflower patch continues to produce surprises. You never know exactly what's going to come up, and in what quantities. I've been fascinated by this particular flower, for example, which seems to be constructed out of several smaller flowers! What a wonder, and of course, there's just one example of its parent plant in the entire patch, so once it is gone, it's gone.

The post Wild Thing appeared first on Asymptotia.

### The n-Category Cafe

Relative Endomorphisms

Let $(M, \otimes)$ be a monoidal category and let $C$ be a left module category over $M$, with action map also denoted by $\otimes$. If $m \in M$ is a monoid and $c \in C$ is an object, then we can talk about an action of $m$ on $c$: it’s just a map

$\alpha : m \otimes c \to c$

satisfying the usual associativity and unit axioms. (The fact that all we need is an action of $M$ on $C$ to define an action of $m$ on $c$ is a cute instance of the microcosm principle.)
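Spelled out (suppressing the coherence isomorphisms of the action), if $\mu : m \otimes m \to m$ and $\eta : 1 \to m$ are the multiplication and unit of the monoid, the two axioms read

$\alpha \circ (\mu \otimes \mathrm{id}_c) = \alpha \circ (\mathrm{id}_m \otimes \alpha), \qquad \alpha \circ (\eta \otimes \mathrm{id}_c) = \mathrm{id}_c$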

This is a very general definition of monoid acting on an object which includes, as special cases (at least if enough colimits exist),

• actions of monoids in $\text{Set}$ on objects in ordinary categories,
• actions of monoids in $\text{Vect}$ (that is, algebras) on objects in $\text{Vect}$-enriched categories,
• actions of monads (letting $M = \text{End}(C)$), and
• actions of operads (letting $C$ be a symmetric monoidal category and $M$ be the monoidal category of symmetric sequences under the composition product).

This definition can be used, among other things, to straightforwardly motivate the definition of a monad (as I did here): actions of a monoidal category $M$ on a category $C$ correspond to monoidal functors $M \to \text{End}(C)$, so every action in the above sense is equivalent to an action of a monad, namely the image of the monoid $m$ under such a monoidal functor. In other words, monads on $C$ are the universal monoids which act on objects $c \in C$ in the above sense.

Corresponding to this notion of action is a notion of endomorphism object. Say that the relative endomorphism object $\text{End}_M(c)$, if it exists, is the universal monoid in $M$ acting on $c$: that is, it’s a monoid acting on $c$, and the action of any other monoid on $c$ uniquely factors through it.

This is again a very general definition which includes, as special cases (again if enough colimits exist),

• the endomorphism monoid in $\text{Set}$ of an object in an ordinary category,
• the endomorphism algebra of an object in a $\text{Vect}$-enriched category,
• the endomorphism monad of an object in an ordinary category, and
• the endomorphism operad of an object in a symmetric monoidal category.

If the action of $M$ on $C$ has a compatible enrichment $[-, -] : C^{\mathrm{op}} \times C \to M$ in the sense that we have natural isomorphisms

$\text{Hom}_C(m \otimes c_1, c_2) \cong \text{Hom}_M(m, [c_1, c_2])$

then $\text{End}_M(c)$ is just the endomorphism monoid $[c, c]$, and in fact the above discussion could have been done in the context of enrichments only, but in the examples I have in mind the actions are easier to notice than the enrichments. (Has anyone ever told you that symmetric monoidal categories are canonically enriched over symmetric sequences? Nobody told me, anyway.)

Here’s another example where the action is easier to notice than the enrichment. If $D, C$ are two categories, then the monoidal category $\text{End}(C) = [C, C]$ has a natural left action on the category $[D, C]$ of functors $D \to C$. If $G : D \to C$ is a functor, then the relative endomorphism object $\text{End}_{\text{End}(C)}(G)$, if it exists, turns out to be the codensity monad of $G$!

This actually follows from the construction of an enrichment: the category $[D, C]$ of functors $D \to C$ is (if enough limits exist) enriched over $\text{End}(C)$ in a way compatible with the natural left action. This enrichment takes the following form (by a straightforward verification of universal properties): if $G_1, G_2 \in [D, C]$ are two functors $D \to C$, then their hom object

$[G_1, G_2] = \text{Ran}_{G_1}(G_2) \in \text{End}(C)$

is, if it exists, the right Kan extension of $G_2$ along $G_1$. When $G_1 = G_2$ this recovers the definition of the codensity monad of a functor $G : D \to C$ as the right Kan extension of $G$ along itself, and neatly explains why it’s a monad: it’s an endomorphism object.

Question: Has anyone seen this definition of relative endomorphisms before?

It seems pretty natural, but I tried guessing what it would be called on the nLab and failed. It also seems that “relative endomorphisms” is used to mean something else in operad theory.

## April 29, 2016

### ZapperZ - Physics and Physicists

LHC Knocked Out By A Weasel?
You can't make these things up!

CERN's Large Hadron Collider, the world's biggest particle accelerator located near Geneva, Switzerland, lost power Friday. Engineers who were investigating the outage made a grisly discovery -- the charred remains of a weasel, CERN spokesman Arnaud Marsollier told CNN.
If you are of the weasel kind, be forewarned! Don't mess around at CERN!

Zz.

### astrobites - astro-ph reader's digest

A PeVatron at the Galactic Center

Title: Acceleration of petaelectronvolt protons in the Galactic Centre
Authors: The HESS Collaboration
Status: Published in Nature

In the past, we’ve talked on this website a bit about the mysteries of galactic cosmic rays, or charged particles from outer space that are mainly made up of protons.  These particles can reach PeV energies and beyond, but the shocks of supernova remnants (the origin of most galactic cosmic rays) cannot accelerate particles to these high energies.  The HESS Collaboration analyzed 10 years of gamma-ray observations and have seen evidence of a PeVatron (PeV accelerator) in the center of our galaxy.  If confirmed, this would be the first PeVatron in our galaxy.

As mentioned above, the HESS Collaboration used observations of gamma rays from their array of telescopes to do this analysis.  Gamma rays are often used to probe the nature of cosmic ray accelerators; this is because they are associated with these sites, but unlike the charged cosmic rays, they are electrically neutral and therefore don’t bend in magnetic fields on their way to Earth (i.e. they point back to the source).

Figure 1: HESS’s very high energy gamma ray map of the Galactic Center region. The color scale shows the number of gamma rays per pixel, while the white contour lines illustrate the distribution of molecular gas. Their correlation points to a hadronic origin of gamma ray emission.  The right panel is simply a zoomed view of the inner portion.  (Source: Figure 1 from the paper)

Figure 2: The red shaded area shows the 1 sigma confidence band of the measured gamma-ray spectrum of the diffuse emission in the region of interest. The red lines show different models, assuming that the gamma rays are coming from neutral pion decay after the pions have been produced in proton-proton interactions. Note the lack of cutoff at high energies, indicating that the parent protons have energies in the PeV range.
The blue data points refer to another gamma-ray source in the region, HESS J1745-290. The link between these two objects is currently unknown.

The area they studied is known as the Central Molecular Zone, which surrounds the Galactic Center.  They found that the distribution of gamma rays mirrored the distribution of the gas-rich areas, which points to a hadronic origin (i.e. coming from proton interactions) of the gamma rays.  From the gamma-ray luminosity and the amount of gas in the area, it can be shown that there must be at least one cosmic ray accelerator in the region.  Additionally, the energy spectrum of the diffuse gamma-ray emission from the region around Sagittarius A* (the location of the black hole at the Galactic Center) does not have an observed cutoff or a break in the TeV energy range.  This means that the parent proton population that created these gamma rays should have energies of ~1 PeV (the PeVatron).  Just to refresh everyone’s memory, a TeV is 10^12 electronvolts, while a PeV is 10^15 electronvolts.  A few TeV is about the limit of what can be produced in particle laboratories on Earth (the LHC reaches 14 TeV); a PeV is a thousand times a TeV, roughly 70 times the LHC's reach!
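To keep the energy scales straight, a quick back-of-the-envelope check (plain unit arithmetic, not part of the paper's analysis):

```python
TeV = 1e12   # electronvolts
PeV = 1e15   # electronvolts

lhc = 14 * TeV        # LHC collision energy
pevatron = 1 * PeV    # inferred parent-proton energy at the Galactic Center

print(PeV / TeV)       # 1000.0: a PeV is a thousand TeV
print(pevatron / lhc)  # ~71: roughly 70 times the LHC's reach
```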

What is the source of these protons?  The typical explanation for Galactic cosmic rays, supernova remnants, is unlikely here: in order to match the data and inject enough cosmic rays into the Central Molecular Zone, the authors estimate that more than 10 supernova events would be needed over the past 1000 years, an improbably high rate.

Instead, they hypothesize that Sgr A* is the source of these protons.  They could either be accelerated in the accretion flow immediately outside the black hole, or further away, where the outflow terminates.  They do note that the required acceleration rate is a few orders of magnitude above the current luminosity, but the black hole may have been much more active in the past, leading to higher production rates of protons and other nuclei.  If this is true, it could solve one of the most puzzling mysteries in cosmic ray physics: the origin of the highest-energy galactic cosmic rays.

### Emily Lakdawalla - The Planetary Society Blog

Future High-Resolution Imaging of Mars: Super-Res to the Rescue?
HiRISE Principal Investigator Alfred McEwen explains an imaging technique known as Super-Resolution Restoration (SRR), and how it could come in handy for high-resolution imaging of the Red Planet.

## April 28, 2016

### Emily Lakdawalla - The Planetary Society Blog

What NASA Can Learn from SpaceX
SpaceX's announcement that it will send Dragon capsules to Mars demonstrates the advantage of having a clear plan to explore the red planet. NASA should take note.

### Emily Lakdawalla - The Planetary Society Blog

The phases of the far side of the Moon
Serbian artist Ivica Stošić used Clementine and Kaguya data to give a glimpse of the phases of the lunar farside.

### Symmetrybreaking - Fermilab/SLAC

A GUT feeling about physics

Scientists want to connect the fundamental forces of nature in one Grand Unified Theory.

The 1970s were a heady time in particle physics. New accelerators in the United States and Europe turned up unexpected particles that theorists tried to explain, and theorists in turn predicted new particles for experiments to hunt. The result was the Standard Model of particles and interactions, a theory that is essentially a catalog of the fundamental bits of matter and the forces governing them.

While that Standard Model is a very good description of the subatomic world, some important aspects—such as particle masses—come out of experiments rather than theory.

“If you write down the Standard Model, quite frankly it's a mess,” says John Ellis, a particle physicist at King’s College London. “You've got a whole bunch of parameters, and they all look arbitrary. You can't convince me that's the final theory!”

The hunt was on to create a grand unified theory, or GUT, that would elegantly explain how the universe works by linking three of the four known forces together. Physicists first linked the electromagnetic force, which dictates the structure of atoms and the behavior of light, and the weak nuclear force, which underlies how particles decay.

But they didn’t want to stop there. Scientists began working to link this electroweak theory with the strong force, which binds quarks together into things like the protons and neutrons in our atoms. (The fourth force that we know, gravity, doesn’t have a complete working quantum theory, so it's relegated to the realm of Theories of Everything, or ToEs.)

Linking the different forces into a single theory isn’t easy, since each behaves a different way. Electromagnetism is long-ranged, the weak force is short-ranged, and the strong force is weak in high-energy environments such as the early universe and strong where energy is low. To unify these three forces, scientists have to explain how they can be aspects of a single thing and yet manifest in radically different ways in the real world.

The electroweak theory unified the electromagnetic and weak forces by proposing they were aspects of a single interaction that is present only at very high energies, as in a particle accelerator or the very early universe. Above a certain threshold known as the electroweak scale, there is no difference between the two forces, but that unity is broken when the energy drops below a certain point.

The GUTs developed in the mid-1970s to incorporate the strong force predicted new particles, just as the electroweak theory had before. In fact, the very first GUT showed a relationship between particle masses that allowed physicists to make predictions about the second-heaviest quark before it was detected experimentally.

“We calculated the mass of the bottom quark before it was discovered,” says Mary Gaillard, a particle physicist at University of California, Berkeley. Scientists at Fermilab would go on to find the particle in 1977.

GUTs also predicted that protons should decay into lighter particles. There was just one problem: Experiments didn’t see that decay.

Artwork by Sandbox Studio, Chicago

### The problem with protons

GUTs predicted that all quarks could potentially change into lighter particles, including the quarks making up protons. In fact, GUTs said that protons would be unstable over a period much longer than the lifetime of the universe. To maximize the chances of seeing that rare proton decay, physicists needed to build detectors with a lot of atoms.

However, the first Kamiokande experiment in Japan didn't detect any proton decays, which meant a proton lifetime longer than that predicted by the simplest GUT theory. More complicated GUTs emerged with longer predicted proton lifetimes – and more complicated interactions and additional particles.

Most modern GUTs mix in supersymmetry (SUSY), a way of thinking about the structure of space-time that has profound implications for particle physics. SUSY uses extra interactions to adjust the strength of the three forces in the Standard Model so that they meet at a very high energy known as the GUT scale.

“Supersymmetry gives more particles that are involved via virtual quantum effects in the decay of the proton,” says JoAnne Hewett, a physicist at the Department of Energy’s SLAC National Accelerator Laboratory. That extends the predicted lifetime of the proton beyond what previous experiments were able to test. Yet SUSY-based GUTs also have some problems.

“They're kinda messy,” Gaillard says. Particularly, these theories predict more Higgs-like particles and different ways the Higgs boson from the Standard Model should behave. For that reason, Gaillard and other physicists are less enamored of GUTs than they were in the 1970s and '80s. To make matters worse, no supersymmetric particles have been found yet. But the hunt is still on.

“The basic philosophical impulse for grand unification is still there, just as important as ever,” Ellis says. “I still love SUSY, and I also am enamored of GUTs.”

Hewett agrees that GUTs aren't dead yet.

“I firmly believe that an observation of proton decay would affect how every person would think about the world,” she says. “Everybody can understand that we're made out of protons and ‘Oh wow! They decay.’”

Upcoming experiments like the proposed Hyper-K in Japan and the Deep Underground Neutrino Experiment in the United States will probe proton decay to greater precision than ever. Seeing a proton decay will tell us something about the unification of the forces of nature and whether we ultimately can trust our GUTs.

### astrobites - astro-ph reader's digest

Studying the First Stars with Gravitational Waves
Authors: Tilman Hartwig et al.

First Author’s Institution: Sorbonne University

Status: Submitted to MNRAS Letters

The detection of gravitational waves (neatly summarized in this excellent astrobites post) provided astronomers with an entirely new way of understanding the cosmos. With the notable exception of a handful of neutrinos from SN 1987A, all of our information about the universe outside of our galaxy had previously come in the form of little packets of electromagnetic radiation known as photons. Gravitational waves, on the other hand, are ripples in spacetime–a totally different phenomenon. They cause the distances between objects in spacetime to change as they pass through.

The Advanced Laser Interferometer Gravitational-Wave Observatory’s (aLIGO) detection of GW150914 (so named because it was detected on September 14, 2015) was the result of the inspiral and merger of two stellar black holes (BHs). The larger of the two black holes was about 36 solar masses and the smaller one about 29 solar masses. This first detection of gravitational waves has already provided enough information for scientists to start inferring rates of black hole-black hole (BH-BH) mergers.

Figure 1 (Figure 2 from the paper) shows the intrinsic merger rate densities for their models (this includes NS-NS, NS-BH, and BH-BH mergers) compared to the literature.  The red shaded area shows the variance for the fiducial IMF. The model from Kinugawa et al. (2014) is reproduced with both the original star formation rate (SFR) and with the SFR rescaled to match this paper’s. The original SFR from the Kinugawa et al. paper (which does not agree with the 2015 Planck results) produces a much higher merger rate density. The Dominik et al. (2013) curve shows the merger rate for later generations of stars (Pop I/II) that have a tenth of the solar metallicity. GW150914’s point shows the estimated merger rate density inferred from the detection, with error bars. The merger rate of primordial black holes in the local universe is much lower than predicted in previous studies, meaning that most of the mergers we detect should come from stellar remnants of Pop I/II stars.

The authors of today’s paper estimate the probability of detecting gravitational waves from the very first generation of stars, known as Population III (Pop III) stars, with aLIGO. Pop III stars, which are made from the pristine (read: zero-metallicity) gas leftover from big-bang nucleosynthesis, are thought to be more massive than later generations of stars. They are relatively rare in the local universe, but should produce strong signals in gravitational waves thanks to their high masses. Detection rates for their BH-BH mergers should therefore be high.

The authors begin by modeling the formation of dark matter halos from z=50 to z=6 (here z means redshift and translates to about 0.2-1.0 billion years after the Big Bang), when they expect that no more Pop III stars will form. The authors populate their simulated halos with stars of various masses determined by a logarithmically-flat IMF (initial mass function). Since the exact IMF of Pop III stars is unknown, they consider three cases: their fiducial IMF of 3-300 solar masses, a low-mass case where the stars range from 1-100 solar masses, and a high-mass case of 10-1000 solar masses. The total stellar mass in each halo is then determined by the star-formation efficiency, which they make sure is consistent with the results from Planck.

Figure 2 (Figure 3 from the paper) shows the expected number of BH-BH merger detections as a function of the total mass of the binary system. The top shows the current sensitivity of aLIGO and the bottom shows the final sensitivity. The gray bar indicates where GW150914 falls. The de Mink & Mandel (2016) histogram indicates the estimated rate of Pop I/II star detections. As we can see, mergers from remnants of Pop I/II stars are expected to dominate in the mass range between 30 and 100 solar masses. However, if aLIGO detects enough events that result from binary systems with approximately 300 solar masses and above (which must come from Pop III stars), it will be able to discriminate between the three Pop III IMFs discussed in this paper.

Like the tango, it takes two to make gravitational waves, so the authors use previous studies of Pop III stars to estimate the fraction of these stars in binary systems. The probability for any single star to be in a binary is 50% in their model, but they note that it is easy to rescale their results for different binary fractions. The authors also take into account the fact that stars in binary systems usually have similar masses, and that stars in the 140-260 solar mass range are expected to blow apart completely as pair-instability supernovae, leaving no compact remnants (i.e. black holes) behind. They then calculate the signal-to-noise ratio of a single aLIGO detector for each of their mergers to determine whether it counts as a detection.
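The binary bookkeeping just described can be sketched as a toy Monte Carlo. The log-flat IMF, the 50% binary fraction, and the 140-260 solar-mass pair-instability gap come from the paper; the 25 solar-mass black-hole threshold and the uniform mass-ratio draw are placeholders of ours, not the authors' stellar-evolution model.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_imf(n, m_lo=3.0, m_hi=300.0):
    """Log-flat IMF (the fiducial 3-300 Msun case): uniform in log(m)."""
    return np.exp(rng.uniform(np.log(m_lo), np.log(m_hi), n))

def leaves_black_hole(m):
    # Placeholder threshold, minus the pair-instability gap (no remnant)
    return (m > 25.0) & ~((m >= 140.0) & (m <= 260.0))

n_stars = 100_000
primaries = sample_imf(n_stars)
is_binary = rng.random(n_stars) < 0.5                      # 50% binary fraction
secondaries = primaries * rng.uniform(0.5, 1.0, n_stars)   # similar-mass companions

both_bh = is_binary & leaves_black_hole(primaries) & leaves_black_hole(secondaries)
print(f"fraction of sampled systems forming BH-BH binaries: {both_bh.mean():.3f}")
```

Counting how this fraction shifts between the 1-100, 3-300, and 10-1000 solar-mass IMF cases is the spirit of the comparison in the paper.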

The result? Compared to previous studies, which do not match the results of the Planck paper quite as well, their star formation rates are lower, as are their merger rate densities. They also find that the number of mergers they expect is largely dominated by the maximum mass in each distribution, since less massive stars produce neutron stars instead of black holes, which merge at a lower rate. On the other hand, the merger rate density decreases as the masses of the stars get higher, since fewer high-mass binary pairs are formed in any given location. Figure 1 shows the merger rate densities that they obtained for each of their mass distributions.

Even with all of this information, how would we know that what we have detected is of primordial origin (and not something that formed more recently)? Since Pop III stars are more massive than their later counterparts, any remnants with masses greater than 42 solar masses must have come from the earliest stars (even stars with a tenth of the metallicity of our Sun would not leave remnants more massive than this). Thus the contribution to the BH-BH merger rate from Pop III stars varies as a function of the mass of the mergers, which is demonstrated in Figure 2. Though the primordial BH-BH merger rate that we could detect would be small, the massiveness of the black holes will make their gravitational wave signal stronger and thus easier to detect. This means that given our detection rate of BH-BH mergers, we might be able to rule out or put bounds on our Pop III mass distribution.

They conclude that GW150914 had about a 1% probability of being from primordial origin and estimate that aLIGO will probably be able to detect about 5 primordial BH-BH mergers a year. By looking at the rate of BH-BH mergers that we actually do detect, we’ll also be able to learn about the stars that these remnants originated from.

### Tommaso Dorigo - Scientificblogging

The Number Of My Publications Has Four Digits
While tediously compiling a list of scientific publications that chance to have my name in the authors list (I have to apply for a career advancement and apparently the committee will scrutinize the hundred-page-long lists of that kind that all candidates submit), I discovered today that I just passed the mark of 1000 published articles. This happened on February 18th 2016 with the appearance in print of a paper on dijet resonance searches by CMS. Yay! And 7 more have been added to the list since then.

## April 27, 2016

### Tommaso Dorigo - Scientificblogging

35% Off World Scientific Titles
I think this might be interesting to the few of you left out there who still read paper books (I do too). World Scientific offers, until April 29th, a 35% reduction in the cover price of its books, if you purchase two of them.
This might be a good time to pre-order my book, "Anomaly!", if you have not done so yet. Plus maybe get one of the many other excellent titles in the WS collection.

You can see the offer at the site of my book (that's where I got the info from!).

### astrobites - astro-ph reader's digest

The Geology of Pluto and Charon

Title:  The Geology of Pluto and Charon Through the Eyes of New Horizons
Authors: Jeffrey M. Moore, William B. McKinnon, John R. Spencer, et al., including the New Horizons team
First Author’s Institution: NASA Ames Research Center
Status: Published in Science

The New Horizons mission to Pluto and Charon, launched when Pluto was still officially a planet, gave us the best images of the dwarf planet and its largest moon that we might ever see in our lifetime.  Less than a year after its July 14, 2015 fly-by, the New Horizons team have published a preliminary geologic examination of the two bodies.  Predicted to have a rather boring landscape unchanged for billions of years, the surfaces of Pluto and Charon have been discovered to be surprisingly complex and, for Pluto, still geologically active!

Pluto:

Figure 1: An annotated map of Pluto. Credit.

The image of Pluto’s “heart” proved that it still reciprocated our love for it despite its demotion from planetary status.  Officially named the “Tombaugh Regio” for Pluto’s discoverer, Clyde Tombaugh, this region actually comprises multiple regions that just have similar coloration and albedo (reflectivity).  The left half of the heart is known as the Sputnik Planum (SP), a region the size of Texas and Oklahoma (or, alternatively, France and the United Kingdom).  It’s a giant block of solid nitrogen, carbon monoxide, and methane ice that sits about 3-4 km below the surrounding highlands.  Amazingly, despite the fact that Pluto was thought to be geologically dead, the surface of the SP is likely less than 10 million years old, based on the fact that no craters have been discovered in the region (see Figure 2).  The appearance of cells in the northern half of the SP implies the existence of solid-state convection that may be responsible for resurfacing the SP.

Figure 2: The distribution of craters on the surface of Pluto. The Sputnik Planum (left half of the heart) contains no craters, implying a surface age of < 10 million years. Credit.

The western edge of the SP is marked by multiple, heterogeneous mountain ranges.  These mountain ranges have been spectroscopically confirmed to be composed of water ice.  These mountains appear to be chunks of a pre-existing surface that have been fractured, moved, and rotated, giving them high elevations and steep slopes.  The authors do not know why the mountain ranges exist only on the western edge of the SP.

The portion of the Tombaugh Regio east of the SP (the right half of the heart) comprises flat plains and pitted uplands.  These pits are typically a few kilometers across and roughly 1 kilometer deep.  At the border between the pitted uplands and the SP, potential glaciers could be breaking off the pitted uplands and flowing into the SP.  To the north and northwest of the SP, there is a variety of terrain, including a washboard (i.e., ridged) terrain, likely formed via erosion, and terrain dissected by networks of deep, wide valleys.  To the southwest of the SP are a pair of huge mounds, only one of which, Wright Mons, was imaged with proper lighting.  Wright Mons is 3-4 km high, 150 km across, and has a central hole at least 5 km deep.  The other mound, Piccard Mons, appears to be even larger. These two mounds are possible cryovolcanoes (water-ice volcanoes).

The surface of Pluto allows its history to be partially reconstructed.  The distribution of crater sizes implies that most of the surface can be dated back to the Late Heavy Bombardment (LHB) about 3.9 billion years ago.  A portion of the Cthulhu Regio, which wins the prize for most creative region name, may even pre-date the LHB.  The SP, on the other hand, is < 10 million years old, although it sits in a much older basin. Its half-ring of mountains to the west suggests that the entire SP might be a giant impact crater.

Charon:

Figure 3:  An enhanced color map of Charon.  Credit.

Whereas Pluto’s surface is dominated by nitrogen, carbon monoxide, and methane ices, Charon’s surface is mostly water ice.  Charon is rent in half by a pair of ancient, gigantic canyons 50-200 km wide and 5-7 km deep, which split the moon (or maybe binary dwarf planet) into two general regions.  The northern half displays a network of large troughs 3-6 km deep.  A massive depression nearly 10 km deep (seen in profile as the bumpy edge against the black of space in Figure 3) lies near the Mordor Macula (MM), the reddish cap on the north pole.  Meanwhile, the Dorothy Gale crater, located below and to the right of the MM, is 230 km wide and 6 km deep.  The overall crater population distribution implies that the northern half of Charon is about 4 billion years old.

By contrast, the southern half of Charon is smoother and slightly younger, perhaps by a few hundred million years.  A lower density of craters and fields of small hills point to past cryovolcanism.  A feature unique to Charon is the existence of mountains rising 3-4 km above encircling moats. The moats themselves are 1-2 km below the surrounding surface, potentially due to the mountains pressing the surrounding region deeper into the moon.

The Future of New Horizons:

This is only the beginning.  It’s been just 9 months since New Horizons careened past Pluto and Charon.  A data transfer rate of just 1-2 kb/s means we won’t even have all the data for another 7 months!  New Horizons isn’t finished yet.  It plans to study several Kuiper Belt objects from afar and will even perform a close fly-by of one, 2014 MU69, on January 1, 2019, for NASA’s own version of a New Year’s fireworks display.

### Axel Maas - Looking Inside the Standard Model

Some small changes in the schedule
As you may have noticed, I have not written a new entry for some time.

The reasons have been twofold.

One is that being a professor is a little more strenuous than being a postdoc. Though not unexpected, at some point it takes a toll.

The other is that in the past I tried to keep a regular schedule. However, that often required me to think hard about a topic when there was no natural candidate. At other times, I had a number of possible topics, which were then stretched out rather than written when they were topical.

As a consequence, I think it is more appropriate to write entries when something happens that is interesting to write about. This will be at least any time we put out a new paper, so I will still update you on our research. I will also write something whenever somebody new starts in the group, or when we otherwise start a new project. Also, some of my students want to contribute, and I will be very happy to give them the opportunity to do so. Once in a while, I will also write some background entries, so that I can offer some context for the research we are doing.

So stay tuned. It may be in a different rhythm, but I will keep on writing about our (and my) research.

### CERN Bulletin

The CERN Accelerator School
The CERN Accelerator School's Introduction to Accelerator Physics course, which should have taken place in Istanbul, Turkey, later this year, has now been relocated to Budapest, Hungary.  Further details regarding the new hotel and dates will be made available as soon as possible on a new Indico site at the end of May.

## April 26, 2016

### Alexey Petrov - Symmetry factor

30 years of Chernobyl disaster

30 years ago, on 26 April 1986, the biggest nuclear accident in history occurred at the Chernobyl nuclear power station.

The picture above is of my 8th grade class (I am in the front row) on a trip from Leningrad to Kiev. We wanted to make sure that we’d spend May 1st (Labor Day in the Soviet Union) in Kiev! We took that picture in Gomel, which is about 80 miles away from Chernobyl, where our train made a regular stop. We were instructed to bury some pieces of clothing and shoes after coming back to Leningrad due to the excess radioactive dust on them…

### Lubos Motl - string vacua and pheno

When anti-CO2, junk food pseudosciences team up
Among other things, a Czech-Swedish man showed me an article in the April 9th issue of Nude Socialist
Reaping what we sow (pages 18-19)
written by Irakli Loladze (Google Scholar), a professor of junk food science at a college I've never heard of. He told us that he wanted to get lots of money and Barack Obama (whose relationship to science is accurately described by his being a painful footnote in the curved constitutional space) finally gave Loladze some big bucks for the excellent "research" that Loladze already wanted to do in 2002.

What is the result of the research? It's a simple combination of the pseudosciences about the "evil junk food" and about the "evil CO2". It says that CO2 turns our food into junk food. I kid you not. The one-page article in Nude Socialist contains basically nothing beyond the previous sentence written in the bold face.

This "new interdisciplinary pseudoscience" is quite a hybrid because the pseudosciences about the "junk food" and about the "evil CO2" are probably two worst examples of pseudosciences in the contemporary era, two examples featuring the most arrogant pseudo-experts as well as the largest number of ordinary people who have been brainwashed by these junk sciences.

Let's start with these two things separately. The would-be scientific memes about the "junk food" are what a sensible left-wing blogger called the pseudoscience of McDonald's hate. The term "junk food" is meant to denote a food with a lot of sugar, fat, and perhaps salt; and the deficit of all other things – proteins, fibers, minerals, and vitamins. Almost by definition, we're supposed to think that it's "what we get at McDonald's" and other fast food chains; and it's "bad".

Except that none of these statements is defensible.

First, it is not true that the fast food chains significantly differ from other restaurants and homes in the percentage of the nutrients. In fact, you can get lots of salads with assorted vegetables in McDonald's and many people actually eat these things – which trump the "healthy food" image of the things you can eat elsewhere. A majority of the people still eat the traditional things that contain beef or chicken and carbohydrates and fat, e.g. the buns and French fries. But the substance of this food doesn't really differ from what you eat in non-chain restaurants with hamburgers, or any other restaurants, for that matter.

Feynman mentioned the pseudoscience on healthy food as an example of the pseudosciences encouraged by the success of the actual science. Just try to appreciate how much these things have grown since the times when Feynman recorded the program above.

People self-evidently single out the hamburger chains because of their intense anti-corporate bigotry and dishonesty. As the guy whom I just linked to has said, there is no evidence that McDonald's contains special addictive chemicals or something entirely different than other places to eat; and its containing some chemicals is a tautology because all of our bodies and life are composed of numerous chemicals and all organisms are giant chemical factories.

Because Ray Kroc who turned McDonald's into a big company was a Czech American (his father was born 10 km from my home), I could also argue that the attacks on McDonald's are manifestations of nationalist or racist prejudices against my nation. Long before I knew about Ray Kroc, I considered McDonald's to be one of the symbols of the advanced civilization and when I learned about the founder, my pride about my nation's contributions to the civilization went up. But be sure, it doesn't influence my honest attitude. I have exactly the same opinions about the Burger King and Wendy's.

Second, it's not true that "sugars and fats and salt are bad". Sugars and fats are clearly the most important compounds that our bodies look for in the food; they're the essence and the main reasons why we eat at all. Other things are "cherries on a pie" in comparison. We mainly need energy – which may be quantified in calories – and when it comes to food, it is stored almost entirely in carbohydrates and fats.

Animals and humans have always looked for plants that were rich in sugars; and they loved to grow animals that had enough fat. It's no coincidence that civilized nations (except for Jews) eat lots of pork and pigs are fat before they are killed. It's really the point that they have the fat plus some proteins that are found in every meat.

You can always eat lots of things – fruits, vegetables, roots – that contain almost no calories but they have lots of all the other things. But people just don't need so much of the other things (although you need to refill several of them sort of regularly). People and animals primarily need some daily intake of food that has calories in it – this food is the essential part of food.

And what about salt? Even if it were right that McDonald's gives you more salt than other restaurants or food prepared by your wife, there is simply no problem with salt. Junk food evangelists keep on brainwashing most of mankind with fairy-tales about the scientific evidence for increased cardiovascular problems caused by excess salt. The actual truth is that science produces no evidence of this sort – and it has produced some so far weak evidence in the opposite direction. You surely need some salt, and more than the minimum amount of salt seems to be good for you, especially if cardiovascular problems put you at risk.

So far the largest study of the influence of salt on the mortality was done in 2011,
Reduced dietary salt for the prevention of cardiovascular disease: a meta-analysis of randomized controlled trials (Cochrane review, PDF full text),
by Rod Taylor and four collaborators. As the main link shows, the paper has 217 citations so far. See a review, Now Salt Is Safe To Eat, in the Express (2011). They found that the increased salt intake has led to
1. the increased salt in urine which is hardly surprising LOL
2. the increased blood pressure which is not surprising either because a higher blood pressure allows the body to get rid of the salt more quickly, but the increased pressure means nothing because there were:
3. no hints of benefits or a change of all-cause mortality
4. salt restrictions have actually increased all-cause mortality in those with heart failure!
So the restrictions of salt intake have led to no statistically significant change in the main variables – and they shortened the life of those at cardiovascular risk.

According to the actual scientific research, the effects of salt seem too weak to be clearly discernible and if you think that by removing a gram of salt from a food, you are significantly helping your health, you are absolutely fooling yourself – just like if you believe the horoscope on the same page of your favorite newspapers as the "healthy food science". Some hints that extra salt could be good for you exist; but this influence isn't extremely strong, either. It just doesn't matter how much salt you eat if you have at least the minimum amount.

Salt superstitions are just a particular example. We're bombarded by hundreds of similar things and most of us don't have the time – and mostly even expertise – needed to find the actual answers in the scientific literature to the question whether these memes are right or wrong. But you're invited to look at the paper above and search for other papers and decide whether you still believe that the scientific research justifies the idea that it's good for your health to halve the intake of NaCl. There's no evidence for such a claim. It's really easy for the body to get rid of the extra salt and if the body increases the blood pressure to do so quickly, it doesn't signify any immediate problem. The very meme that "a higher blood pressure equals a problem" is another myth, too. There is a correlation between cardiovascular problems and a higher blood pressure (be sure that a low pressure may also reduce the quality of people's lives) but it's simply not true that the blood pressure always ruins the body or shows a problem – the blood pressure may get increased by the body's mechanisms safely and for sensible reasons, too.

On the other hand, spasms may be caused by shortage of salt. If you experience such things, you surely need to add more salt to your food.

Loladze, the junk food "scholar" who wrote the article in Nude Socialist that I started with, claims that CO2 is a "similar kind of junk food" for the plants as carbohydrates and fats are for humans. Right, I totally agree with this analogy. The only problem is that his "junk food" claims are equally idiotic in both cases. CO2 is the "main food" of plants in the exactly analogous way in which sugars and fats are the main food for us (proteins come third). Organisms just can't survive without this main part of the food! And they naturally look for the food that has a high percentage of these compounds. This strategy is in no way suicidal; it has been a strategy to survive in the previous billions of years (in the case of plants) or hundreds of millions of years (in the case of animals). In the developed world, we have enough food in calories and some people eat a lot of them and despite lots of superstitions about some mysterious detailed other reasons, that's the main reason why they may get fat. But that's just like children who are spoiled because their parents are too rich. Being rich in money or fats or sugar is primarily a good thing. Every good thing may turn into a bad thing in some situations but it's still true that people promoting a primarily good thing as a bad thing are liars and scammers.

Loladze did this "research" – and was paid for it – with an obvious purpose: to try to counter the fact that a higher concentration of CO2 is good for plants. Some of them may afford to have fewer pores (the holes through which they take in CO2) because fewer pores are enough to get the same CO2 if its concentration is higher. And that is good for the plants, because water vapor evaporates through the pores. So the "modern, high-CO2-using" plant with fewer pores is a much better water manager. It doesn't lose too much water, which is why it can grow in less humid environments, and why it can grow larger, too. Serious articles showing the beneficial effect of CO2 on plants appear every other day. Yesterday, Nature published another one about the global greening of the Earth, 70% of which was attributed to higher CO2.

Will Happer of Princeton knows much more about the processes involving CO2 in the plants etc. and he sort of wanted me to learn most of these details as well but I am simply not interested in too many details of this science. I feel confident that I know more than enough to be certain that a higher CO2 concentration in the air may be said to be good for plants.

Now, a question is what are the changes of the concentration of various nutrients in plants induced by the changed concentration of CO2. Quite generally, one may expect the increase of the compounds with lots of carbon – because carbon became more accessible. At the same moment, the change probably won't be too big. DNA still needs some – the same amount – of phosphorus and other things, the conditions for the "optimum material from which the leaves and other parts of the plant" should be composed are basically unchanged.

So I think that the actual answer is that the percentages of other nutrients don't change much but it may be expected that some relative concentration of non-carbon nutrients (and perhaps some organic nutrients as well) will go down. But that doesn't mean that people get unhealthy from the food grown in a higher CO2. If the food has too much sugar or fat in it, people will feel that the taste is boring or bland and they will add more vegetables on it, or spices, or eat more fruits and nuts and other things, whatever. They will do so because they feel that they're missing something.

Just like the plants try to manage their nutrients and resources optimally, people are trying to do the same thing. The shortage of CO2 for plants and fats+sugars for humans is clearly a problem causing starvation. The increase of their availability makes starvation much less likely and it basically leads to positive consequences only – and no negative ones. Well, you may say that if you have too much money, it's also bad. But the point is that you may always ignore it. If you know that it's bad for you to spend too much money, you can leave the funds in the safebox. In the same way, people may just leave the sugar in the McDonald's or Wendy's restaurants. Plants may leave the CO2 in the air. In all cases, people and plants generally try to get fats, sugars, CO2, and money because they believe it's good for them and in a vast majority of cases, they're obviously right.

I think that people like Loladze know very well that they're just dishonest corrupt aßholes. They must know that the higher CO2 is making the life of plants better and, as a consequence of our dependence on plants, it's making our lives better as well. They know that according to all objective criteria that people knew before CO2 was politicized, it may be shown that a higher CO2 concentration increases the quality of wine and pretty much every other thing related to food and beverages. But they just construct convoluted pseudoscientific theories and memes suggesting the opposite because they get paid for this deception.

These junk scientists are immoral and if there's no God to do the job of sending a lightning upon their heads, they have to be punished otherwise.

### Symmetrybreaking - Fermilab/SLAC

The hottest job in physics?

Accelerator scientists are in demand at labs and beyond.

While the supply of accelerator physicists in the United States has grown modestly over the last decade, it hasn’t been able to catch up with demand fueled by industry interest in medical particle accelerators and growing collaborations at the national labs.

About 15 PhDs in accelerator physics are granted by US universities each year. That’s up from around 12 per year, a rate that held relatively steady from 1985 to 2005. But accelerator physicists often come to the field without a specialized degree. For people like Yunhai Cai of the US Department of Energy's SLAC National Accelerator Laboratory, this has been a blessing and a curse. A blessing because high demand meant Cai found a ready job after his postdoctoral studies, even though his expertise was in particle theory and he had never worked on accelerators. A curse because now, despite the growth, his field is still in need of more experts.

“Eleven of DOE’s seventeen national laboratories use large particle accelerators as one of their primary scientific instruments,” says Eric Colby, senior technical advisor for the Office of High Energy Physics at DOE. That means plenty of job opportunities for those coming out of special training programs or eager to transfer from another field. “These are major projects that will require hundreds of accelerator physicists and engineers to successfully complete.”

### Transition mettle

Cai, now a senior staff scientist at SLAC and head of the Free-Electron Laser and Beam Physics Department, is one of many scientists recruited from other fields. The transition is intensive, and Cai considers himself fortunate that his academic background taught him the mathematical principles behind his first job.

Notwithstanding, “the most valuable help was the trust of many leaders in the field of accelerators,” Cai says. “They offered me a position knowing I had no experience in the field.”

Training specialists from other fields is a common and successful practice, says Lia Merminga, associate lab director for accelerators at SLAC. A planned upgrade to SLAC's Linac Coherent Light Source is creating a high demand for specialized accelerator experts, such as cryogenics engineers and superconducting radio frequency (SRF) physicists and engineers.

“Instead of hiring trained cryogenics engineers who are in short supply, we hire mechanical engineers and train them in cryogenics either by providing for hands-on experience or with coursework,” Merminga says.

### Tommaso Dorigo - Scientificblogging

New Physics In The Angular Distribution Of B Decays ?
After decades of theoretical studies and experimental measurements, forty years ago particle physicists managed to construct a very successful theory, one which describes with great accuracy the dynamics of subnuclear particles. This theory is now universally known as the Standard Model of particle physics. Since then, physicists have invested enormous efforts in the attempt of breaking it down.

It is not a contradiction: our understanding of the physical world progresses as we construct a progressively more refined mathematical representation of reality. Often this is done by adding more detail to an existing framework, but in some cases a complete overhaul is needed. And we appear to be in that situation with the Standard Model.

## April 14, 2016

### ZapperZ - Physics and Physicists

Debunking Three Baseball Myths
A nice article on the debunking of 3 baseball myths using physics. I'm not that aware of the first two, but that last one, "Swing down on the ball to hit farther" has always been something I thought was unrealistic. Doing that makes it more difficult to get a perfect contact, because the timing has to be just right.

This is no different than a serve in tennis, where hitting the ball at its highest point gives you a better chance of finding the racket's sweet spot.

Zz.

### Symmetrybreaking - Fermilab/SLAC

Five fascinating facts about DUNE

One: The Deep Underground Neutrino Experiment will look for more than just neutrinos.

The Deep Underground Neutrino Experiment is a project of superlatives. It will use the world’s most intense neutrino beam and largest neutrino detector to study the weirdest and most abundant matter particles in the universe. More than 800 scientists from close to 30 countries are working on the project to crack some long-unanswered questions in physics. It’s part of a worldwide push to discover the missing pieces that could explain how the known particles and forces created the universe we live in. Here’s a two-minute animation that shows how the project will work:

Video of AYtKcZMJ_4c

Here are a few more surprising facts about DUNE you might not know:

#### 1. Engineers will use a mile-long fishing line to aim the neutrino beam from Illinois to South Dakota.

DUNE will aim a neutrino beam 800 miles (1300 kilometers) straight through the Earth from Fermilab to the Sanford Underground Research Facility—no tunnel necessary. Although the beam spreads as it travels, like a flashlight beam, it’s important to aim the center of the beam as precisely as possible at DUNE so that the maximum number of neutrinos can create a signal. Since neutrinos are electrically neutral, they can’t be steered by magnets after they’ve been created. Hence everything must be properly aligned—to within a fraction of a millimeter—when the neutrinos are made, emerging from the collisions of protons with carbon atoms.
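The "no tunnel necessary" point above follows from simple chord geometry: a straight 1300 km line between two points on the Earth's surface only dips a few degrees below the horizontal. Here is a minimal sketch of that estimate, assuming a mean Earth radius of 6371 km (the exact survey figures used by the project are not given in the text):

```python
import math

R_EARTH_KM = 6371.0    # mean Earth radius (assumed round value)
BASELINE_KM = 1300.0   # straight-line Fermilab -> Sanford Lab distance from the text

# Central angle subtended by the chord: c = 2 * R * sin(alpha / 2)
alpha = 2.0 * math.asin(BASELINE_KM / (2.0 * R_EARTH_KM))

# The beam leaves the surface pointing downward by half the central angle
dip_deg = math.degrees(alpha / 2.0)

# Deepest point of the chord below the surface: R * (1 - cos(alpha / 2))
depth_km = R_EARTH_KM * (1.0 - math.cos(alpha / 2.0))

print(f"dip angle ~ {dip_deg:.1f} deg, max depth ~ {depth_km:.0f} km")
```

The dip comes out to roughly 6 degrees and the chord never goes much more than about 30 km deep, which is why the neutrinos can simply pass through rock with no excavation along the way.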

Properly aligning the neutrino beam means using the Global Positioning System (GPS) to relate Sanford Lab’s underground map to the coordinates of Fermilab’s geographic system, making sure everything speaks the same location language. Part of the process requires mapping points underground to points on the Earth’s surface. To do this, the alignment crew will drop what might be the longest plumb line in the world down the 4850-foot (1.5-kilometer) mineshaft. The current plan is to use very strong fishing line—a mile of it—attached to a heavy weight that is immersed in a barrel of oil to dampen the movement of the pendulum. A laser tracker will record the precise location of the line.
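The barrel of oil matters because a pendulum this long swings very slowly, so undamped oscillations would take a long time to settle. A quick estimate with the small-angle formula T = 2π√(L/g), taking the 4850-foot shaft depth (~1480 m, an assumed figure derived from the text) as the pendulum length:

```python
import math

G = 9.81               # gravitational acceleration, m/s^2
LINE_LENGTH_M = 1480.0 # ~4850 ft shaft depth from the text, converted to meters

# Small-angle period of a simple pendulum: T = 2 * pi * sqrt(L / g)
period_s = 2.0 * math.pi * math.sqrt(LINE_LENGTH_M / G)

print(f"swing period ~ {period_s:.0f} s")
```

A period of over a minute per swing means that, without viscous damping from the oil, the surveyors would wait a very long time for the weight to come to rest.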

#### 2. Mining crews will move enough rock for two Empire State Buildings up a 14-by-20-foot shaft.

To create caverns that are large enough to host the DUNE detectors, miners need to blast and remove more than 800,000 tons of rock from a mile underground. That’s the equivalent of 8 Nimitz-class aircraft carriers, a comparison often made by Chris Mossey, project director for the Long-Baseline Neutrino Facility (the name of the facility that will support DUNE). Mossey knows a thing or two about aircraft carriers: He happens to be a retired commander of the US Navy's Naval Facilities Engineering Command and oversaw the engineering, construction and maintenance services of US Navy facilities. But not everyone is that familiar with aircraft carriers, so alternatively you can impress your friends by saying that crews will move the weight equivalent of 2.2 Empire State Buildings, 80 Eiffel Towers, 4700 blue whales or 18 billion(ish) Twinkies.
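The comparisons above are easy to cross-check. A minimal sketch, using approximate reference weights (all the per-item figures below are assumed round numbers, not official values, and the 800,000 tons is taken as US short tons):

```python
# Rough cross-check of the rock-mass comparisons in the text.
ROCK_TONS = 800_000            # US short tons, as quoted
SHORT_TON_G = 907_185          # grams per US short ton

nimitz_tons = 100_000          # Nimitz-class carrier, full load (approx.)
empire_state_tons = 365_000    # Empire State Building (approx.)
eiffel_tons = 10_000           # Eiffel Tower (approx.)
blue_whale_tons = 170          # large adult blue whale (approx.)
twinkie_g = 38.5               # one Twinkie (approx.)

carriers = ROCK_TONS / nimitz_tons
buildings = ROCK_TONS / empire_state_tons
towers = ROCK_TONS / eiffel_tons
whales = ROCK_TONS / blue_whale_tons
twinkies = ROCK_TONS * SHORT_TON_G / twinkie_g

print(f"{carriers:.1f} carriers, {buildings:.1f} Empire State Buildings,")
print(f"{towers:.0f} Eiffel Towers, {whales:.0f} blue whales,")
print(f"{twinkies:.1e} Twinkies")
```

With these assumed weights the ratios land close to the quoted 8 carriers, 2.2 buildings, 80 towers, 4700 whales, and a Twinkie count in the high-teens of billions.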

#### 3. The interior of the DUNE detectors will have about the same average temperature as Saturn’s atmosphere.

Argon, an element that makes up almost one percent of the air we breathe, will be the material of choice to fill the DUNE detectors, albeit in its liquid form. As trillions of neutrinos pass through the transparent argon, a handful will interact with an argon nucleus and produce other particles. Those, in turn, will create light and knock loose electrons. Both can be recorded and turned into data that show exactly when and how a neutrino interacted. To keep the argon liquid, the cryogenics system will have to maintain a temperature of around minus 300 degrees Fahrenheit, or minus 184 degrees Celsius. That’s slightly colder than the average temperature of the icy ammonia clouds on the upper layer of Saturn’s atmosphere.
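The Fahrenheit and Celsius figures quoted above are consistent, as a one-line conversion shows (the function names here are just illustrative helpers):

```python
def f_to_c(f):
    """Fahrenheit -> Celsius."""
    return (f - 32.0) * 5.0 / 9.0

def c_to_k(c):
    """Celsius -> Kelvin."""
    return c + 273.15

operating_f = -300.0
operating_c = f_to_c(operating_f)
print(f"{operating_f} F = {operating_c:.1f} C = {c_to_k(operating_c):.1f} K")
```

That works out to about -184 C, or roughly 89 K, close to argon's atmospheric boiling point of about 87 K, which is why the cryogenics system has so little margin to play with.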

#### 4. The design of DUNE’s detector vessels is inspired by huge transport ships for gas.

DUNE’s set of four detectors will be the largest cryogenic instrument ever installed deep underground. You know who else needs to store and cool large volumes of liquid? The gas industry, which liquefies natural gas to transport it around the world using huge ships with powerful refrigerators. DUNE’s massive, insulated vessels will feature a membrane system that is similar to that used by liquid natural gas transport ships. A stainless steel frame sits inside an insulating layer, sandwiched between aluminum sheets. Multiple layers provide the strength to keep the liquid argon right where it should be—interacting with neutrinos.

#### 5. DUNE will look for more than just neutrinos.

Then why did they name the experiment after the neutrino? Well, most of the experiment is designed to study neutrinos—how they change as they move through space, how they arrive from exploding stars, how neutrinos differ from their antimatter partners, and how they interact with other matter particles. At the same time, the large size of the DUNE detectors and their shielded location a mile underground also make them the perfect tool to continue the search for signs of proton decay. Some theories predict that protons (one of the building blocks that make up the atoms in your body) have a very long but finite lifetime. Eventually they will decay into other particles, creating a signal that DUNE hopes to discover. Fortunately for our atoms, the proton’s estimated lifespan is much longer than the time our universe has existed so far. Because proton decay is expected to be such a rare event, scientists need to monitor lots of protons to catch one in the act—and seventy thousand tons of argon atoms means around 10^34 protons (that’s a 1 with 34 zeroes after it), which isn’t too shabby.
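The 10^34-proton figure is a straightforward Avogadro-number estimate. A sketch under stated assumptions (metric tons, and counting the 18 protons bound in each argon nucleus):

```python
AVOGADRO = 6.022e23        # atoms per mole
AR_MOLAR_MASS_G = 39.95    # grams per mole of argon
PROTONS_PER_AR = 18        # atomic number of argon

argon_mass_tons = 70_000           # mass quoted in the text
argon_mass_g = argon_mass_tons * 1e6  # assuming metric tons (1 t = 1e6 g)

atoms = argon_mass_g / AR_MOLAR_MASS_G * AVOGADRO
protons = atoms * PROTONS_PER_AR
print(f"~{protons:.1e} protons")
```

This gives a few times 10^33 argon atoms and therefore roughly 2 x 10^34 protons, matching the order of magnitude in the text.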

## April 13, 2016

### Clifford V. Johnson - Asymptotia

Great Big Exchange

Here's a fun Great Big Story (CNN) video piece about the Science and Entertainment Exchange (and a bit about my work on Agent Carter). Click here for the piece.

(Yeah, the headline. Seems you can't have a story about science connecting with the rest of the culture without the word "nerd" being used somewhere...)

The post Great Big Exchange appeared first on Asymptotia.

## April 12, 2016

### Symmetrybreaking - Fermilab/SLAC

Art draws out the beauty of physics

Labs around the world open their doors to aesthetic creation.

When it comes to quantum mechanics, it’s easier to show than tell.

That’s why artist residencies at particle physics labs play an important part in conveying their stories, according to CERN theorist Luis Alvarez-Gaume.

He recently spent some time demonstrating physics concepts to Semiconductor, a duo of visual artists from England known for exploring matter through the tools and processes of science. They’ve made multiple short films and museum pieces and shown their work at festivals all over the world. In July they were awarded a CERN residency as part of the Collide@CERN Ars Electronica Award.

“I tried to show them how we develop an intuition for quantum mechanics by applying the principles and getting used to the way it functions,” Alvarez-Gaume says. “Because honestly, I cannot explain quantum mechanics even to a scientist.”

The physicist laughed when he made that statement, but the artists, Ruth Jarman and Joe Gerhardt, are comforted by the sentiment. They soaked up all they could during their two-month stay in late 2015 and are still processing interviews and materials they’ll use to develop a major work based on their experiences.

“Particle physics is the most challenging subject we’ve ever worked with because it’s so difficult to create a tangible idea about it, and that’s kind of what we are all about,” Jarman says, adding that they are fully up for the challenge.

Besides speaking with theorists and experimentalists, the artists explored interesting spaces at CERN and filmed both the construction of a new generation of magnets and a workshop where scientists were developing prototypes of instruments.

“We also dug around a lot in the archives,” Gerhardt says. “It’s such an amazing place and we only really touched the surface.”

But they have a lot of faith in the process based on past experiences working in scientific settings.

A 2007 work called “Magnetic Movie” was based on a similar stay at NASA’s Space Sciences Laboratories at UC Berkeley, where the artists captured the "secret lives of invisible magnetic fields." In the film, brightly colored streams and blobs emanate from various rooms at the lab to the sounds of VLF (very low frequency) audio recordings and scientists talking.

“Are we observing a series of scientific experiments, the universe in flux or a documentary of a fictional world?” the artists ask on their website.

The piece won multiple awards at international film festivals. But, just as importantly to the artists, the scientists were excited about the way it celebrated their work, “even though it was removed from their context,” Jarman says.

### Picturing the invisible

At the Department of Energy’s Fermilab, another group of artists has taken on the challenge of “visualizing the invisible.” Current artist-in-residence Ellen Sandor and her collaborative group (art)n have been brushing up on neutrinos and the machines that study them.

Their goal is to put their own cutting-edge technologies to use in scientifically accurate and “transcendent” artworks that tell the story of Fermilab’s past, present and future, the artist says.

Sandor is known as a pioneer of virtual photography. In the 1980s she invented a new medium called PHSColograms, 3-D images that combine photography, holography, sculpture and computer graphics to create what she calls “immersive” experiences.

The group will use PHSColograms, sculpture, 3D printing, virtual reality and projection mapping in a body of work that will eventually be on display at the lab.

“We want to tell the story with scientific visualization and also with abstraction,” Sandor says. “But all of the images will be exciting and artistic.”

The value of such rich digital visuals lies in what Sandor calls their “wow factor,” according to Sam Zeller, neutrino physicist and science advisor for the artist-in-residence program.

“We scientists don’t always know how to hit that mark, but she does,” Zeller says. “These three-dimensional immersive images come closer to the video game environment. If we want to capture the imagination of school-age children, we can’t just stand in front of a poster and talk anymore.”

Zeller, co-spokesperson of the MicroBooNE experiment, and her team are collaborating with the artists on virtual reality visualizations of a new detector technology called a liquid-argon time projection chamber. The detector components, as well as the reactions the detector records, are sealed inside a stainless steel vessel out of view.

“Because she strives for scientific accuracy, we can use Sandor’s art to help us explain how our detector works and demonstrate it to the public,” Zeller says.

### Growing collaborations

According to Monica Bello, head of Arts@CERN, programs that combine art and science are a growing trend around the globe.

Organizations such as the Arts Catalyst Centre for Art, Science & Technology in London commission science-related art worldwide, and galleries like Kapelica Gallery in Ljubljana, Slovenia, present contemporary art focused largely on science and technology.

US nonprofit Leonardo, The International Society for the Arts, Sciences and Technology, supports cross-disciplinary research and international artist and scientist residencies and events.

“However, programs of this kind founded within scientific institutions and with full support are still rare,” Bello says. Yet, many labs, including TRIUMF in Canada and INFN in Italy, host art exhibits, events or occasional artist residencies.

“While we don’t bring on full-time artists continually, TRIUMF offers a suite of initiatives that explore the intersection of art and science,” says Melissa Baluk, communications coordinator at TRIUMF.  “A great example is our ongoing partnership with artist Ingrid Koenig of Emily Carr University of Art + Design here in Vancouver. Koenig tailors some of her fine art classes to these intersections, for example, courses called ‘Black Holes and Other Transformations of Energy’ and ‘Quantum Entanglements: Manifestations in Practice.’”

The collaboration invites physicists to Koenig’s studio and draws her students to the lab. “It’s a wonderful partnership that allows all involved to discover new ways of thinking about the interconnections between art, science, and culture on a scale that works for us,” Baluk says.

Fermilab’s robust commitment to the arts reaches back to its founding director, physicist and artist Robert Wilson. His sculptures are still exhibited around the lab, says Georgia Schwender, curator of the Fermilab Art Gallery.

Schwender finds that art-science programs attract the community through the unconventional pairing of subjects; events such as the international Art@CMS exhibit last year at Fermilab are very well received.

“It’s not just a physics or an art class,” she says. “People who might be a little afraid of the art or a little afraid of the science are less intimidated when you bring them together.”

Fermilab recently complemented its tradition of cultural engagement with a new artist residency, which began in 2014 with mixed media artist Lindsay Olson.

### Art-physics interactions

Science as a subject for art has grown since Sandor’s first PHSCologram of the AIDS virus bloomed into a career of art-science collaborations.

“In the beginning it was almost practical. People were dying, and we wanted to bring everything to the surface and leave nothing hidden,” the artist says. “By the 1990s I realized that scientists were the rock stars of the future, and that’s even truer today.”

Sandor relishes being part of the scientific process. Drawing out the hidden beauty of particle physics to create something scientifically accurate and artistically stunning has been one of the most satisfying projects to date, she says.

Like Sandor, Semiconductor works with authentic scientific data, but they also emphasize how the language of science influences our experience of nature.

“The data represents something we can’t actually see, feel or touch,” Jarman says. “We reference the tools and processes of science and encourage the noise and the artifact to constantly remind people that it is man observing nature, but not actually how it is.”

Both Zeller and Alvarez-Gaume have personal interests in art and find value in the similarities and differences between the fields.

“Our objectives are very different, but our paths are similar,” Alvarez-Gaume says. “We experience inspiration, passion and frustration. We work through trial and error, failing most of the time.”

Like art, science is abstract but enjoyable, he adds. “Theoretical physicists will tell you there is beauty in science—a sense of awe. Art helps bring this to the surface. People are not interested in the details: They want to get a vision, a picture about why we think particle physics is interesting or exciting.”

Zeller finds her own inspiration in art-science collaborations.

“One of the things that surprised me the most in working with artists was the fact that they could articulate much better than I could what it is that my research achieves for humankind, and this reinvigorated me with excitement about my work,” she says.

Yet, one key difference between art and science speaks for the need to nurture their growing intersections, Alvarez-Gaume says.

“Science is inevitable; art is fragile. Without Einstein it may have taken many, many years, and many people working on it, but we still would have come up with his theories. If Beethoven died at age 5, we would not have the sonatas; art is not repeatable.”

And a world without art is not a world he would like to imagine.

## April 11, 2016

### John Baez - Azimuth

Diamonds and Triamonds

The structure of a diamond crystal is fascinating. But there’s an equally fascinating form of carbon, called the triamond, that’s theoretically possible but never yet seen in nature. Here it is:

In the triamond, each carbon atom is bonded to three others at 120° angles, with one double bond and two single bonds. Its bonds lie in a plane, so we get a plane for each atom.

But here’s the tricky part: for any two neighboring atoms, these planes are different. In fact, if we draw the bond planes for all the atoms in the triamond, they come in four kinds, parallel to the faces of a regular tetrahedron!

If we discount the difference between single and double bonds, the triamond is highly symmetrical. There’s a symmetry carrying any atom and any of its bonds to any other atom and any of its bonds. However, the triamond has an inherent handedness, or chirality. It comes in two mirror-image forms.

A rather surprising thing about the triamond is that the smallest rings of atoms are 10-sided. Each atom lies in 15 of these 10-sided rings.

Some chemists have argued that the triamond should be ‘metastable’ at room temperature and pressure: that is, it should last for a while but eventually turn to graphite. Diamonds are also considered metastable, though I’ve never seen anyone pull an old diamond ring from their jewelry cabinet and discover to their shock that it’s turned to graphite. The big difference is that diamonds are formed naturally under high pressure—while triamonds, it seems, are not.

Nonetheless, the mathematics behind the triamond does find its way into nature. A while back I told you about a minimal surface called the ‘gyroid’, which is found in many places:

It turns out that the pattern of a gyroid is closely connected to the triamond! So, if you’re looking for a triamond-like pattern in nature, certain butterfly wings are your best bet:

• Matthias Weber, The gyroids: algorithmic geometry III, The Inner Frame, 23 October 2015.

Instead of trying to explain it here, I’ll refer you to the wonderful pictures at Weber’s blog.

### Building the triamond

I want to tell you a way to build the triamond. I saw it here:

• Toshikazu Sunada, Crystals that nature might miss creating, Notices of the American Mathematical Society 55 (2008), 208–215.

This is the paper that got people excited about the triamond, though it was discovered much earlier by the crystallographer Fritz Laves back in 1932, and Coxeter named it the Laves graph.

To build the triamond, we can start with this graph:

It’s called $\mathrm{K}_4,$ since it’s the complete graph on four vertices, meaning there’s one edge between each pair of vertices. The vertices correspond to four different kinds of atoms in the triamond: let’s call them red, green, yellow and blue. The edges of this graph have arrows on them, labelled with certain vectors

$e_1, e_2, e_3, f_1, f_2, f_3 \in \mathbb{R}^3$

Let’s not worry yet about what these vectors are. What really matters is this: to move from any atom in the triamond to any of its neighbors, you move along the vector labeling the edge between them… or its negative, if you’re moving against the arrow.

For example, suppose you’re at any red atom. It has 3 nearest neighbors, which are blue, green and yellow. To move to the blue neighbor you add $f_1$ to your position. To move to the green one you subtract $e_2,$ since you’re moving against the arrow on the edge connecting red and green. Similarly, to go to the yellow neighbor you subtract the vector $f_3$ from your position.

Thus, any path along the bonds of the triamond determines a path in the graph $\mathrm{K}_4.$

Conversely, if you pick an atom of some color in the triamond, any path in $\mathrm{K}_4$ starting from the vertex of that color determines a path in the triamond! However, going around a loop in $\mathrm{K}_4$ may not get you back to the atom you started with in the triamond.

Mathematicians summarize these facts by saying the triamond is a ‘covering space’ of the graph $\mathrm{K}_4.$

Now let’s see if you can figure out those vectors.

Puzzle 1. Find vectors $e_1, e_2, e_3, f_1, f_2, f_3 \in \mathbb{R}^3$ such that:

A) All these vectors have the same length.

B) The three vectors coming out of any vertex lie in a plane at 120° angles to each other:

For example, $f_1, -e_2$ and $-f_3$ lie in a plane at 120° angles to each other. We put in two minus signs because two arrows are pointing into the red vertex.

C) The four planes we get this way, one for each vertex, are parallel to the faces of a regular tetrahedron.

If you want, you can even add another constraint:

D) All the components of the vectors $e_1, e_2, e_3, f_1, f_2, f_3$ are integers.

### Diamonds and hyperdiamonds

That’s the triamond. Compare the diamond:

Here each atom of carbon is connected to four others. This pattern is found not just in carbon but also other elements in the same column of the periodic table: silicon, germanium, and tin. They all like to hook up with four neighbors.

The pattern of atoms in a diamond is called the diamond cubic. It’s elegant but a bit tricky. Look at it carefully!

To build it, we start by putting an atom at each corner of a cube. Then we put an atom in the middle of each face of the cube. If we stopped there, we would have a face-centered cubic. But there are also four more carbons inside the cube—one at the center of each tetrahedron we’ve created.

If you look really carefully, you can see that the full pattern consists of two interpenetrating face-centered cubic lattices, one offset relative to the other along the cube’s main diagonal.

The face-centered cubic is the 3-dimensional version of a pattern that exists in any dimension: the $\mathrm{D}_n$ lattice. To build this, take an $n$-dimensional checkerboard and alternately color the hypercubes red and black. Then, put a point in the center of each black hypercube!

You can also get the $\mathrm{D}_n$ lattice by taking all $n$-tuples of integers that sum to an even integer. Requiring that they sum to something even is a way to pick out the black hypercubes.

The diamond is also an example of a pattern that exists in any dimension! I’ll call this the hyperdiamond, but mathematicians call it $\mathrm{D}_n^+,$ because it’s the union of two copies of the $\mathrm{D}_n$ lattice. To build it, first take all $n$-tuples of integers that sum to an even integer. Then take all those points shifted by the vector (1/2, …, 1/2).

In any dimension, the volume of the unit cell of the hyperdiamond is 1, so mathematicians say it’s unimodular. But only in even dimensions is the sum or difference of any two points in the hyperdiamond again a point in the hyperdiamond. Mathematicians call a discrete set of points with this property a lattice.

If even dimensions are better than odd ones, how about dimensions that are multiples of 4? Then the hyperdiamond is better still: it’s an integral lattice, meaning that the dot product of any two vectors in the lattice is again an integer.

And in dimensions that are multiples of 8, the hyperdiamond is even better. It’s even, meaning that the dot product of any vector with itself is even.
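These claims are easy to spot-check numerically. The sketch below (my own illustration, using exact rational arithmetic) encodes membership in $\mathrm{D}_n$ and the hyperdiamond, and tests a few examples: the sum of two hyperdiamond points can escape the hyperdiamond in 3 dimensions but not for the analogous pair in 4, and the shift vector (1/2, …, 1/2) has an integer self-dot-product in dimension 4 and an even one in dimension 8. These are examples, not proofs.

```python
from fractions import Fraction as F

def in_Dn(v):
    # D_n: integer coordinates summing to an even integer.
    return all(x.denominator == 1 for x in v) and sum(v) % 2 == 0

def in_Dn_plus(v):
    # D_n^+: the union of D_n and D_n shifted by (1/2, ..., 1/2).
    return in_Dn(v) or in_Dn(tuple(x - F(1, 2) for x in v))

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def half(n):
    # The shift vector (1/2, ..., 1/2) in n dimensions.
    return (F(1, 2),) * n

def double(v):
    return tuple(2 * x for x in v)

# In 3 dimensions, a hyperdiamond point plus itself leaves the hyperdiamond...
print(in_Dn_plus(half(3)), in_Dn_plus(double(half(3))))   # True False
# ...but in 4 dimensions the doubled point is back in D_4, hence in D_4^+.
print(in_Dn_plus(double(half(4))))                        # True
# Self-dot-products: an integer in dimension 4, an even integer in dimension 8.
print(dot(half(4), half(4)), dot(half(8), half(8)))       # 1 2
```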

In fact, even unimodular lattices are only possible in Euclidean space when the dimension is a multiple of 8. In 8 dimensions, the only even unimodular lattice is the 8-dimensional hyperdiamond, which is usually called the $\mathrm{E}_8$ lattice. The $\mathrm{E}_8$ lattice is one of my favorite entities, and I’ve written a lot about it in this series.

To me, the glittering beauty of diamonds is just a tiny hint of the overwhelming beauty of $\mathrm{E}_8.$

But let’s go back down to 3 dimensions. I’d like to describe the diamond rather explicitly, so we can see how a slight change produces the triamond.

It will be less stressful if we double the size of our diamond. So, let’s start with a face-centered cubic consisting of points whose coordinates are even integers summing to a multiple of 4. That consists of these points:

(0,0,0)   (2,2,0)   (2,0,2)   (0,2,2)

and all points obtained from these by adding multiples of 4 to any of the coordinates. To get the diamond, we take all these together with another face-centered cubic that’s been shifted by (1,1,1). That consists of these points:

(1,1,1)   (3,3,1)   (3,1,3)   (1,3,3)

and all points obtained by adding multiples of 4 to any of the coordinates.

The triamond is similar! Now we start with these points

(0,0,0)   (1,2,3)   (2,3,1)   (3,1,2)

and all the points obtained from these by adding multiples of 4 to any of the coordinates. To get the triamond, we take all these together with another copy of these points that’s been shifted by (2,2,2). That other copy consists of these points:

(2,2,2)   (3,0,1)   (0,1,3)   (1,3,0)

and all points obtained by adding multiples of 4 to any of the coordinates.
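Taking the coordinates above at face value, a short script can confirm the coordination numbers: in the diamond each atom has 4 nearest neighbors, while in the triamond each has 3, with the triamond’s bonds meeting at 120°. This is only a spot check at the origin (with my own helper functions), but lattice symmetry carries it to every other atom.

```python
import itertools

def lattice(base):
    # Replicate the base points by translations of 4 along each coordinate.
    return {tuple(p[i] + 4 * t[i] for i in range(3))
            for p in base
            for t in itertools.product(range(-1, 2), repeat=3)}

def nearest_neighbors(points, p):
    d2 = lambda q: sum((a - b) ** 2 for a, b in zip(p, q))
    dmin = min(d2(q) for q in points if q != p)
    return [q for q in points if q != p and d2(q) == dmin]

diamond = lattice([(0,0,0), (2,2,0), (2,0,2), (0,2,2),
                   (1,1,1), (3,3,1), (3,1,3), (1,3,3)])
triamond = lattice([(0,0,0), (1,2,3), (2,3,1), (3,1,2),
                    (2,2,2), (3,0,1), (0,1,3), (1,3,0)])

print(len(nearest_neighbors(diamond, (0,0,0))))    # 4
bonds = nearest_neighbors(triamond, (0,0,0))
print(len(bonds))                                  # 3
# Each pair of triamond bonds meets at 120 degrees: with |u|^2 = |v|^2 = 2,
# u.v = |u||v|cos(120°) = -1 for every pair.
dots = [sum(a * b for a, b in zip(u, v))
        for u, v in itertools.combinations(bonds, 2)]
print(dots)                                        # [-1, -1, -1]
```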

Unlike the diamond, the triamond has an inherent handedness, or chirality. You’ll note how we used the point (1,2,3) and took cyclic permutations of its coordinates to get more points. If we’d started with (3,2,1) we would have gotten the other, mirror-image version of the triamond.

### Covering spaces

I mentioned that the triamond is a ‘covering space’ of the graph $\mathrm{K}_4.$ More precisely, there’s a graph $T$ whose vertices are the atoms of the triamond, and whose edges are the bonds of the triamond. There’s a map of graphs

$p: T \to \mathrm{K}_4$

This automatically means that every path in $T$ is mapped to a path in $\mathrm{K}_4.$ But what makes $T$ a covering space of $\mathrm{K}_4$ is that any path in $T$ comes from a path in $\mathrm{K}_4,$ which is unique after we choose its starting point.

If you’re a high-powered mathematician you might wonder if $T$ is the universal covering space of $\mathrm{K}_4.$ It’s not, but it’s the universal abelian covering space.

What does this mean? Any path in $\mathrm{K}_4$ gives a sequence of vectors $e_1, e_2, e_3, f_1, f_2, f_3$ and their negatives. If we pick a starting point in the triamond, this sequence describes a unique path in the triamond. When does this path get you back where you started? The answer, I believe, is this: if and only if you can take your sequence, rewrite it using the commutative law, and cancel like terms to get zero. This is related to how adding vectors in $\mathbb{R}^3$ is a commutative operation.

For example, there’s a loop in $\mathrm{K}_4$ that goes “red, blue, green, red”. This gives the sequence of vectors

$f_1, -e_3, e_2$

We can turn this into an expression

$f_1 - e_3 + e_2$

However, we can’t simplify this to zero using just the commutative law and cancelling like terms. So, if we start at some red atom in the triamond and take the unique path that goes “red, blue, green, red”, we do not get back where we started!

Note that in this simplification process, we’re not allowed to use what the vectors “really are”. It’s a purely formal manipulation.
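This “cancel using only the commutative law” test is exactly abelianization: the lifted path closes up if and only if every generator’s signed count is zero. A minimal sketch of the bookkeeping (the generator names here are just labels):

```python
from collections import Counter

def closes_in_triamond(steps):
    """steps: signed generators read off a path in K4,
    e.g. [('f1', +1), ('e3', -1), ('e2', +1)].
    Under the commutative law alone, the lifted path in the triamond
    returns to its start iff every generator's signed count is zero."""
    totals = Counter()
    for gen, sign in steps:
        totals[gen] += sign
    return all(count == 0 for count in totals.values())

# The "red, blue, green, red" loop from the text, f1 - e3 + e2, does not cancel:
print(closes_in_triamond([('f1', +1), ('e3', -1), ('e2', +1)]))  # False
# Retracing an edge trivially does:
print(closes_in_triamond([('f1', +1), ('f1', -1)]))              # True
```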

Puzzle 2. Describe a loop of length 10 in the triamond using this method. Check that you can simplify the corresponding expression to zero using the rules I described.

A similar story works for the diamond, but starting with a different graph:

The graph formed by a diamond’s atoms and the edges between them is the universal abelian cover of this little graph! This graph has 2 vertices because there are 2 kinds of atom in the diamond. It has 4 edges because each atom has 4 nearest neighbors.

Puzzle 3. What vectors should we use to label the edges of this graph, so that the vectors coming out of any vertex describe how to move from that kind of atom in the diamond to its 4 nearest neighbors?

There’s also a similar story for graphene, which is a hexagonal array of carbon atoms in a plane:

Puzzle 4. What graph with edges labelled by vectors in $\mathbb{R}^2$ should we use to describe graphene?

I don’t know much about how this universal abelian cover trick generalizes to higher dimensions, though it’s easy to handle the case of a cubical lattice in any dimension.

Puzzle 5. I described higher-dimensional analogues of diamonds: are there higher-dimensional triamonds?

### References

The Wikipedia article is good:

• Wikipedia, Laves graph.

They say this graph has many names: the K4 crystal, the (10,3)-a network, the srs net, the diamond twin, and of course the triamond. The name triamond is not very logical: while each carbon has 3 neighbors in the triamond, each carbon has not 2 but 4 neighbors in the diamond. So, perhaps the diamond should be called the ‘quadriamond’. In fact, the word ‘diamond’ has nothing to do with the prefix ‘di-’ meaning ‘two’. It’s more closely related to the word ‘adamant’. Still, I like the word ‘triamond’.

This paper describes various attempts to find the Laves graph in chemistry:

• Stephen T. Hyde, Michael O’Keeffe, and Davide M. Proserpio, A short history of an elusive yet ubiquitous structure in chemistry, materials, and mathematics, Angew. Chem. Int. Ed. 47 (2008), 7996–8000.

This paper does some calculations arguing that the triamond is a metastable form of carbon:

• Masahiro Itoh et al, New metallic carbon crystal, Phys. Rev. Lett. 102 (2009), 055703.

Abstract. Recently, mathematical analysis clarified that sp2 hybridized carbon should have a three-dimensional crystal structure ($\mathrm{K}_4$) which can be regarded as a twin of the sp3 diamond crystal. In this study, various physical properties of the $\mathrm{K}_4$ carbon crystal, especially for the electronic properties, were evaluated by first principles calculations. Although the $\mathrm{K}_4$ crystal is in a metastable state, a possible pressure induced structural phase transition from graphite to $\mathrm{K}_4$ was suggested. Twisted π states across the Fermi level result in metallic properties in a new carbon crystal.

The picture of the $\mathrm{K}_4$ crystal was placed on Wikicommons by someone named ‘Workbit’, under a Creative Commons Attribution-Share Alike 4.0 International license. The picture of the tetrahedron was made using Robert Webb’s Stella software and placed on Wikicommons. The pictures of graphs come from Sunada’s paper, though I modified the picture of $\mathrm{K}_4.$ The moving image of the diamond cubic was created by H.K.D.H. Bhadeshia and put into the public domain on Wikicommons. The picture of graphene was drawn by Dr. Thomas Szkopek and put into the public domain on Wikicommons.

## April 08, 2016

### Jester - Resonaances

April Fools' 16: Was LIGO a hack?

This post is an April Fools' joke. LIGO's gravitational waves are for real. At least I hope so ;)

We have recently had a few scientific embarrassments, where a big discovery announced with great fanfare was subsequently overturned by new evidence. We still remember OPERA's faster-than-light neutrinos, which turned out to be a loose cable, or BICEP's gravitational waves from inflation, which turned out to be galactic dust emission... It seems that another such embarrassment is coming our way: LIGO's recent discovery of gravitational waves emitted in a black hole merger may share a similar fate. There are reasons to believe that the experiment was hacked, and the signal was injected by a prankster.

From the beginning, one reason to be skeptical about LIGO's discovery was that the signal seemed too beautiful to be true. Indeed, the experimental curve looked as if it were taken out of a textbook on general relativity, with a clearly visible chirp signal from the inspiral phase, followed by a ringdown signal as the merged black hole relaxes to the Kerr state. The reason may be that it *is* taken out of a textbook. At least, this is what recent developments strongly suggest.

On EvilZone, a well-known hackers' forum, a hacker using the nickname Madhatter boasted that it was possible to tamper with scientific instruments, including the LHC, the Fermi satellite, and the LIGO interferometer. When challenged, he or she uploaded a piece of code that allows one to access LIGO computers. Apparently, the hacker took advantage of the same backdoor that allows selected members of the LIGO team to inject a fake signal in order to test the analysis chain. This was brought to the attention of the collaboration members, who decided to test the code. To everyone's bewilderment, the effect was to reproduce exactly the same signal in the LIGO apparatus as the one observed in September last year!

Even though no traces of the hack can be discovered, there is little doubt now that foul play was involved. It is not clear what the hacker's motive was: was it just a prank, or an elaborate plan to discredit the scientists? What is even more worrying is that the same thing could happen in other experiments. The rumor is that the ATLAS and CMS collaborations are already checking whether the 750 GeV diphoton resonance signal could also have been injected by a hacker.

## April 07, 2016

### Symmetrybreaking - Fermilab/SLAC

Physicists build ultra-powerful accelerator magnet

An international partnership to upgrade the LHC has yielded the strongest accelerator magnet ever created.

The next generation of cutting-edge accelerator magnets is no longer just an idea. Recent tests revealed that the United States and CERN have successfully co-created a prototype superconducting accelerator magnet that is much more powerful than those currently inside the Large Hadron Collider. Engineers will incorporate more than 20 magnets similar to this model into the next iteration of the LHC, which will take the stage in 2026 and increase the LHC’s luminosity by a factor of ten. That translates into a ten-fold increase in the data rate.

“Building this magnet prototype was truly an international effort,” says Lucio Rossi, the head of the High-Luminosity (HighLumi) LHC project at CERN. “Half the magnetic coils inside the prototype were produced at CERN, and half at laboratories in the United States.”

During the original construction of the Large Hadron Collider, US Department of Energy national laboratories foresaw the future need for stronger LHC magnets and created the LHC Accelerator Research Program (LARP): an R&D program committed to developing new accelerator technology for future LHC upgrades.

This 1.5-meter-long model, which is a fully functioning accelerator magnet, was developed by scientists and engineers at Fermilab, Brookhaven National Laboratory, Lawrence Berkeley National Laboratory, and CERN. The magnet recently underwent an intense testing program at Fermilab, which it passed in March with flying colors. It will now undergo a rigorous series of endurance and stress tests to simulate the arduous conditions inside a particle accelerator.

The 1.5-meter prototype magnet, the MQXF1 quadrupole, sits at Fermilab before testing.

G. Ambrosio (US-LARP and Fermilab), P. Ferracin and E. Todesco (CERN TE-MSC)

This new type of magnet will replace about 5 percent of the LHC’s focusing and steering magnets when the accelerator is converted into the High-Luminosity LHC, a planned upgrade which will increase the number and density of protons packed inside the accelerator. The HL-LHC upgrade will enable scientists to collect data at a much faster rate.

The LHC’s magnets are made by repeatedly winding a superconducting cable into long coils. These coils are then installed on all sides of the beam pipe and encased inside a superfluid helium cryogenic system. When cooled to 1.9 Kelvin, the coils can carry a huge amount of electrical current with zero electrical resistance. By modulating the amount of current running through the coils, engineers can manipulate the strength and quality of the resulting magnetic field and control the particles inside the accelerator.

The magnets currently inside the LHC are made from niobium titanium, a superconductor that can operate inside a magnetic field of up to 10 teslas before losing its superconducting properties. This new magnet is made from niobium-three tin (Nb3Sn), a superconductor capable of carrying current through a magnetic field of up to 20 teslas.

“We’re dealing with a new technology that can achieve far beyond what was possible when the LHC was first constructed,” says Giorgio Apollinari, Fermilab scientist and Director of US LARP. “This new magnet technology will make the HL-LHC project possible and empower physicists to think about future applications of this technology in the field of accelerators.”

A High-Luminosity LHC coil similar to those incorporated into the successful magnet prototype shows the collaboration between CERN and the LHC Accelerator Research Program, LARP.

Photo by Reidar Hahn, Fermilab

This technology is powerful and versatile—like upgrading from a moped to a motorcycle. But this new super material doesn’t come without its drawbacks.

“Niobium-three tin is much more complicated to work with than niobium titanium,” says Peter Wanderer, head of the Superconducting Magnet Division at Brookhaven National Lab. “It doesn’t become a superconductor until it is baked at 650 degrees Celsius. This heat-treatment changes the material’s atomic structure and it becomes almost as brittle as ceramic.”

Building a moose-sized magnet from a material more fragile than a teacup is not an easy endeavor. Scientists and engineers at the US national laboratories spent 10 years designing and perfecting a new and internationally reproducible process to wind, form, bake and stabilize the coils.

“The LARP-CERN collaboration works closely on all aspects of the design, fabrication and testing of the magnets,” says Soren Prestemon of the Berkeley Center for Magnet Technology at Berkeley Lab. “The success is a testament to the seamless nature of the collaboration, the level of expertise of the teams involved, and the ownership shown by the participating laboratories.”

This model is a huge success for the engineers and scientists involved. But it is only the first step toward building the next big supercollider.

“This test showed that it is possible,” Apollinari says. “The next step is to apply everything we’ve learned, moving from this prototype to bigger and bigger magnets.”

### Lubos Motl - string vacua and pheno

$$P=NP$$ and string landscape: 10 years later
The newborn Universe was a tiny computer but it still figured out amazing things
Two new papers: First, off-topic. PRL published a Feb 2016 LIGO paper claiming that gravitational waves (Phys.Org says "gravity waves" and I find it OK) contribute much more to the background than previously believed. Also, Nature published a German-American-Canadian paper claiming that supermassive black holes are omnipresent because they saw one (with a mass of 17 billion Suns) even in an isolated galaxy. Also, the LHC will open the gate to hell in 2016 again, due to the higher luminosity (or power, they don't care). CERN folks have recommended a psychiatrist to the conspiracy theorists such as Pope Francis. But God demands at most 4 inverse femtobarns a year because the fifth one is tickling Jesus' testicles.
Ten years ago, the research of the string landscape was a hotter topic than it is today. Because of some recent exchanges about $$P=NP$$, I was led to recall the February 2006 paper by Michael Douglas and Frederik Denef,
Computational complexity of the landscape I
They wrote that the string landscape has lots of elements and, invoking the widespread computer-science lore that $$P\neq NP$$, argued that it's probably going to be permanently impossible to find the right string vacuum even if the string landscape predicts one.

The authors have promised the second paper in the series,
[48] F. Denef and M. R. Douglas, “Computational Complexity of the Landscape II: Cosmological Considerations,” to appear
but now, more than 10 years later, we are still waiting for this companion paper to appear. ;-) What new ideas and views do I have 10 years later?

Douglas and Denef are extremely smart men. And they started – or Douglas started – to look at the landscape from a "global" or "computational" perspective sometime in 2004 or so (see e.g. the TRF blog posts with the two names that started in 2005). These two authors co-wrote 6 inequivalent papers and you may see that the paper on the complexity was the least cited one among them, although 56 citations is decent.

Papers trying to combine $$P\neq NP$$ i.e. computer science with string phenomenology may be said to be "interdisciplinary". You may find people who are ready to equate "interdisciplinary" with "deep and amazing", for example Lee Smolin. However, the opinion of the actual, accomplished physicists is different. They're mostly ambivalent about the "sign" of the adjective "interdisciplinary" and many of them are pretty seriously dismissive of papers with this adjective because they largely equate "interdisciplinary" with "written by cranks such as Lee Smolin".

So whether or not you view "interdisciplinary" things as things that are "automatically better than others" depends on whether or not you are a crackpot yourself. Don't get me wrong: I do acknowledge that there are interesting papers that could be classified as interdisciplinary. Sometimes, a new discipline ready to explode is born in that way. However, I do dismiss the people who use this "interdisciplinary spirit" as a cheap way to look smarter, broader, and deeper in the eyes of the ignorant masses. The content of these papers is rubbish almost as a matter of a general law. The "interdisciplinary" status is often used to claim that the papers don't have to obey the quality requirements of either "main" discipline.

But I want to mention more essential things here.

If you read the rather pedagogic Wikipedia article on $$P=NP$$, the first example of an $$NP$$ problem you will encounter is the "subset sum problem". It's in $$NP$$ because it takes at most $$n^p$$ steps, for some fixed power $$p$$, to verify a proposed solution; $$n$$ is the number of elements in the set.
For instance, does a subset of the set $$\{−2, −3, 15, 14, 7, −10\}$$ add up to $$0$$? The answer "yes, because the subset $$\{−2, −3, −10, 15\}$$ adds up to zero" can be quickly verified with three additions. There is no known algorithm to find such a subset in polynomial time (there is one, however, in exponential time, which consists of $$2^n-n-1$$ tries), but such an algorithm exists if $$P = NP$$; hence this problem is in $$NP$$ (quickly checkable) but not necessarily in $$P$$ (quickly solvable).
If the set has $$n$$ elements, there are $$2^n-1$$ ways to choose a non-empty subset because you have to pick the value of $$n$$ bits. It's not hard to convince yourself that you should give up the search for the right subset of numbers – whose sum is zero – if $$n$$ is too large and the numbers look hopelessly random. It looks like the manual verification of all possible $$2^n-1$$ subsets is the only plausible way to approach the problem.
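The exhaustive search just described can be sketched in a few lines of Python (a minimal illustration of the $$2^n-1$$ brute force, not code from any paper):

```python
from itertools import combinations

def zero_subset_brute_force(numbers):
    """Exhaustively try the 2^n - 1 non-empty subsets, smallest first,
    and return the first one whose elements sum to zero (or None)."""
    for size in range(1, len(numbers) + 1):
        for subset in combinations(numbers, size):
            if sum(subset) == 0:
                return subset
    return None

zero_subset_brute_force([-2, -3, 15, 14, 7, -10])  # → (-2, -3, 15, -10)
```

For the six-element example above this checks 63 subsets at most; for $$n=60$$ random numbers it would already be over $$10^{18}$$ subsets, which is the whole point of the problem's difficulty.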

But the fact that you don't know where to start, or that you have convinced yourself to give up, doesn't mean that no one can ever find a faster way to locate the right subset. You may want an algorithm that doesn't "discriminate" against some sets of $$n$$ numbers. And indeed, it seems plausible that no fast non-discriminatory method to find the right solution exists.

However, the faster solution to the "subset sum problem" may very well be discriminatory. It may be an algorithm that chooses and cleverly combines very different strategies depending on what the numbers actually are. Imagine that some numbers in the set are of order $$1$$, others are of order $$1,000$$, others are of order $$1,000,000$$ etc. With some extra assumptions, the small numbers don't matter and the "larger" numbers must approximately cancel "first" before you consider the smaller numbers.

In that case, you start with the "largest" numbers of order one million, and solve the "much smaller" problem, and you know which of these largest numbers are in the right subset and which of them aren't. Their sum isn't exactly equal to zero – it differs by a difference of order $$1,000$$ – and you may pick the right subset of the numbers of order $$1,000$$, and so on. With this hierarchy, you ultimately find the right subset "a scale after scale".

This doesn't work when the numbers are of the same order. But it's conceivable that there exist other special limiting situations in which you may find a method that "dramatically speeds up the search", and the patches of these special conditions where a "dramatic speed up is possible" may actually cover the whole set $$\mathbb{R}^n$$. There are many clever tricks – dividing into many sub-problems, sorting, but also the Fourier transform, auxiliary zeta-like functions, whatever – that other people may come up with to get much further in solving the problem.
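One concrete example of such a "discriminatory" shortcut – my own illustration, not anything from the Denef-Douglas paper: when the entries are integers of bounded size, the classic dynamic-programming solution of subset sum beats the $$2^n$$ enumeration, because the number of distinct achievable sums is far smaller than the number of subsets:

```python
def zero_subset_dp(numbers):
    """Pseudo-polynomial subset-sum over integers: `reachable` maps each
    achievable subset sum to one subset attaining it.  Runs in O(n * S)
    steps, where S is the number of distinct sums -- polynomial when the
    integers are bounded, even though the subset count is exponential."""
    reachable = {0: ()}  # the empty subset reaches sum 0
    for x in numbers:
        for s, subset in list(reachable.items()):  # snapshot before updating
            candidate = subset + (x,)
            if s + x == 0:
                return candidate  # non-empty subset summing to zero
            reachable.setdefault(s + x, candidate)
    return None
```

With hopelessly random 60-digit entries the table of sums is itself exponentially large, so the trick buys nothing – which is exactly the distinction between "generic" and "structured" instances discussed below.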

There's simply no proof that a polynomially fast algorithm finding the "subset with the sum equal to zero" doesn't exist. There's no proof that such an algorithm exists, either; and obviously, no one knows such an algorithm (knowing one would be even more ambitious than proving its existence, because a proof that an algorithm exists may be merely existential).

OK, Douglas and Denef were piling one speculation upon another, so they were surely $$P\neq NP$$ believers. But the new "floors of speculations" (more physical speculations) they added on top of $$P\neq NP$$ were even more controversial. Their basic opinion – with defeatist consequences – was simple. The search for the right "flux vacuum" in the landscape is analogous to the "zero subset problem". And there is an exponentially large number of ways to assign the fluxes – just like there is an exponentially large number of subsets. It seems that "checking the vacua one by one" is, as with the subsets, the only possible strategy. So if the number of vacua is $$10^{500}$$ or so, we will never find the right one.

Maybe. But maybe, this statement will turn out to be completely wrong, too.

Even if one believes that $$P\neq NP$$ and that there's no "polynomially fast" solution to the "zero subset problem" (and to the analogous but harder problems in the search for the right fluxes and the correct string vacuum), it doesn't follow that physicists will never be able to find the right vacuum. The reason is that $$P\neq NP$$ only requires that no algorithm can solve the "zero subset problem" for every set of $$n$$ real numbers in the whole $$\mathbb{R}^n$$, including the "most generic" ones.

However, the analogous problem in the string landscape is in no way proven – or should be expected – to be "generic" in this sense. The vacua may always exhibit e.g. the hierarchy that I previously mentioned as a feature that may dramatically speed up the search. Or, more likely and more deeply, the numbers may have some hidden patterns and dual descriptions that allow us to calculate the "right choice of the bits" by a very different calculation than by trying an exponentially large number of options, one by one.

Denef and Douglas basically assumed that "all the numbers in this business are just hopeless, disordered mess", and there is an exponentially large number of options, so there's no chance to succeed because "brute force" is the only tool that can deal with generic mess and the required amount of brute force is too large. But this is an additional (and vaguely and emotionally formulated) assumption, not a fact. All of science is full of examples where this "it is hopeless" assumption totally fails – in fact, it fails almost whenever scientists succeed. ;-)

I am constantly reminded about the likely flaws of the "defeatist" attitude when I talk to laymen, e.g. relatives, who are absolutely convinced that almost nothing can be scientifically determined or calculated. We play cards and I mention that the probability that, from a nicely shuffled pack of cards, someone gets at least 8 wild cards (in a hand of 14) is 1 in half a million. I get immediately screamed at. How could this be calculated? It's insane, it's an infinitely difficult problem. They're clearly assuming something like "playing zillions of games is a necessary condition to estimate the answer, and even that isn't enough". And I am like: What? I could calculate all these things when I was 9 years old; it's just damn basic combinatorics. ;-) The Pythagoriad contest 32 years ago was full of similar – and harder – problems. Clearly, with some computer assistance, I can crack much more complex problems than that. And in physics, we can in principle calculate almost everything we observe.
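The card probability is indeed "just damn basic combinatorics", namely a hypergeometric tail. A sketch with hypothetical deck parameters – I don't know the actual game played, so the 108-card, 12-wild-card (canasta-like) pack below is my assumption, not the author's deck:

```python
from math import comb

def prob_at_least(k, hand, wild, deck):
    """Chance that a `hand`-card deal from a `deck`-card pack containing
    `wild` wild cards includes at least `k` of them (hypergeometric tail)."""
    return sum(comb(wild, j) * comb(deck - wild, hand - j)
               for j in range(k, min(wild, hand) + 1)) / comb(deck, hand)

# hypothetical pack: 108 cards, 12 wild, hand of 14 (assumed parameters)
p = prob_at_least(8, hand=14, wild=12, deck=108)
```

With these assumed parameters the answer comes out in the "one in a few hundred thousand" ballpark, i.e. the same order as the 1-in-half-a-million figure quoted above.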

Now, Denef, Douglas, and other particle physicists aren't laymen in this sense but they can be victims of the same "I don't know what to do so it must be impossible" fallacy. I am often tempted to surrender to this fallacy but I try not to. The claim that "something is impossible to calculate" requires much more solid evidence than the observation that "I don't know where to start".

Let me give you a simple example from quantum field theory. Calculate the tree-level scattering amplitude in the $$\mathcal{N}=4$$ gauge theory with a large number $$n$$ of external gluons. Choose their helicity to be "maximally helicity violating" (or higher than that). The number of Feynman diagrams contributing to the scattering amplitude may be huge – exponentially or factorially growing with $$n$$, at least if you allow loops – but we know that the scattering amplitude is extremely simple (or zero, in the super-extreme cases) because of other considerations. This claim supports a part of the twistor/amplituhedron minirevolution and may be proven by recursive identities and similar tools.
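For concreteness (my addition, not in the original post): the simplicity in question is the Parke-Taylor formula, which packs the whole color-ordered tree-level MHV amplitude, with gluons $$i$$ and $$j$$ of negative helicity and all others positive, into a single term of spinor products

$$A^{\rm tree}_n(1^+,\dots,i^-,\dots,j^-,\dots,n^+) \propto \frac{\langle i\,j\rangle^4}{\langle 1\,2\rangle\,\langle 2\,3\rangle \cdots \langle n\,1\rangle}$$

no matter how many Feynman diagrams nominally contribute.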

So it's clearly not true that the brute force incorporation of all the diagrams is the only way or the fastest way to find the solution. There exist hidden structures in the theory that allow you to find the answer more effectively by transformations, twistors, or – and this theme is really deep and omnipresent in recent 20 years of theoretical physics – through dual descriptions (which are equivalent but the equivalence looks really shocking at the beginning).

It's plausible that even though one string vacuum is correct, humans have no chance to find it. But it's also plausible that they will find it. They may localize the viable subsets by applying many filters, or they may calculate the relevant parameters or the cosmological constant using an approximate or multi-stage scheme that allows one to pick tiny viable subsets and dramatically simplify the problem. Or our vacuum can simply be a special one in the landscape. It may be the "simplest one" in some topological sense; and/or the early cosmology may produce a probability for the right vacuum that vastly exceeds the probability of the others, and so on. The loopholes are uncountable. The fact that you may write down a "story" whose message is that "the problem is hard" doesn't mean much. A "story" that implicitly assumes that you shall listen to no other stories is just propaganda.

But I want to mention one point about the Douglas-Denef research direction that I have never paid much attention to. It's largely related to the second paper in the "complexity and the landscape" series that they promised but never delivered, the one that was more focused on the early cosmology. Did the paper exist? Was it shown to be wrong? What was in it? The abstract of the first, published paper tells us something about the second, non-existent paper:
In a companion paper, we apply this point of view to the question of how early cosmology might select a vacuum.
This sentence actually sketches a totally wonderful question that I had never articulated cleanly enough. It's possible that the only problem that this question created was that it made Douglas and Denef realize something that invalidates pretty much the basic assumptions or philosophy of their whole research direction (the basic assumption is basically defeatism and frustration). What do I mean?

It seems to me that these two men have realized something potentially far-reaching and it's this simple idea:
It seems that the newborn Universe was a rather small quantum mechanical system with a limited number of degrees of freedom – perhaps something comparable to a 10-qubit quantum computer to be produced in 2020. But if that's so, all the problems that this "cosmic computer" was expected to solve must be reasonably simple!
This looks like a potentially powerful – and perhaps more far-reaching than the Douglas-Denef $$P\neq NP$$ defeatist – realization. You know, if the very young quantum computer had the task to find the right values of fluxes to produce a vacuum with a tiny cosmological constant, a task that may be considered similar to the "zero subset problem" for a large value of $$n$$ – then it seems that the relatively modest "cosmic computer" must have had a way to solve the problem because we're here, surrounded by the cosmological constant of a tiny magnitude.

This realization sounds great and encouraging. However, it may be largely if not completely invalidated by the anthropic reasoning. The anthropic reasoning postulates that all the Universes with the totally wrong values of the cosmological constant and other parameters also exist, just like our hospitable Universe. They only differ by the absence of The Reference Frame weblog in them. So no serious physics blogger and his serious readers can discuss the question why the early cosmic computer decided to pick one value of the cosmological constant or another. If you embrace this anthropic component of the "dynamics", it is no longer necessary that the "right vacuum was quickly picked/calculated by a tiny cosmic quantum computer". Instead, the selection could have been made "anthropically" by a comparison of quattuoroctogintillions of Universes much later, after they have evolved for billions of years and became large. (The number is comparable to the income I currently get in Adventure Capitalist when I move the Android clock to 1970 and back to 2037 LOL. I haven't played it today. It's surely a good game to teach you not to be terrified by large but still "just exponential" integers.)

Now, the fifth section of the Douglas-Denef Complexity I paper is dedicated to the more advanced "quantum computing" issues. This section was arguably a "demo" of the Complexity II paper that has never appeared. The most important guy who is cited is Scott Aaronson. The first four references in the Denef-Douglas paper point to papers by Aaronson – your desire to throw up only weakens once you realize that the authors in the list of references are alphabetically sorted. ;-)

I believe that I have only known the name of Scott Aaronson since late 2006 when this Gentleman entered the "String Wars" by boasting that he didn't give a damn about the truth about physics, and as all other corrupt šitheads, he will parrot the opinions of the highest bidder and seek to maximize profit from all sides of the "String Wars". He immediately became a role model of the Academic corruption in my eyes.

So Douglas and Denef actually talk about lots of the complexity classes that Aaronson's texts (and his book) were always full of, $$NP$$, co-$$NP$$, $$NP$$ to the power of co-$$NP$$, $$DP$$, $$PH$$, $$PSPACE$$, and so on.

But it seems sort of striking how they ignored a realization that could have been the most innovative one in their research:
Our brains shouldn't give up the search for the right vacuum defining the Universe around us because the newborn Universe was, in some sense, smaller than our brains but it managed to find the right answer, too.
Again, I am neither optimistic nor pessimistic when it comes to all these questions, e.g. the question whether the humans will ever find the correct stringy vacuum. I think it's OK and actually unavoidable when individual researchers have their favorite answers. But when it comes to important assumptions that haven't been settled in one way or another, it's absolutely critical for the whole thinking mankind to cover both (or all) possibilities.

It's essential that in the absence of genuine evidence (or, better, proofs), people in various "opinion minorities" are not being hunted, eliminated, or forbidden. The anthropic principle has been an example of that. The more you believe in some kind of a (strong enough) anthropic principle, the more you become convinced that the amount of fundamental progress in the future will be close (and closer) to zero.

But high-energy fundamental physicists are arguably smart and tolerant enough that they realize that no real (non-circular, not just "plausible scenario") evidence exists that would back the defeatist scenarios and many other opinions. That's why the people, even though you could probably divide them into "anthropic" and "non-anthropic" camps rather sharply, have largely stopped the bloody arguments. They know that if a clear proof of their view existed, they could have already turned it into a full-fledged (and probably quantitative) theory that almost everyone must be able to check. Such a complete theory answering these big questions doesn't exist in the literature, so people realize that despite the differences in their guesses, all of them are comparably ignorant.

Almost everyone has realized that the papers that exist simply aren't solid evidence that may rationally settle the big questions. It's important that some people keep on trying to isolate the right vacuum because there exists a possibility that they will succeed – but they must be allowed to try. I think that the dangerous herd instinct and the suppression of ad hoc random minorities is much more brutal in many other fields of science (and especially in fields that are "not quite science").

## April 05, 2016

### Symmetrybreaking - Fermilab/SLAC

Six weighty facts about gravity

Perplexed by gravity? Don’t let it get you down.

Gravity: we barely ever think about it, at least until we slip on ice or stumble on the stairs. To many ancient thinkers, gravity wasn’t even a force—it was just the natural tendency of objects to sink toward the center of Earth, while planets were subject to other, unrelated laws.

Of course, we now know that gravity does far more than make things fall down. It governs the motion of planets around the Sun, holds galaxies together and determines the structure of the universe itself. We also recognize that gravity is one of the four fundamental forces of nature, along with electromagnetism, the weak force and the strong force.

The modern theory of gravity—Einstein’s general theory of relativity—is one of the most successful theories we have. At the same time, we still don’t know everything about gravity, including the exact way it fits in with the other fundamental forces. But here are six weighty facts we do know about gravity.

Illustration by Sandbox Studio, Chicago with Ana Kova

### 1. Gravity is by far the weakest force we know.

Gravity only attracts—there’s no negative version of the force to push things apart. And while gravity is powerful enough to hold galaxies together, it is so weak that you overcome it every day. If you pick up a book, you’re counteracting the force of gravity from all of Earth.

For comparison, the electric force between an electron and a proton inside an atom is roughly 10^39 times (a one with 39 zeroes after it) stronger than the gravitational attraction between them. In fact, gravity is so weak, we don’t know exactly how weak it is.
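That ratio is easy to check with a back-of-the-envelope calculation (a sketch using rounded CODATA values; the separation between the two particles cancels, since both forces fall off as 1/r²):

```python
G   = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2 (rounded)
k_e = 8.988e9     # Coulomb constant, N m^2 C^-2 (rounded)
q   = 1.602e-19   # elementary charge, C
m_e = 9.109e-31   # electron mass, kg
m_p = 1.673e-27   # proton mass, kg

# |F_electric| / |F_gravity| for an electron-proton pair; r cancels
ratio = (k_e * q**2) / (G * m_e * m_p)   # roughly 2e39
```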

Illustration by Sandbox Studio, Chicago with Ana Kova

### 2. Gravity and weight are not the same thing.

Astronauts on the space station float, and sometimes we lazily say they are in zero gravity. But that’s not true. The force of gravity on an astronaut is about 90 percent of the force they would experience on Earth. However, astronauts are weightless, since weight is the force the ground (or a chair or a bed or whatever) exerts back on them.

Take a bathroom scale onto an elevator in a big fancy hotel and stand on it while riding up and down, ignoring any skeptical looks you might receive. Your weight fluctuates, and you feel the elevator accelerating and decelerating, yet the gravitational force is the same. In orbit, on the other hand, astronauts move along with the space station. There is nothing to push them against the side of the spaceship to make weight. Einstein turned this idea, along with his special theory of relativity, into general relativity.

Illustration by Sandbox Studio, Chicago with Ana Kova

### 3. Gravity makes waves that move at light speed.

General relativity predicts gravitational waves. If you have two stars or white dwarfs or black holes locked in mutual orbit, they slowly get closer as gravitational waves carry energy away. In fact, Earth also emits gravitational waves as it orbits the sun, but the energy loss is too tiny to notice.

We’ve had indirect evidence for gravitational waves for 40 years, but the Laser Interferometer Gravitational-wave Observatory (LIGO) only confirmed the phenomenon this year. The detectors picked up a burst of gravitational waves produced by the collision of two black holes more than a billion light-years away.

One consequence of relativity is that nothing can travel faster than the speed of light in vacuum. That goes for gravity, too: If something drastic happened to the sun, the gravitational effect would reach us at the same time as the light from the event.

Illustration by Sandbox Studio, Chicago with Ana Kova

### 4. Explaining the microscopic behavior of gravity has thrown researchers for a loop.

The other three fundamental forces of nature are described by quantum theories at the smallest of scales, specifically by the Standard Model. However, we still don’t have a fully working quantum theory of gravity, though researchers are trying.

One avenue of research is called loop quantum gravity, which uses techniques from quantum physics to describe the structure of space-time. It proposes that space-time is particle-like on the tiniest scales, the same way matter is made of particles. Matter would be restricted to hopping from one point to another on a flexible, mesh-like structure. This allows loop quantum gravity to describe the effect of gravity on a scale far smaller than the nucleus of an atom.

A more famous approach is string theory, where particles—including gravitons—are considered to be vibrations of strings that are coiled up in dimensions too small for experiments to reach. Neither loop quantum gravity nor string theory, nor any other theory is currently able to provide testable details about the microscopic behavior of gravity.

Illustration by Sandbox Studio, Chicago with Ana Kova

### 5. Gravity might be carried by massless particles called gravitons.

In the Standard Model, particles interact with each other via other force-carrying particles. For example, the photon is the carrier of the electromagnetic force. The hypothetical particles for quantum gravity are gravitons, and we have some ideas of how they should work from general relativity. Like photons, gravitons are likely massless. If they had mass, experiments should have seen something—but it doesn’t rule out a ridiculously tiny mass.

Illustration by Sandbox Studio, Chicago with Ana Kova

### 6. Quantum gravity appears at the smallest length anything can be.

Gravity is very weak, but the closer together two objects are, the stronger it becomes. Ultimately, it reaches the strength of the other forces at a very tiny distance known as the Planck length, many times smaller than the nucleus of an atom.
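The Planck length follows from combining the three constants that govern gravity, quantum mechanics and relativity (a quick numerical check with rounded values):

```python
from math import sqrt

hbar = 1.055e-34   # reduced Planck constant, J s (rounded)
G    = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c    = 2.998e8     # speed of light, m/s

planck_length = sqrt(hbar * G / c**3)   # about 1.6e-35 m
# a nuclear radius is ~1e-15 m, so this is ~20 orders of magnitude smaller
```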

That’s where quantum gravity’s effects will be strong enough to measure, but it’s far too small for any experiment to probe. Some people have proposed theories that would let quantum gravity show up at close to the millimeter scale, but so far we haven’t seen those effects. Others have looked at creative ways to magnify quantum gravity effects, using vibrations in a large metal bar or collections of atoms kept at ultracold temperatures.

It seems that, from the smallest scale to the largest, gravity keeps attracting scientists’ attention. Perhaps that’ll be some solace the next time you take a tumble, when gravity grabs your attention too.

### Lubos Motl - string vacua and pheno

Sonified raw Higgs data sound like Beethoven, Wagner
Update: the credibility of all the information below is impaired by the date, April 1st

LIGO's gravitational waves sound like music of a sort. Particle physicists at CERN have finally, carefully transformed the Higgs-producing proton-proton collision data into musical patterns, and the result was somewhat surprising.

The spectrum almost exactly resembles Ludwig van Beethoven's Fateful Fifth Symphony (for Japanese readers, it is the Anthem of Asagohan Breakfasts; don't forget that if you need a Japanese loan, tomate natotata). You can see that the accuracy is overwhelming – famous CERN professors such as Rebeca Einstein were literally dancing to the tune when the data were sonified.

The finding may be said to be a fruit of the 2014 research project of CERN physicists who began to stick random things to the collider. They have also placed the Royal Albert Hall to the trajectory of the LHC beam.

But the CERN press release makes it obvious that the CERN researchers aren't fully aware of all the far-reaching implications of their discovery. The second part of the spectrum actually isn't Beethoven's; it is Richard Wagner's Ride of the Valkyries.

You don't need a physics PhD to figure out that this finding means that there are at least two Higgs bosons. The appearance of the two musical compositions more or less proves supersymmetry. The LHC teams have missed this lesson that wasn't hiding too well in the data but they haven't missed the big picture. They believe that when they look for music in the Higgs and gauge boson sector, the tunes will prove string theory.

LIGO has sonified the yet-to-be-announced GW151226 gravitational waves. What was hiding inside was Johann Strauss' Blue Danube Waltz. We're apparently entering a new epoch of science, the era of unification of mind, matter, and music (three more explanations of the term M-theory).

Some true visionaries, e.g. the Nobel prize winner Brian Josephson, have anticipated this unification of mind, matter, and music for years. Many of us were laughing at them and calling them psychiatrically ill crackpots but now we are being shown that Josephson and his colleagues were sane all along and they should be finally released from the psychiatric asylums.

The particular theoretical predictions suggesting the German tunes were originally calculated by folks at the Fermilab. George W. Bush has famously corrected a 25% error resulting from a miscalculation of the tau neutrino branching ratio.

By the way, after self-driving cars, Google Holland finally revealed a product they should have produced earlier, the self-driving bikes. Meanwhile, Seznam.cz, Google's main Czech nation-wide competitor, has finally presented a simplified version of its maps service. I must admit that the map of Czechia looks simpler now, indeed.

Fidorka cookies will now beat competition by its magnetic wrapping – you can attach the round chocolate bar to your friend or your car etc. ;-)

Pastebin has switched to CERN's official font, Comic Sans.

YouTube allows you to watch existing videos in 360°, thanks to Snoopavision. Only Google has disabled its revolutionary Mic Drop Feature in Gmail, a button that allowed the users to end the conversation. The reason for the retirement of the great feature was just a few million lost jobs.

Czech carmaker Škoda Auto finally introduced a brand new version of Superb for the U.K. market. The new model differs from the 3-months-old one by an extra dog umbrella.