# Particle Physics Planet

## January 28, 2015

### Emily Lakdawalla - The Planetary Society Blog

Ceres: Just a little bit closer (and officially better than Hubble)
Last week's Dawn images of Ceres were just slightly less detailed than Hubble's best. This week's are just slightly better.

## January 27, 2015

### Christian P. Robert - xi'an's og

On Wednesday afternoon, Richard Everitt and Dennis Prangle organised an RSS workshop in Reading on Bayesian Computation, and invited me to give a talk there, along with John Hemmings, Christophe Andrieu, Marcelo Pereyra, and themselves. Given the proximity between Oxford and Reading, this felt like a neighbourly visit, especially when I realised I could take my bike on the train! John Hemmings gave a presentation on synthetic models for climate change and their evaluation, which could have some connection with Tony O’Hagan’s recent talk in Warwick. Dennis told us about “the lazier ABC” version in connection with his “lazy ABC” paper. [From my very personal view] Marcelo expanded on the Moreau-Yosida expansion he had presented in Bristol about six months ago, with the notion that using a Gaussian tail regularisation of a sub-exponential target in a Langevin algorithm could produce better convergence guarantees than the competition, including Hamiltonian Monte Carlo. Luke Kelly spoke about an extension of phylogenetic trees using a notion of lateral transfer, and Richard introduced a notion of biased approximation to Metropolis-Hastings acceptance ratios, a notion that I found quite attractive if not completely formalised, as there should be a Monte Carlo equivalent to the improvement brought by biased Bayes estimators over unbiased classical counterparts. (Repeating a remark made by Persi Diaconis more than 20 years ago.) Christophe Andrieu also presented some developments of his on exact approximations à la Andrieu and Roberts (2009).

Since those developments are not yet finalised into an archived document, I will not go into the details, but I found the results quite impressive and worth exploring, so I am looking forward to the forthcoming publication. One aspect of the talk which I can comment on is related to the exchange algorithm of Murray et al. (2006). Let me recall that this algorithm handles doubly intractable problems (i.e., likelihoods with intractable normalising constants) by introducing auxiliary variables with the same distribution as the data given the new value of the parameter, and computing an augmented acceptance ratio whose expectation is the targeted acceptance ratio and which conveniently removes the unknown normalising constants. This auxiliary scheme produces a random acceptance ratio and hence differs from the exact-approximation MCMC approach, which targets the intractable likelihood directly. It somewhat replaces the unknown constant with the density taken at a plausible realisation, hence providing a proper scale. At least for the new value. I wonder if a comparison has been conducted between both versions, the naïve intuition being that the ratio of estimates should be more variable than the estimate of the ratio. More generally, it seemed to me during the introductory part of Christophe’s talk that those different methods always faced a harmonic mean danger when being phrased as expectations of ratios, since those ratios were not necessarily square integrable. And not necessarily bounded. Hence my rather gratuitous suggestion of using other tools than the expectation, like maybe a median, thus circling back to the biased estimators of Richard. (And later cycling back, unscathed, to Reading station!)
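
The exchange step is short enough to sketch in code. The toy below is purely illustrative: a Gaussian model whose normalising constant we pretend is unknown stands in for a genuinely doubly intractable likelihood, and a flat prior with a symmetric random-walk proposal makes those terms cancel, leaving only the augmented ratio in which the unknown constants cancel exactly:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_f(y, theta):
    # unnormalised log-density: f(y|theta) = exp(-(y-theta)^2/2);
    # we pretend its normalising constant Z(theta) is unknown
    return -0.5 * np.sum((y - theta) ** 2)

y = rng.normal(2.0, 1.0, size=50)     # observed data, true theta = 2
theta = 0.0                           # initial state of the chain
chain = []
for _ in range(5000):
    prop = theta + rng.normal(0.0, 0.3)       # symmetric random-walk proposal
    w = rng.normal(prop, 1.0, size=len(y))    # auxiliary data, exact sampler
                                              # from f(.|prop)/Z(prop)
    # augmented acceptance ratio: Z(theta) and Z(prop) cancel exactly
    log_a = (log_f(y, prop) - log_f(y, theta)
             + log_f(w, theta) - log_f(w, prop))
    if np.log(rng.uniform()) < log_a:
        theta = prop
    chain.append(theta)

print(round(float(np.mean(chain[1000:])), 2))  # posterior mean, near the true value 2
```

The two `log_f(w, ·)` terms are exactly where the intractable constants drop out; an exact-approximation scheme would instead plug an unbiased estimate of the likelihood into the usual Metropolis-Hastings ratio.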

On top of the six talks in the afternoon, there was a small poster session during the tea break, where I met Garth Holloway, working in agricultural economics, who happened to be a (unsuspected) fan of mine!, to the point of entitling his poster “Robert’s paradox”!!! The problem covered by this undeserved denomination happened to be the bias in Chib’s approximation of the evidence in mixture estimation, a phenomenon that I related to the exchangeability of the component parameters in an earlier paper or set of slides. So “my” paradox is essentially label switching. For which I cannot claim any fame! Still, I am looking forward to the completed version of this poster to discuss Garth’s solution, but we had a beer together after the talks, drinking to the health of our common friend John Deely.

Filed under: Statistics, Travel, University life Tagged: BayesComp, biking, Chib's approximation, doubly intractable problems, exchange algorithm, exchangeability, John Deely, label switching, mixture estimation, Purdue University, trains, University of Oxford, University of Reading

### John Baez - Azimuth

Trends in Reaction Network Theory

For those who have been following the posts on reaction networks, this workshop should be interesting! I hope to see you there.

Workshop on Mathematical Trends in Reaction Network Theory, 1-3 July 2015, Department of Mathematical Sciences, University of Copenhagen. Organized by Elisenda Feliu and Carsten Wiuf.

### Description

This workshop focuses on current and new trends in mathematical reaction network theory, which we consider broadly to be the theory describing the behaviour of systems of (bio)chemical reactions. In recent years, new interesting approaches using theory from dynamical systems, stochastics, algebra and beyond, have appeared. We aim to provide a forum for discussion of these new approaches and to bring together researchers from different communities.

### Structure

The workshop starts in the morning of Wednesday, July 1st, and finishes at lunchtime on Friday, July 3rd. In the morning there will be invited talks, followed by contributed talks in the afternoon. There will be a reception and poster session Wednesday in the afternoon, and a conference dinner Thursday. For those participants staying Friday afternoon, a sightseeing event will be arranged.

### Organization

The workshop is organized by the research group on Mathematics of Reaction Networks at the Department of Mathematical Sciences, University of Copenhagen. The event is sponsored by the Danish Research Council, the Department of Mathematical Sciences and the Dynamical Systems Interdisciplinary Network, which is part of the UCPH Excellence Programme for Interdisciplinary Research.

### Confirmed invited speakers

• Nikki Meshkat (North Carolina State University, US)

• Alan D. Rendall (Johannes Gutenberg Universität Mainz, Germany)

• János Tóth (Budapest University of Technology and Economics, Hungary)

• Sebastian Walcher (RWTH Aachen, Germany)

• Gheorghe Craciun (University of Wisconsin, Madison, US)

• David Doty (California Institute of Technology, US)

• Manoj Gopalkrishnan (Tata Institute of Fundamental Research, India)

• Michal Komorowski (Institute of Fundamental Technological Research, Polish Academy of Sciences, Poland)

• John Baez (University of California, Riverside, US)

### Important dates

Abstract submission for posters and contributed talks: March 15, 2015.

Notification of acceptance: March 26, 2015.

Conference: July 1-3, 2015.

### The organizers

The organizers are Elisenda Feliu and Carsten Wiuf at the Department of Mathematical Sciences of the University of Copenhagen.

They’ve written some interesting papers on reaction networks, including some that discuss chemical reactions with more than one stationary state. This is a highly nonlinear regime that’s very important in biology:

• Elisenda Feliu and Carsten Wiuf, A computational method to preclude multistationarity in networks of interacting species, Bioinformatics 29 (2013), 2327-2334.

Motivation. Modeling and analysis of complex systems are important aspects of understanding systemic behavior. In the lack of detailed knowledge about a system, we often choose modeling equations out of convenience and search the (high-dimensional) parameter space randomly to learn about model properties. Qualitative modeling sidesteps the issue of choosing specific modeling equations and frees the inference from specific properties of the equations. We consider classes of ordinary differential equation (ODE) models arising from interactions of species/entities, such as (bio)chemical reaction networks or ecosystems. A class is defined by imposing mild assumptions on the interaction rates. In this framework, we investigate whether there can be multiple positive steady states in some ODE models in a given class.

Results. We have developed and implemented a method to decide whether any ODE model in a given class cannot have multiple steady states. The method runs efficiently on models of moderate size. We tested the method on a large set of models for gene silencing by sRNA interference and on two publicly available databases of biological models, KEGG and Biomodels. We recommend that this method is used as (i) a pre-screening step for selecting an appropriate model and (ii) for investigating the robustness of non-existence of multiple steady state for a given ODE model with respect to variation in interaction rates.

Availability and Implementation. Scripts and examples in Maple are available in the Supplementary Information.

• Elisenda Feliu, Injectivity, multiple zeros, and multistationarity in reaction networks, Proceedings of the Royal Society A.

Abstract. Polynomial dynamical systems are widely used to model and study real phenomena. In biochemistry, they are the preferred choice for modelling the concentration of chemical species in reaction networks with mass-action kinetics. These systems are typically parameterised by many (unknown) parameters. A goal is to understand how properties of the dynamical systems depend on the parameters. Qualitative properties relating to the behaviour of a dynamical system are locally inferred from the system at steady state. Here we focus on steady states that are the positive solutions to a parameterised system of generalised polynomial equations. In recent years, methods from computational algebra have been developed to understand these solutions, but our knowledge is limited: for example, we cannot efficiently decide how many positive solutions the system has as a function of the parameters. Even deciding whether there is one or more solutions is non-trivial. We present a new method, based on so-called injectivity, to preclude or assert that multiple positive solutions exist. The results apply to generalised polynomials and variables can be restricted to the linear, parameter-independent first integrals of the dynamical system. The method has been tested in a wide range of systems.

You can see more of their papers on their webpages.

### Quantum Diaries

Looking Forward to 2015: Analysis Techniques

With 2015 a few weeks old, it seems like a fine time to review what happened in 2014 and to look forward to the new year and the restart of data taking. Among many interesting physics results, LHCb saw its 200th publication, a test of lepton universality. With protons about to enter the LHC, and the ALICE and LHCb detectors recording muon data from transfer line tests between the SPS and LHC (see also here), the start of data-taking is almost upon us. For some implications, see Ken Bloom’s post here. Will we find supersymmetry? Split Higgs? Nothing at all? I’m not going to speculate on that, but I would like to review two results from LHCb and a few of the analysis techniques which enabled them.

The first result I want to discuss is the $$Z(4430)^{-}$$. The first evidence for this state came from the Belle Collaboration in 2007, with subsequent studies in 2009 and in 2013. BaBar also searched for the state, and while they did not see it, they did not rule it out.

The LHCb collaboration searched for this state using the specific decay mode $$B^0\to \psi’ K^{+} \pi^{-}$$, with $$\psi’$$ decaying to two muons. For more reading, see the nice writeup from earlier in 2014. As in the Belle analyses, which reconstructed the $$\psi’$$ from muons or electrons, the trick here is to look for bumps in the $$\psi’ \pi^{-}$$ mass distribution. If a peak appears which is not described by the conventional two- and three-quark states (the mesons and baryons we know and love), it must be from a state involving a $$c \overline{c}d\overline{u}$$ quark combination. The search is performed in two ways: a model-dependent search, which looks at the $$K\pi$$ and $$\psi’\pi$$ invariant mass and decay angle distributions, and a “model independent” search which looks for structure in the $$K\pi$$ system induced by a resonance in the $$\psi’\pi$$ system and does not invoke any exotic resonances.

At the end of the day, it is found in both cases that the data are not described without including a resonance for the $$Z(4430)^-$$.

Now, it appears that we have a resonance on our hands, but how can we be sure? In the context of the aforementioned model-dependent analysis, the amplitude for the $$Z(4430)^{-}$$ is modeled as a Breit-Wigner amplitude, which is a complex number. If this amplitude is plotted in the complex plane as a function of the invariant mass of the resonance, a circular shape is traced out. This is characteristic of a resonance. Therefore, by fitting the real and imaginary parts of the amplitude in six bins of $$\psi’\pi$$ invariant mass, the shape can be directly compared to that of an expected resonance. That’s exactly what’s done in the plot below:

The Argand diagram for the $$Z(4430)^{-}$$ search. Units are arbitrary.

What is seen is that the data (black points) roughly follow the outlined circular shape given by the Breit-Wigner resonance (red). The outliers are pulled due to detector effects. The shape quite clearly follows the circular characteristic of a resonance. This diagram is called an Argand Diagram.
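
The circle is not an accident of the data: it follows from the Breit-Wigner form itself, whose amplitude $$A(m) \propto 1/(m_0 - m - i\Gamma/2)$$ traces a circle of radius $$1/\Gamma$$ centred at $$i/\Gamma$$ as the mass sweeps through the resonance. Here is a quick numerical check of that fact, using the simple non-relativistic form with approximate $$Z(4430)^{-}$$ parameters (the actual LHCb fit uses a relativistic amplitude with a mass-dependent width):

```python
import numpy as np

# approximate Z(4430)- mass and width in GeV, for illustration only
m0, gamma = 4.475, 0.172

def bw(m):
    # non-relativistic Breit-Wigner amplitude, arbitrary normalisation
    return 1.0 / (m0 - m - 1j * gamma / 2.0)

masses = np.linspace(m0 - 2 * gamma, m0 + 2 * gamma, 200)
amps = bw(masses)

# every point should sit on a circle of radius 1/gamma centred at i/gamma:
# the characteristic Argand-diagram shape of a resonance
radii = np.abs(amps - 1j / gamma)
print(bool(np.allclose(radii, 1.0 / gamma)))  # -> True
```

At the pole, $$m = m_0$$, the amplitude is purely imaginary (the top of the circle), which is why the data points race counter-clockwise around the circle fastest near the resonance mass.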

Another analysis technique to identify resonances was used to find the two newest particles by LHCb:

Depiction of the two Xi_b resonances found by the LHCb Collaboration. Credit to Italic Pig

Or perhaps seen as

Xi_b resonances, depicted by Lison Bernet.

Any way that you draw them, the two new particles, the $$\Xi_b’^-$$ and $$\Xi_b^{*-}$$, were seen by the LHCb collaboration a few months ago. Notably, the paper was released almost 40 years to the day after the discovery of the $$J/\psi$$ was announced, sparking the November Revolution and the understanding that mesons and baryons are composed of quarks. The $$\Xi_b’^-$$ and $$\Xi_b^{*-}$$ baryons are yet another example of the quark model at work. The two particles are shown in $$\delta m \equiv m_{candidate}(\Xi_b^0\pi_s^-)-m_{candidate}(\Xi_b^0)-m(\pi)$$ space below.

$$\Xi_b’^-$$ and $$\Xi_b^{*-}$$ mass peaks shown in $$\delta(m_{candidate})$$ space.

Here, the search is performed by reconstructing $$\Xi_b^0 \pi^-_s$$ decays, where the $$\Xi_b^0$$ decays to $$\Xi_c^+\pi^-$$, and $$\Xi_c^+\to p K^- \pi^+$$. The terminology $$\pi_s$$ is only used to distinguish between that pion and the other pions. The peaks are clearly visible. Now, we know that there are two resonances, but how do we determine whether or not the particles are the $$\Xi_b’^-$$ and $$\Xi_b^{*-}$$? The answer is to fit what is called the helicity distributions of the two particles.

To understand the concept, let’s consider a toy example. First, let’s say that particle A decays to B and C, as $$A\to B C$$. Now, let’s let particle C also decay, to particles D and F, as $$C\to D F$$. In the frame where A decays at rest, the decay looks something like the following picture.

Simple Model of $$A\to BC$$, $$C\to DF$$

There should be no preferential direction for B and C to decay if A is at rest, and they will decay back to back from conservation of momentum. Likewise, the same would be true if we jump to the frame where C is at rest; D and F would have no preferential decay direction. Therefore, we can play a trick. Let’s take the picture above, and exactly at the point where C decays, jump to its rest frame. We can then measure the directions of the outgoing particles. We can then define a helicity angle $$\theta_h$$ as the angle between the C flight in A’s rest frame and D’s flight in C’s rest frame. I’ve shown this in the picture below.

Helicity Angle Definition for a simple model
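
The two boosts in this construction are easy to get wrong, so here is a minimal sketch in code. Everything in it (particle masses, momenta, the angle) is made up purely for illustration: we build a single $$A\to BC$$ event with A at rest, place D at a known angle in C’s rest frame, boost it to the lab, and check that the helicity-angle recipe recovers that angle.

```python
import numpy as np

def boost(p4, beta):
    """Lorentz-boost four-vector p4 = (E, px, py, pz) by velocity vector beta.

    With this convention, boosting a particle of velocity v by beta = -v
    brings it to rest.
    """
    beta = np.asarray(beta, dtype=float)
    b2 = beta @ beta
    if b2 == 0.0:
        return p4.copy()
    gamma = 1.0 / np.sqrt(1.0 - b2)
    bp = beta @ p4[1:]
    E = gamma * (p4[0] + bp)
    p = p4[1:] + ((gamma - 1.0) * bp / b2 + gamma * p4[0]) * beta
    return np.concatenate(([E], p))

def helicity_angle(pA_lab, pC_lab, pD_lab):
    """Angle between C's flight in A's rest frame and D's flight in C's rest frame."""
    pC_inA = boost(pC_lab, -pA_lab[1:] / pA_lab[0])
    pD_inC = boost(pD_lab, -pC_lab[1:] / pC_lab[0])
    n1 = pC_inA[1:] / np.linalg.norm(pC_inA[1:])
    n2 = pD_inC[1:] / np.linalg.norm(pD_inC[1:])
    return np.arccos(np.clip(n1 @ n2, -1.0, 1.0))

# One worked event: A (mass 6, at rest) -> B (mass 1) + C (mass 3),
# with C emitted along +z; all numbers are toy values.
M, mB, mC, mD = 6.0, 1.0, 3.0, 0.5
pstar = np.sqrt((M**2 - (mB + mC)**2) * (M**2 - (mB - mC)**2)) / (2 * M)
pA_lab = np.array([M, 0.0, 0.0, 0.0])
pC_lab = np.array([np.sqrt(mC**2 + pstar**2), 0.0, 0.0, pstar])

# Put D at a known angle to the z-axis in C's rest frame, boost to the lab...
theta_true, q = 1.2, 1.0
pD_C = np.array([np.sqrt(mD**2 + q**2),
                 q * np.sin(theta_true), 0.0, q * np.cos(theta_true)])
pD_lab = boost(pD_C, pC_lab[1:] / pC_lab[0])

# ...and check that the helicity-angle construction recovers it.
print(round(float(helicity_angle(pA_lab, pC_lab, pD_lab)), 6))  # -> 1.2
```

In an analysis one would histogram this angle over many events; a decay with no preferred direction then gives a flat distribution in $$\cos\theta_h$$.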

If there is no preferential direction of the decay, we would expect a flat distribution of $$\theta_h$$. The important caveat here is that I’m not including anything about angular momentum, spin or otherwise, in this argument. We’ll come back to that later. Now, we can identify A as the $$\Xi_b’^-$$ or $$\Xi_b^{*-}$$ candidate, C as the $$\Xi_b^0$$, and D as the $$\Xi_c^+$$ candidate used in the analysis. The actual data are shown below.

Helicity angle distributions for the $$\Xi_b’^-$$ and $$\Xi_b^{*-}$$ candidates (upper and lower, respectively).

While it appears that the lower-mass peak may have variations, it is statistically consistent with being a flat line. The extra power of such an analysis is that if we now consider the angular momentum of the particles themselves, there are implied selection rules which alter the distributions above, allowing spin hypotheses to be excluded or validated simply from the distribution shape. This is the rationale for the extra fit in the plot above. As it turns out, both distributions being flat allows for the identification of the $$\Xi_b’^-$$ and the $$\Xi_b^{*-}$$, but does not allow for conclusive ruling out of other spins.

With the restart of data taking at the LHC almost upon us (go look on Twitter for #restartLHC), if you see a claim for a new resonance, keep an eye out for Argand Diagrams or Helicity Distributions.

### Emily Lakdawalla - The Planetary Society Blog

A second ringed centaur? Centaurs with rings could be common
Chiron, which is both a centaur and a comet, may also have rings.

### astrobites - astro-ph reader's digest

Shedding Light on Galaxy Formation
Title: Galaxies that Shine: radiation hydrodynamical simulations of disk galaxies

Authors: Joakim Rosdahl, Joop Schaye, Romain Teyssier, Oscar Agertz

First Author’s Institution: Leiden Observatory, Leiden University, Leiden, The Netherlands

Paper Status: Submitted to MNRAS

Computational simulations have proven invaluable in understanding the formation and evolution of galaxies. When the first galaxies were made in simulations, they formed… too well. Gas cooled too much and too fast, and these galaxies formed way too many stars. These first simulations, however, missed a whole host of physics that today fall under the umbrella of “feedback” processes. Feedback encompasses a wide range of really interesting astrophysics, including radiation from stars heating and ionizing surrounding gas, thermal and kinetic energy injection from supernova explosions,  heating from active galactic nuclei (AGN), and the impact of AGN jets. Among other things, these processes can drive galactic winds, blowing gas out of galaxies, and slowing down star formation.

Including every type of feedback in a simulation is a great way to produce realistic galaxies, but is unfortunately computationally expensive and impossible to do perfectly. In fact, much of the relevant physics occurs on scales far smaller than the simulation resolution, and must be addressed with what is called “sub-grid” models. Today, the game of producing realistic galaxies in simulations boils down to figuring out the right physics to include, while minimizing computational costs. Much progress has been made in this field, with one giant exception: radiation. Properly accounting for radiation is expensive and complex; the nearly universal solution is to make assumptions about how photons propagate through gas, so it doesn’t have to be computed directly. The authors of this paper present the first galaxy-scale simulations of “radiation hydrodynamics”, or hydrodynamic simulations that directly compute the radiative transfer of photons, and their feedback onto the galaxy.

## Feedback, Feedback, Feedback

The authors produce galaxy simulations using RAMSES-RT, an adaptive mesh refinement (AMR) code based upon the RAMSES code that includes a nearly first-principles treatment of radiation. Their treatment of radiation breaks photons into five energy bins: infrared, optical, and three ultraviolet bins separated by the hydrogen and helium ionization energies. These act on the gas through three primary physical processes: the ionization and heating of the gas through interactions with hydrogen and helium, momentum transfer between the photons and the gas (radiation pressure), and pressure through interactions with dust (including the effects of light scattering off of dust). In addition, they include prescriptions for star formation and supernova feedback, radiative heating and cooling, and chemistry to accurately track the abundances of hydrogen and helium and their ionization states. The photons are produced every timestep in the simulation from “star particles” (representations of groups of stars in the simulated galaxy); the number of photons and their energies are determined by the given star particle’s mass and size.
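
As a concrete picture of the binning step, here is a sketch of how a photon’s energy maps onto the five groups. The ultraviolet splits sit at the H I, He I and He II ionisation energies as the text describes; the infrared and optical edges below are illustrative assumptions, not the paper’s actual values:

```python
# photon-group edges in eV; UV splits at the H I, He I, He II
# ionisation energies, IR/optical edges chosen for illustration
EDGES = [0.1, 1.0, 13.60, 24.59, 54.42, float("inf")]
NAMES = ["IR", "optical", "UV(HI)", "UV(HeI)", "UV(HeII)"]

def photon_bin(energy_ev):
    """Return the name of the energy group a photon of energy_ev falls in."""
    for lo, hi, name in zip(EDGES, EDGES[1:], NAMES):
        if lo <= energy_ev < hi:
            return name
    raise ValueError("energy below the lowest bin edge")

print(photon_bin(2.0), photon_bin(20.0), photon_bin(60.0))
```

Each group then carries its own opacities and interaction rates, which is what keeps a five-bin scheme tractable compared with resolving the full frequency dependence of the radiation field.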

Fig.1: The radiation flux from the G9 galaxy for all five photon energy bins. Shown is the face-on (top) and edge-on (bottom) views of the galaxy. (Source: Fig. 1 of Rosdahl et. al. 2015)

Fig. 2: Table of each type of simulation run, showing which feedback types were included in each. These are (in order) supernova feedback, heating from radiation, momentum transfer between photons and gas (radiation pressure), and radiation pressure on dust (Source: Table 3 of Rosdahl et. al. 2015)

The authors include all of this physics in simulations of three disk galaxies (labelled G8, G9, G10), containing roughly $$10^8$$, $$10^9$$, and $$10^{10}$$ solar masses of gas + stars, embedded in dark matter haloes of about $$10^{10}$$, $$10^{11}$$, and $$10^{12}$$ solar masses. The heaviest of these three is comparable to the Milky Way. Fig. 1 above shows face-on and edge-on views of radiation flux from the G9 galaxy in all five photon energy bins. In each of their simulations, they evolve the galaxy for 500 Myr, examining how turning on / off various feedback processes (namely supernova, radiation heating, and radiation pressure) affects its evolution. Fig. 2 gives the combination of physics in each simulation type and their labels.

## Star Formation and Galactic Winds

Although the authors thoroughly investigate many details of their radiation feedback, this astrobite will focus on only two effects: how the radiation affects the formation of stars, and its role in driving galactic winds. Fig. 3 presents the total star formation (in stellar masses) and star formation rate for the G9 galaxy under six different simulations. The labels in Fig. 3 are given in Fig. 2. The dashed lines give the mass outflow rate from the galactic winds as measured outside the galaxy. On one extreme, the simulation with no feedback converts gas into stars too efficiently and drives no galactic winds. On the other, the full feedback simulation (dark red) produces the fewest stars but, interestingly, has weaker galactic winds than some of the other simulations. The three thick lines at the top of Fig. 3 give the supernova + radiation feedback simulations. Compared to the supernova-only simulation (blue), radiative heating provides the dominant change, while including radiation pressure and dust pressure makes only small changes to the total star formation.

Fig. 3: For galaxy G9, shown is the total mass of stars (top), the star formation rate (bottom), and the galactic wind outflow rate (bottom, dashed) for each of the simulations listed in Fig. 2. (Source: Fig. 4 of Rosdahl et. al. 2015)

Fig. 4: Galactic winds from the G9 galaxy with supernova feedback only (left) and with supernova and radiation feedback (right). The images show the surface (column) density of hydrogen. (Source: Fig. 5 of Rosdahl et. al. 2015)

Fig. 4 shows the outflows of the G9 galaxy at the end of the simulation run, with SN feedback only on the left and full feedback on the right. Although the two are morphologically quite different, the authors show that the difference in total mass loss from galactic winds between the two simulations is minimal (see Fig. 3). In fact, they show that the full radiation feedback model produces slightly weaker winds, a byproduct of slowing down star formation in the galaxy.

## The Future of Galaxy Evolution

The authors have shown that radiative feedback does play an important role in studying galaxy formation and evolution. In this work, they sought to characterize the effects of supernova and radiative feedback vs. supernova feedback alone.  In future work, the study of radiation feedback on various scales, from small slices of the galactic disk to larger galaxies, and the inclusion of AGN feedback in these simulations, will be important in piecing together a complete understanding of galaxy formation.

### Peter Coles - In the Dark

The Map is not the Territory

I came across this charming historical map while following one of my favourite Twitter feeds “@Libroantiguo” which publishes fascinating material about books of all kinds, especially old ones. It shows the location of London coffee houses and is itself constructed in the shape of a coffee pot:

Although this one is obviously just a bit of fun, maps like this are quite fascinating, not only as practical guides to navigating a transport system but also because they often stand up very well as works of art. It’s also interesting how they evolve with time  because of changes to the network and also changing ideas about stylistic matters.

A familiar example is the London Underground or Tube map. There is a fascinating website depicting the evolutionary history of this famous piece of graphic design. Early versions simply portrayed the railway lines inset into a normal geographical map which made them rather complicated, as the real layout of the lines is far from regular. A geographically accurate depiction of the modern tube network is shown here which makes the point:

A revolution occurred in 1933 when Harry Beck compiled the first “modern” version of the map. His great idea was to simplify the representation of the network around a single unifying feature. To this end he turned the Central Line (in red) into a straight line travelling left to right across the centre of the page, only changing direction at the extremities. All other lines were also distorted to run basically either North-South or East-West and produce a regular pattern, abandoning any attempt to represent the “real” geometry of the system but preserving its topology (i.e. its connectivity).  Here is an early version of his beautiful construction:

Note that although this is a “modern” map in terms of how it represents the layout, it does look rather dated in terms of other design elements such as the border and typefaces used. We tend not to notice how much we surround the essential things, which tend to last, with embellishments that date very quickly.

More modern versions of this map that you can get at tube stations and the like rather spoil the idea by introducing a kink in the central line to accommodate the complexity of the interchange between Bank and Monument stations as well as generally buggering about with the predominantly  rectilinear arrangement of the previous design:

I quite often use this map when I’m giving popular talks about physics. I think it illustrates quite nicely some of the philosophical issues related to theoretical representations of nature. I think of theories as being like maps, i.e. as attempts to make a useful representation of some aspects of external reality. By useful, I mean the things we can use to make tests. However, there is a persistent tendency for some scientists to confuse the theory and the reality it is supposed to describe, especially a tendency to assert there is a one-to-one relationship between all elements of reality and the corresponding elements in the theoretical picture. This confusion was stated most succinctly by the Polish scientist Alfred Korzybski in his memorable aphorism:

The map is not the territory.

I see this problem written particularly large with those physicists who persistently identify the landscape of string-theoretical possibilities with a multiverse of physically existing domains in which all these are realised. Of course, the Universe might be like that but it’s by no means clear to me that it has to be. I think we just don’t know what we’re doing well enough to know as much as we like to think we do.

A theory is also surrounded by a penumbra of non-testable elements, including those concepts that we use to translate the mathematical language of physics into everyday words. We shouldn’t forget that many equations of physics have survived for a long time, but their interpretation has changed radically over the years.

The inevitable gap that lies between theory and reality does not mean that physics is a useless waste of time, it just means that its scope is limited. The Tube  map is not complete or accurate in all respects, but it’s excellent for what it was made for. Physics goes down the tubes when it loses sight of its key requirement: to be testable.

In any case, an attempt to make a grand unified theory of the London Underground system would no doubt produce a monstrous thing so unwieldy that it would be useless in practice. I think there’s a lesson there for string theorists too…

Now, anyone for a game of Mornington Crescent?

### Symmetrybreaking - Fermilab/SLAC

Of symmetries, the strong force and Helen Quinn

Scientist Helen Quinn has had a significant impact on the field of theoretical physics.

Modern theoretical physicists spend much of their time examining the symmetries governing particles and their interactions. Researchers describe these principles mathematically and test them with sophisticated experiments, leading to profound insights about how the universe works.

For example, understanding symmetries in nature allowed physicists to predict the flow of electricity through materials and the shape of protons. Spotting imperfect symmetries led to the discovery of the Higgs boson.

One researcher who has used an understanding of symmetry in nature to make great strides in theoretical physics is Helen Quinn. Over the course of her career, she has helped shape the modern Standard Model of particles and interactions— and outlined some of its limitations. With various collaborators, she has worked to establish the deep mathematical connection between the fundamental forces of nature, pondered solutions to the mysterious asymmetry between matter and antimatter in the cosmos and helped describe properties of the particle known as the charm quark before it was discovered experimentally.

“Helen's contributions to physics are legendary,” says Stanford University professor of physics Eva Silverstein. Silverstein first met Quinn as an undergraduate in 1989, then became her colleague at SLAC in 1997.

Quinn’s best-known paper is one she wrote with fellow theorist Roberto Peccei in 1977. In it, they showed how to solve a major problem with the strong force, which governs the structure of protons and other particles. The theory continues to find application across particle physics. “That's an amazing thing: that an idea you had almost 40 years ago is still alive and well,” says Peccei, now a professor emeritus of physics at the University of California, Los Angeles.

#### GUTs, glory, and broken symmetries

Quinn was born in Australia in 1943 and emigrated with her family to the United States while she was still a university student. For that reason, she says, “I had a funny path through undergraduate school.”

When she moved to Stanford University, she had already spent two years studying at the University of Melbourne to become a meteorologist with support from the Australian Weather Bureau, and needed to select an academic major that wouldn’t force her to start over again. That major happened to be physics.

With the longest linear accelerator in the world nearing completion next door at what is now called SLAC National Accelerator Laboratory, Stanford was an auspicious place to study particle physics, so Quinn stayed on to finish her PhD. “Really, the beginning was the fact that particle physics was bubbling at that time at Stanford, and that's where I got hooked on it,” she says. She entered the graduate program when women comprised only about 2 percent of all physics students in American PhD programs.

After finishing her PhD, Quinn traveled to Germany for postdoctoral research at the DESY laboratory before returning to the United States. She taught high school in Boston briefly before landing a position at Harvard University. While there, she collaborated with theorists Steven Weinberg and Howard Georgi on something known as “grand unified theories,” whimsically nicknamed GUTs. GUT models were attempts to bring together the three forces described by quantum physics: electromagnetism, which holds together atoms, and the weak and strong forces, which govern nuclear structure. (There still is no quantum theory of gravity, the fourth fundamental force.)

“Her paper with Howard Georgi and Steve Weinberg on grand unified theories was the first paper that made sense of grand unified theories,” Peccei says.

Quinn returned to SLAC during a leave of absence from Harvard, where she connected with Peccei. The two of them had frequent conversations with Weinberg and Gerard ’t Hooft, both of whom were visiting SLAC at that time. (Both Weinberg and ’t Hooft later won Nobel Prizes for their work on symmetries in particle physics.)

At that time, many theorists were engaged in understanding the strong force, which governs the structure of particles such as protons, using a theory called quantum chromodynamics, or QCD.  (The name “chromodynamics” refers to the “color charge” of quarks, which is analogous to electric charge.)

The problem: QCD predicted some results at odds with experiment, including an electrical property of neutrons.

Quinn and Peccei realized that they could make that problem go away if one type of quark had no mass. While that was at odds with reality, it hadn’t always been so, Quinn says: “That led me to think, well, in the very early universe when it's hot …quarks are massless.”

By adding a new symmetry once quarks acquired their masses from the Higgs field, they could resolve the problem with QCD. As soon as their paper came out, Weinberg realized the theory also made a prediction that Quinn and Peccei had not noticed: the axion, which might comprise some or all of the mysterious dark matter binding galaxies together. (Independently, Frank Wilczek also found the axion implicit in the Peccei-Quinn theory.) Quinn laughs now over how obvious she says it seems in retrospect.

#### Experiments and education

After her collaboration with Peccei, Quinn worked extensively with experimentalists and other theorists at SLAC to understand the interactions involving the bottom quark. Studying particles containing bottom quarks is one of the best ways to investigate the symmetries built into QCD, which in turn may offer clues as to why there’s a lot more matter than antimatter in the cosmos.

Along the way, Quinn was elected a member of the National Academy of Sciences, and has received a number of prestigious prizes, including the J.J. Sakurai Prize for theoretical physics and the Dirac Medal from the International Center for Theoretical Physics. She also served as president of the American Physical Society, the premier professional organization for physicists in the United States.

Since retiring in 2010, Quinn has turned her attention full-time to one of her long-time passions: science education at the kindergarten through high-school level. As part of the board on science education at the National Academy of Sciences, she headed the committee that produced the document “A Framework for K-12 Science Education” in 2011.

“The overarching goal is that most students should have the experience of learning and understanding, not just a bunch of disconnected facts,” she says.

Instead of enduring perpetual tests as required under current policy, she wants students to focus on learning “the way science works: how to think about problems as scientists do and analyze data and evidence and draw conclusions based on evidence.” Peccei calls her “unique among very well-known physicists” for this later work.

“She's devoted a tremendous amount of time to physics education, and has been really a champion of that at a national level,” he says.

On top of that, the Peccei-Quinn model remains a powerful tool for theorists and “a good candidate to solve some of the outstanding problems in particle physics and cosmology,” Silverstein says. Along with dark matter, these problems touch on Silverstein’s own research areas of string theory and early-universe inflation.

As with her efforts on behalf of education, the impact of Quinn’s physics research is in how it lays the foundation for others to build on. There’s a certain symmetry in that.


### Clifford V. Johnson - Asymptotia

The Visitors
Yesterday I sneaked on to campus for a few hours. I'm on family leave (as I mentioned earlier) and so I've not been going to campus unless I more or less have to. Yesterday was one of those days that I decided was a visit day and so visit I did. I went to say hi to a visitor to the Mathematics Department, Sylvester James Gates Jr., an old friend who I've known for many years. He was giving the CAMS (Center for Applied Mathematical Sciences) distinguished lecture with the title "How Attempting To Answer A Physics Question Led Me to Graph Theory, Error-Correcting Codes, Coxeter Algebras, and Algebraic Geometry". You can see him in action in the picture above. I was able to visit with Jim for a while (lunch with him and CAMS director Susan Friedlander), and then hear the talk, which was very interesting. I wish he'd had time to say more on all the connections he mentioned in the title, but what he did explain sounded rather interesting. It is all about the long unsolved problem of finding certain kinds of (unconstrained, off-shell) representations of extended supersymmetry. (Supersymmetry is, you may know, a symmetry that [...]

### Peter Coles - In the Dark

Luqman Onikosi

Yesterday my attention was drawn to the case of Luqman Onikosi, a postgraduate student at the University of Sussex, who is originally from Nigeria. Luqman has been granted temporary permission to reside in the United Kingdom based on his medical circumstances; he is suffering from Hepatitis B, for which far better treatment is available in the UK than in his home country. His immigration status is yet to be definitively resolved, and in the meantime he is being treated, entirely according to established policy and practice, as an Overseas Student. He is therefore liable to pay full Overseas Fees if he is to continue on his course, an MA in Global Political Economy, and currently cannot afford to pay them.

It would be inappropriate for me to comment in further detail on Luqman’s case – not least because I don’t have much in the way of further detail to comment on – but I am happy to use the medium of this personal blog to draw the attention of readers to a crowdsourcing appeal that has started with the aim of collecting sufficient funds to enable him to continue his studies. You can find the website where you can find more information about the issues surrounding his case, and instructions on how to make a donation, here.

## January 26, 2015

### arXiv blog

How A Box Could Solve The Personal Data Conundrum

Software known as a Databox could one day both safeguard your personal data and sell it, say computer scientists.

One of the trickiest issues for anyone with an online presence is how to manage personal information. Almost any form of surfing leaves a data trail that advertisers, social networks and so on can use to their advantage.

### Christian P. Robert - xi'an's og

the density that did not exist…

On Cross Validated, I had a rather extended discussion with a user about a probability density

$f(x_1,x_2)=\left(\dfrac{x_1}{x_2}\right)\left(\dfrac{\alpha}{x_2}\right)^{x_1-1}\exp\left\{-\left(\dfrac{\alpha}{x_2}\right)^{x_1} \right\}\mathbb{I}_{\mathbb{R}^*_+}(x_1,x_2)$

as I thought it could be decomposed into two manageable conditionals and simulated by Gibbs sampling. The first component led to a Gumbel-like density

$g(y|x_2)\propto ye^{-y-e^{-y}} \quad\text{with}\quad y=\left(\alpha/x_2 \right)^{x_1}\stackrel{\text{def}}{=}\beta^{x_1}$

with y being restricted to either (0,1) or (1,∞) depending on β. The density is bounded and can be easily simulated by an accept-reject step. The second component leads to

$g(t|x_1)\propto \exp\{-\gamma ~ t \}~t^{-{1}/{x_1}} \quad\text{with}\quad t=\dfrac{1}{{x_2}^{x_1}}$

which offers the slight difficulty that it is not integrable when the first component $x_1$ is less than 1! So the above density does not exist (as a probability density).
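The divergence is easy to confirm numerically. Below is a quick sketch (my own, not from the original discussion; the grid size and the choice γ = 1 are arbitrary) that integrates $t^{-1/x_1}e^{-\gamma t}$ over $(\varepsilon,1)$ and lets $\varepsilon$ shrink: for $x_1>1$ the value stabilises, while for $x_1<1$ it blows up:

```python
import math

def integral_from(eps, x1, gamma=1.0, n=20000):
    """Midpoint rule for the integral of t**(-1/x1) * exp(-gamma*t) over (eps, 1),
    computed on a log grid (u = log t) so the spike at the origin is resolved."""
    a = math.log(eps)
    h = -a / n
    total = 0.0
    for i in range(n):
        t = math.exp(a + (i + 0.5) * h)
        # the change of variables t = exp(u) contributes a Jacobian factor t
        total += t ** (1.0 - 1.0 / x1) * math.exp(-gamma * t) * h
    return total

for x1 in (2.0, 0.5):
    print(x1, [round(integral_from(eps, x1), 2) for eps in (1e-2, 1e-4, 1e-6)])
# x1 = 2.0: the three values settle near a finite limit
# x1 = 0.5: each value is roughly 100 times the previous one, i.e. no finite mass
```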

What I found interesting in this question was that, for once, the Gibbs sampler was the solution rather than the problem, i.e., that it pointed out the lack of integrability of the joint. (What I found less interesting was that the user did not acknowledge a lengthy discussion we had previously had about the Gibbs implementation and then erased it, that he lost interest in the question by not following up on my answer, a seemingly common feature of his, and that he provided neither source nor motivation for this zombie density.)
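As an aside, here is a sketch of what the accept-reject step for the first (bounded, Gumbel-like) conditional could look like, taking the density exactly as written above; the Gamma(2,1) envelope and the truncation-by-rejection are my own illustrative choices, not code from the discussion:

```python
import math
import random

def sample_conditional(lower, upper):
    """Accept-reject draw from g(y) ∝ y exp(-y - exp(-y)) restricted to (lower, upper).

    Since exp(-exp(-y)) <= 1, the Gamma(2,1) density (∝ y exp(-y)) serves as an
    envelope; the restriction to (lower, upper) is also enforced by rejection.
    """
    while True:
        # Gamma(2,1) proposal: sum of two Exp(1) draws (1 - U avoids log(0))
        y = -math.log(1.0 - random.random()) - math.log(1.0 - random.random())
        if not lower < y < upper:
            continue  # outside the truncation interval
        if random.random() < math.exp(-math.exp(-y)):
            return y

# e.g. the branch restricted to (1, ∞):
ys = [sample_conditional(1.0, float("inf")) for _ in range(1000)]
```

The envelope is valid because the target divided by the Gamma(2,1) density is exactly $e^{-e^{-y}}$, which is bounded by 1, so no extra constant is needed.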

Filed under: Kids, R, Statistics, University life Tagged: cross validated, Gibbs sampling, Gumbel distribution, improper posteriors, zombie density

### astrobites - astro-ph reader's digest

Evryscope, Greek for “wide-seeing”
Title: Evryscope Science: Exploring the Potential of All-Sky Gigapixel-Scale Telescopes

Authors: Nicholas M. Law et al.

First Author’s Institution: University of North Carolina at Chapel Hill

How fantastic would it be to image the entire sky, every few minutes, every night, for a series of years? The science cases for such surveys —in today’s paper they are called All-Sky Gigapixel Scale Surveys— are numerous, and span a huge range of astronomical topics. Just to begin with, such surveys could detect transiting giant planets, sample Gamma Ray Bursts and nearby Supernovae, and catch a wealth of other rare and/or unexpected transient events that are further described in the paper.

Evryscope is a telescope that sets out to take such a minute-by-minute movie of the sky accessible to it. It is designed as an array of extremely wide-angle telescopes, inverting the traditional meaning of the word “tele-scope” (Greek for “far-seeing”) with its emphasis on extremely wide angles (“Evryscope” is Greek for “wide-seeing”). The array is currently being constructed by the authors at the University of North Carolina at Chapel Hill, and is scheduled to be deployed at the Cerro Tololo Inter-American Observatory (CTIO) in Chile later this year.

But wait, aren’t there large sky surveys out there that are already patrolling the sky a few times a week? Yes, there are! But a bit differently. There is for example the tremendously successful Sloan Digital Sky Survey (SDSS— see Figure 1, and read more about SDSS-related Astrobites here, here, here), which has paved the way for numerous other surveys such as Pan-STARRS, and the upcoming Large Synoptic Survey Telescope (LSST). These surveys are all designed around a similar concept: they utilize a single large-aperture telescope that repeatedly observes few-degree-wide fields to achieve deep imaging. Then the observations are tiled together to cover large parts of the sky several times a week.

Figure 1: The Sloan Digital Sky Survey Telescope, a 2.5m telescope that surveys large areas of the available sky a few times a week. The Evryscope-survey concept is a bit different, valuing the continuous coverage of almost the whole available sky over being able to see faint far-away objects. Image from the SDSS homepage.

The authors of today’s paper note that surveys like the SDSS are largely optimized for finding day-or-longer events such as supernovae —and are extremely good at that— but are not sensitive to the very diverse class of even shorter-timescale transient events (recall the list of example science cases above). Up until now, such short-timescale events have generally been studied with individual small telescopes staring at single, limited fields of view. Expanding on this idea, the authors propose the Evryscope as an array of small telescopes arranged so that together they can survey the whole available sky minute by minute. In contrast to SDSS-like surveys, an Evryscope-like survey will not be able to detect targets nearly as faint, but instead focuses on the continuous monitoring of the brightest objects it can see.

Figure 2: The currently under-construction Evryscope, showing the 1.8m diameter custom-molded dome. The dome houses 27 individual 61mm aperture telescopes, each of which has its own CCD detector. Figure 1 from the paper.

Evryscope: A further description

Evryscope is designed as an array of 27 61mm optical telescopes, arranged in a custom-molded fiberglass dome, which is mounted on an off-the-shelf German Equatorial mount (see Figure 2). Each telescope has its own 29 MPix CCD detector, adding up to a total detector size of 0.78 GPix! The authors refer to Evryscope’s observing strategy as a “ratcheting survey”, which goes like this: the dome follows the instantaneous field of view (see Figure 3, left) by rotating ever so slowly to compensate for Earth’s rotation, taking 2-minute exposures back-to-back for two hours, and then it resets and repeats (see Figure 3, right). This ratcheting approach enables Evryscope to image essentially every part of the visible sky for at least 2 hours every night!

Figure 3: Evryscope sky coverage (blue), for a mid-latitude Northern-hemisphere site (30°N), showing the SDSS DR7 photometric survey (red) for scale. Left: Instantaneous Evryscope coverage (8660 sq. deg.), including the individual camera fields-of-view (skewed boxes). Right: The Evryscope sky coverage over one 10-hour night. The intensity of the blue color corresponds to the length of continuous coverage (between 2 and 10 hours, in steps of 2 hours) provided by the ratcheting survey, covering a total of 18400 sq.deg. every night. Figures 3 (left) and 5 (right) from the paper.
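For scale, those caption numbers can be compared against the whole celestial sphere, which subtends about 41,253 square degrees; the only other inputs below are the figures quoted in the caption:

```python
FULL_SKY_SQDEG = 41253  # solid angle of the whole celestial sphere in square degrees
instantaneous = 8660    # sq. deg., instantaneous coverage (Figure 3, left)
nightly = 18400         # sq. deg., total coverage over one 10-hour night (Figure 3, right)

print(round(100 * instantaneous / FULL_SKY_SQDEG))  # 21 (percent of the sky at any instant)
print(round(100 * nightly / FULL_SKY_SQDEG))        # 45 (percent of the sky each night)
```

So roughly a fifth of the entire sphere is on the detectors at any moment, and nearly half of it gets covered over a night.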

With its Gigapixel-scale detector, Evryscope will gather large amounts of data, amounting to about 100 TB of compressed FITS images per year! The data will be stored and analyzed on site. The pipeline will be optimized to provide real-time detection of interesting transient events, with rapid retrieval of compressed associated images, allowing for rapid follow-up with other more powerful telescopes. Real-time analysis of the sheer amount of data that Gigapixel-scale systems like Evryscope create would have been largely unfeasible just a few years ago. The rise of consumer digital imaging, ever increasing computing power, and decreasing storage costs have however made the overall cost manageable (~~a few million dollars~~ much less than one million dollars!) with current technology.
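The quoted ~100 TB per year is easy to sanity-check. In the sketch below, the pixel count and cadence come from the text, while the 16-bit pixel depth, the 10-hour night and the 1.7x lossless compression ratio are my own assumptions:

```python
GPIX = 0.78e9        # total detector size in pixels (from the text)
BYTES_PER_PIXEL = 2  # assumed 16-bit raw images
EXPOSURE_S = 120     # 2-minute back-to-back exposures (from the text)
NIGHT_S = 10 * 3600  # assumed 10-hour observing night, every night of the year

frames_per_year = (NIGHT_S // EXPOSURE_S) * 365
raw_tb = GPIX * BYTES_PER_PIXEL * frames_per_year / 1e12
compressed_tb = raw_tb / 1.7  # assumed lossless FITS compression factor
print(round(raw_tb), round(compressed_tb))  # 171 (raw) and 100 (compressed) TB per year
```

Under these assumptions the raw stream is about 170 TB per year, so a modest compression factor lands right on the paper's ~100 TB figure.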

Outlook

The Evryscope array is scheduled to see first light later this year at CTIO in Chile, where it will start to produce a powerful minute-by-minute data-set on transient events happening in its field of view, events which until now have not been feasible to capture. But it won’t see the whole sky, just the sky it can see from its location on Earth. So then, why stop there, why not reuse the design and expand? Indeed, this is what the authors are already thinking —see the whitepaper about the “Antarctic-Evryscope” on the group’s website. And who knows, maybe soon after we will have an Evryscope at *evry* major observatory in the world, working together to record a continuous movie of the whole sky?

### Peter Coles - In the Dark

Lines on the Death of Demis Roussos

After all the sound and fury accompanying yesterday’s elections in Greece there’s one item of much sadder news. The legendary Demis Roussos has passed away. I can’t think of him without thinking of Abigail’s Party:

So, farewell then,
Demis Roussos.

“For ever
And ever
And ever
And ever
You’ll
Beeeeeee
The One!”

You sang.

But, alas,
Nothing
Ever
Lasts
For ever
And ever
And ever
And ever.
And now
You’ve Gone.

But at least
I liked you.

(And so did
Keith’s Mum.)

by Peter Coles (aged 51 ½).

### Emily Lakdawalla - The Planetary Society Blog

At last! A slew of OSIRIS images shows fascinating landscapes on Rosetta's comet
The first results of the Rosetta mission are out in Science magazine. The publication of these papers means that the OSIRIS camera team has finally released a large quantity of closeup images of comet Churyumov-Gerasimenko, taken in August and September of last year. I explain most of them, with help from my notes from December's American Geophysical Union meeting.

### Lubos Motl - string vacua and pheno

A reply to an anti-physics rant by Ms Hossenfelder
S.H. of Tokyo University sent me a link to another text about the "problems with physics". The write-up is one month old and for quite some time, I refused to read it in its entirety. Now I did so and the text I will respond to is really, really terrible. The author is Sabine Hossenfelder and the title reads

> Does the scientific method need revision?

> Does the prevalence of untestable theories in cosmology and quantum gravity require us to change what we mean by a scientific theory?

To answer this: No. Only people who have always misunderstood how science works – at least science since the discoveries by Albert Einstein – need to change their opinions about what a scientific theory is and how it is being looked for. Let me immediately get to the propositions in the body of the write-up and respond.

Here we go:

> Theoretical physics has problems.

Theoretical physics solves problems and organizes ideas about how Nature works. Anything may be substituted for "it" in the sentence "it has problems", but the only reason why someone would substitute "theoretical physics" into this sentence is that he or she hates science and especially the most remarkable insights that physics discovered in recent decades.

The third sentence says:

> But especially in high energy physics and quantum gravity, progress has basically stalled since the development of the standard model in the mid 70s.

This is an absolutely preposterous claim. First, since the mid 1970s, there have been several important experimental discoveries – like the discoveries of the W-bosons, Z-bosons, Higgs boson, top quark, neutrino oscillations; non-uniformities of the cosmic microwave background, the cosmological constant, and so on, and so on.

But much more shockingly, there have been long sequences of profound and amazing theoretical discoveries, including supersymmetry, supergravity, superstring theory, its explanation for the black hole thermodynamics, D-branes, dualities, holography, AdS/CFT correspondence, AdS/mundane_physics correspondences, and so on, and so on. Many of these results deservedly boast O(10,000) citations – like AdS/CFT – which actually sometimes beats the figures of the Standard Model. Which of those discoveries are more important is debatable and the citation counts can't be treated dogmatically but some of the recent discoveries are unquestionably in the "same league" as the top papers that have led to the Standard Model.

It is silly not to consider these amazing advances "fully important" just because they're primarily theoretical in character. The W-bosons, Z-bosons, Higgs bosons etc. have been believed to exist since the 1960s even though they were also discovered in 1983 or 2012, respectively, and they were "just a theory" for several previous decades. The beta-decay was known by competent particle physicists to be mediated by the W-boson even though no W-boson had been seen by 1983. Exactly analogously, we know that the gravitational force (and other forces) is mediated by closed strings even though we haven't seen a fundamental string yet. The situations are absolutely analogous and people claiming that it is something "totally different" are hopelessly deluded.

One can become virtually certain about certain things long before the thing is directly observed – and that is true not only for particular species of bosons but also for the theoretical discoveries since the mid 1970s that I have mentioned.

> Yes, we’ve discovered a new particle every now and then. Yes, we’ve collected loads of data.

In the framework of quantum field theory, almost all discoveries can be reduced to the "discovery of a new particle". So if someone finds such a discovery unimpressive, he or she simply shows his or her disrespect for the whole discipline. But the discoveries were not just discoveries of new particles.

> But the fundamental constituents of our theories, quantum field theory and Riemannian geometry, haven’t changed since that time.

That's completely untrue. Exactly since the 1970s, state-of-the-art physics has switched from quantum field theory and Riemannian geometry to string theory as its foundational layer. People have learned that this more correct new framework is different from the previous approximate ones; but from other viewpoints, it is exactly equivalent thanks to previously overlooked relationships and dualities.

Laymen and physicists who are not up to their job may have failed to notice that a fundamental paradigm shift has taken place in physics since the mid 1970s but that can't change the fact that this paradigm shift has occurred.

> Everybody has their own favorite explanation for why this is so and what can be done about it. One major factor is certainly that the low hanging fruits have been picked, [experiments become hard, relevant problems are harder...].

> Still, it is a frustrating situation and this makes you wonder if not there are other reasons for lack of progress, reasons that we can do something about.

If Ms Hossenfelder finds physics this frustrating, she should leave it – and after all, her bosses should do this service for her, too. Institutionalized scientific research has also become a part of the Big Government and it is torturing lots of people who would love to be liberated but they still think that to pretend to be scientists means to be on a great welfare program. Niels Bohr didn't establish Nordita as another welfare program, however, so he is turning in his grave.

Ms Hossenfelder hasn't written one valuable paper in her life but her research has already cost the taxpayers something that isn't far from one million dollars. It is not shocking that she tries to pretend that there are no results in physics – in this way, she may argue that she is like "everyone else". But she is not. Some people have made amazing or at least pretty interesting and justifiable discoveries, she is just not one of those people. She prefers to play the game that no one has found anything and the taxpayers are apparently satisfied with this utterly dishonest demagogy.

If you have the feeling that the money paid to the research is not spent optimally, you may be right but you may want to realize that it's thanks to the likes of Hossenfelder, Smolin, and others who do nothing useful or intellectually valuable and who are not finding any new truths (and not even viable hypotheses) about Nature.

> Especially in a time when we really need a game changer, some breakthrough technology, clean energy, that warp drive, a transporter! Anything to get us off the road to Facebook, sorry, I meant self-destruction.

We don't "need" a game changer now more than we needed it at pretty much any moment in the past (or we will need it in the future). People often dream about game changers and game changers sometimes arrive.

We don't really "need" any breakthrough technology and we certainly don't need "clean energy" because we have lots of clean energy, especially with the rise of fracking etc.

We may "need" warp drive but people have been expressing similar desires for decades and competent physicists know that warp drive is prohibited by the laws of relativity.

And we don't "need" transporters – perhaps the parties in the Ukrainian civil war need such things.

Finally, we are more resilient and further from self-destruction than we were at pretty much any point in the past. Also, we don't need to bash Facebook which is just another very useful pro-entertainment website. It is enough to ignore Facebook if you think it's a waste of time – I am largely doing so ;-) but I still take the credit for having brought lots of (more socially oriented) people who like it to the server.

So every single item that Hossenfelder enumerates in her list "what we need" is crap.

> It is our lacking understanding of space, time, matter, and their quantum behavior that prevents us from better using what nature has given us.

This statement is almost certainly untrue, too. A better understanding of space, time, and matter – something that real physicists are actually working on, and not just bashing – will almost certainly confirm that warp drives and similar things don't exist. Better theories will give us clearer explanations why these things don't exist. There may be some "positive applications" of quantum gravity but right now, we don't know what they could be and they are surely not the primary reason why top quantum gravity people do the research they do.

The idea that the future research in quantum gravity will lead to practical applications similar to warp drive is a belief, a form of religion, and circumstantial evidence (and sometimes almost rigorous proofs) makes this belief extremely unlikely.

> And it is this frustration that lead people inside and outside the community to argue we’re doing something wrong, ...

No, this is a lie, too. As I have already said, physics bashers are bashing physics not because of frustration that physics isn't making huge progress – it obviously *is* making huge progress. Physics bashers bash physics in order to find excuses for their own non-existent or almost non-existent results in science – something I know very well from some of the unproductive physicists in Czechia whom the institutions inherited from the socialist era. They try to hide that they are nowhere near the top physicists – and most of them are just useless parasites. And many listeners buy these excuses because the number of incredibly gullible people who love to listen to similar conspiracy theories (not so much to science) is huge. And if you combine this fact with many ordinary people's disdain for mathematics etc., it is not surprising that some of these physics bashers may literally make a living out of their physics bashing and nothing else.

> The arxiv categories hep-th and gr-qc are full every day with supposedly new ideas. But so far, not a single one of the existing approaches towards quantum gravity has any evidence speaking for it.

This is complete rubbish. The tens of thousands of papers are full of various kinds of evidence supporting this claim or another claim about the inner workings of Nature. In particular, the scientific case for string theory as the right framework underlying the Universe is completely comparable to the case for the Higgs boson in the 1960s. The Higgs boson was discovered in 2012, 50 years after the 1960s, but that doesn't mean that adequate physicists in the 1960s were saying that "there wasn't any evidence supporting that theory".

People who were not embarrassed haven't said such a thing and people who are not embarrassing themselves are not saying a similar thing about string theory – and other things – today.

> To me the reason this has happened is obvious: We haven’t paid enough attention to experimentally testing quantum gravity. One cannot develop a scientific theory without experimental input. It’s never happened before and it will never happen. Without data, a theory isn’t science. Without experimental test, quantum gravity isn’t physics.

None of these statements is right. We have paid more than enough attention to "experimental quantum gravity". It is a vastly overstudied and overfunded discipline. All sensible physicists realize that it is extremely unlikely that we will directly observe some characteristic effects of quantum gravity in the near future. The required temperatures are around $10^{32}$ kelvins, the required distances are probably $10^{-35}$ meters, and so on. Max Planck has known the values of these natural units since the late 19th century.
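Those two figures are just the Planck temperature and the Planck length. As a reminder of where they come from, here is the standard dimensional-analysis computation from the SI values of the fundamental constants (a reference sketch, not part of the original post):

```python
import math

# CODATA SI values of the fundamental constants
hbar = 1.054571817e-34  # reduced Planck constant, J s
c = 2.99792458e8        # speed of light, m / s
G = 6.67430e-11         # Newton's constant, m^3 / (kg s^2)
kB = 1.380649e-23       # Boltzmann constant, J / K

planck_length = math.sqrt(hbar * G / c**3)     # ~ 1.6e-35 m
planck_temp = math.sqrt(hbar * c**5 / G) / kB  # ~ 1.4e32 K
print(f"{planck_length:.2e} m  {planck_temp:.2e} K")
```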

So we have paid more than enough attention to this strategy.

It is also untrue that the progress in theoretical physics since the mid 1970s has been done "without experimental input". The amount of data we know about many things is huge. To a large extent, the knowledge of one or two basic experiments showing quantum mechanics and one or two experiments testing gravity is enough to deduce a lot. General relativity, quantum mechanics, and string theory largely follow from (subsets of) these several elementary experiments.

On the other hand, it is not true that scientific progress cannot be made without (new) experimental input. Einstein found special relativity even though he wasn't actively aware of the Michelson-Morley experiment. He could have deduced the whole theory independently of any experiments. Experiments had previously been used to construct e.g. Maxwell's equations but Einstein didn't deal with them directly. Einstein only needed the equations themselves. More or less the same thing occurred 10 years later when he discovered general relativity. But the same approach based on "nearly pure thought" has also been victorious in the case of Bekenstein's and Hawking's black hole thermodynamics, string theory, and in some other important examples.

So the idea that one can't find important things without some new experiments – excluding experiments whose results are old and generally known – is obviously untrue. Science haters can say that this or that important part of science "is not science" or "is not physics" but that doesn't change anything about the fact that certain insights about Nature may be found and have been found and supported by highly convincing bodies of evidence in similar ways. Only simpletons may pay attention to a demagogue's proclamation that "something is not science". This emotional scream is not a technical argument for or against any scientifically meaningful proposition.

I will omit another repetitive paragraph where Hossenfelder advocates "experimental quantum gravity". She thinks that tons of effects are easily observable because she's incompetent.

> Yes, experimental tests of quantum gravity are farfetched. But if you think that you can’t test it, you shouldn’t put money into the theory either.

This is totally wrong. It is perfectly sensible to pay almost all of the quantum gravity research money to the theorists because whether someone likes it or not, quantum gravity is predominantly a theoretical discipline. It is about people's careful arguments, logical thoughts, and calculations that make our existing knowledge fit together more seamlessly than before.

In particular, the goal of quantum gravity is to learn how space and time actually work in our Universe, a world governed by the postulates of quantum mechanics. Quantum gravity is not – and any discipline of legitimate science is not – a religious cult that trains its followers to believe in far-fetched theories. The idea that you may observe completely new effects of quantum gravity (unknown to the theorists) in your kitchen is far-fetched and that really means that it is extremely unlikely. And its being extremely unlikely is the rational reason why almost no money is going into this possibility. This justification can't be "beaten" by the ideological cliché that everything connected with experiments in the kitchen should have a priority because it's "more scientific".

It's not more scientific. A priori, it is equally scientific. A posteriori, it is less scientific because arguments rooted in science almost reliably show that such new quantum gravity effects in the kitchen are very unlikely – some of them are rather close to supernatural phenomena such as telekinesis. So everything that Ms Hossenfelder says is upside down once again.
> And yes, that’s a community problem because funding agencies rely on experts’ opinion. And so the circle closes.
Quantum gravity theorists and string theorists are getting money because they do absolutely amazing research, sometimes make a medium-importance discovery, and sometimes a full-fledged breakthrough. And if or when they don't do such a thing for a few years, they are still exceptional people who are preserving and nurturing mankind's cutting-edge portrait of the Universe. The folks in the funding agencies are usually less than full-fledged quantum gravity or string theorists. But as long as the system at least barely works, they still know enough – much more than an average human or Ms Hossenfelder knows – so they may see that something fantastic is going on here or there even though they can't quite join the research. That's true for various people making decisions in government agencies but it's true e.g. for Yuri Milner, too.

As Ms Hossenfelder indicated, the only way this logic may change – and yes, I think it is unfortunately changing to some extent – is that the funding decisions stop depending on expert opinion (and on any people connected with knowledge and progress in physics) at all. The decisions may be made by people who hate physics and who have no idea about contemporary physics. The decisions may depend on would-be authorities who pick winners and losers by arbitrarily stating that "this is science" and "this is not science". I don't have to say how such decisions (would?) influence the research.
> To make matters worse, philosopher Richard Dawid has recently argued that it is possible to assess the promise of a theory without experimental test whatsoever, and that physicists should thus revise the scientific method by taking into account what he calls “non-empirical facts”.
Dawid just wrote something that isn't usual among the prevailing self-appointed "critics and philosophers of physics" but he didn't really write anything that would be conceptually new. At least intuitively, physicists like Dirac or Einstein have known all these things for a century. Of course "non-empirical facts" have played a role in the search for the deeper laws of physics, and this role became dramatic about 100 years ago.
> Dawid may be confused on this matter because physicists do, in practice, use empirical facts that we do not explicitly collect data on. For example, we discard theories that have an unstable vacuum, singularities, or complex-valued observables. Not because this is an internal inconsistency — it is not. You can deal with this mathematically just fine. We discard these because we have never observed any of that. We discard them because we don’t think they’ll describe what we see. This is not a non-empirical assessment.
This was actually the only paragraph I fully read when I replied to S.H. in Tokyo for the first time – and this paragraph looked "marginally acceptable" to me from a certain point of view.

Well, the paragraph is only solving a terminological issue. Should the violation of unitarity, or an instability of the Universe that would manifest itself a Planck time after the Big Bang, or something like that, be counted as "empirical" or "non-empirical" input? I don't really care much. It's surely something that most experts, like Dawid, consider consistency conditions.

We may also say that we "observe" that the Universe isn't unstable and doesn't violate unitarity. But this is a really tricky assertion. Our interpretation of all the observations really assumes that probabilities are non-negative and add to 100%. Whatever our interpretation of any experiment is, it must be adjusted to this assumption. So it's a pre-empirical input. It follows from pure logic. Also, some instabilities and other violations of what we call "consistency conditions" (e.g. unitarity) may be claimed to be very small and therefore hard to observe. But some of these violations will be rejected by theorists, anyway, even if they are very tiny because they are violations of consistency conditions.

I don't really care about the terminology. What's important in practice is that these "consistency conditions" cannot be used as justifications for some new fancy yet meaningful experiments.
> A huge problem with the lack of empirical fact is that theories remain axiomatically underconstrained.
The statement is surely not true in general. String theory is 100% constrained. It cannot be deformed at all. It has many solutions but its underlying laws are totally robust.
> This already tells you that the idea of a theory for everything will inevitably lead to what has now been called the “multiverse”. It is just a consequence of stripping away axioms until the theory becomes ambiguous.
If the multiverse exists, and it is rather likely that it does, it doesn't mean that the laws of physics are ambiguous. It just means that the world is "larger" and perhaps has more "diverse subregions" than previously thought. But all these regions follow the same unambiguous laws of physics – laws of physics we want to understand as accurately as possible.

The comment about "stripping away axioms" is tendentious, too, because it suggests that there is some "a priori known" number of axioms that is right. But it's not the case. If someone randomly invents a set of axioms, it may be too large (overconstrained) or too small (underconstrained). In the first case, some axioms should be stripped away, in the latter case, some axioms should be added. But the very fact that a theory predicts or doesn't predict the multiverse doesn't imply that its set of axioms is underconstrained or overconstrained.

For example, some theories of inflation predict that inflation is not eternal and no multiverse is predicted; other, very analogous theories (that may sometimes differ by values of parameters only!) predict that inflation is eternal and the multiverse emerges. So Hossenfelder's claim that the multiverse is linked with "underconstrained axioms" is demonstrably incorrect, too.
> Somewhere along the line many physicists have come to believe that it must be possible to formulate a theory without observational input, based on pure logic and some sense of aesthetics. They must believe their brains have a mystical connection to the universe and pure power of thought will tell them the laws of nature.
There is nothing mystical about this important mode of thinking in theoretical physics. It's how special relativity was found, much like general relativity, the idea that atoms exist, the idea that the motion of atoms is linked to heat, not to mention the Dirac equation, gauge theories, and many other things. A large fraction of theoretical physicists have made their discoveries by optimizing the "beauty" of the candidate laws of physics. People like Dirac emphasized the importance of mathematical beauty in the search for the laws of physics all the time, and for a good reason.

"Physical laws should have mathematical beauty" – that's the most important thing Dirac chose to write on a Moscow blackboard.

And the more recent breakthroughs in physics we consider, the greater role such considerations have played (and will play). And the reason why this "mathematical beauty" works isn't supernatural – even though many of us love to be amazed by this power of beautiful mathematics and this meme is often sold to the laymen, too. One may give Bayesian explanations why "more beautiful" laws are more likely to hold than generic, comparable, but "less beautiful" competitors. Bayesian inference dictates assigning comparable prior probabilities to competing hypotheses, and because the mathematically beautiful theories have a smaller number of truly independent assumptions and building blocks, and therefore fewer ways to invent variations, their prior probability won't be split among so many "sub-hypotheses". Moreover, as we describe deeper levels of reality, the risk that an inconsistency emerges grows ever higher, and the "not beautiful theories" are increasingly likely to lead to one kind of inconsistency or another.
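As a toy illustration of that prior-splitting argument (the variant counts below are invented for illustration, not drawn from any real theory), a rigid framework with a single formulation keeps all of its prior mass, while a framework with many tunable variants must split the same mass among them:

```python
# Sketch of the Bayesian prior-splitting argument: two competing
# frameworks receive comparable prior probability as wholes, but the
# framework with many tunable variations fragments its prior mass.
# The variant counts (1 vs. 1000) are hypothetical.

def per_variant_prior(framework_prior, n_variants):
    """Split a framework's total prior mass equally among its variants."""
    return framework_prior / n_variants

# Both frameworks start with the same total prior mass of 0.5.
beautiful = per_variant_prior(0.5, n_variants=1)     # one rigid formulation
generic = per_variant_prior(0.5, n_variants=1000)    # many tunable variations

print(beautiful / generic)  # the rigid theory is ~1000x more probable a priori
```

Nothing about the numbers is meant literally; the point is only that rigidity concentrates prior probability while tunability dilutes it.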

Sabine Hossenfelder's denial of this principle only shows her lack of familiarity with physics, its logic, and its history.
> You can thus never arrive at a theory that describes our universe without taking into account observations, period.
Whether someone has ever found important things without "any observations" is questionable. But it is still true and important that a good theorist may need 1,000 times less empirical data than a worse theorist to find and write down a correct theory, and a bad theorist will not find the right theory with arbitrarily large amounts of data! And that's the real "period", that's why the mathematical beauty is important for good theoretical physicists – and the others have almost no chance to make progress these days.
> The attempt to reduce axioms too much just leads to a whole “multiverse” of predictions, most of which don’t describe anything we will ever see.
I have already said that there is no relationship between the multiverse and the underdeterminedness of the sets of axioms.
> (The only other option is to just use all of mathematics, as Tegmark argues. You might like or not like that; at least it’s logically coherent. But that’s a different story and shall be told another time.)
But these comments of Tegmark's are purely verbal philosophical remarks without any scientific content. They don't imply anything for observations, not even in principle. For this reason, they have nothing to do with physical models of eternal inflation or the multiverse or even specific compactifications of string/M-theory, which are completely specific theories about Nature and the observations of it.
> Now if you have a theory that contains more than one universe, you can still try to find out how likely it is that we find ourselves in a universe just like ours. The multiverse-defenders therefore also argue for a modification of the scientific method, one that takes into account probabilistic predictions.
Most people writing papers about the multiverse – more precisely, papers evoking the anthropic principle – use the probability calculus incorrectly. But the general statement that invoking probabilities in deductions of properties of Nature is a "modification of the scientific method" is a total idiocy. The usage of probabilities has not merely been "allowed" in the scientific method for quite some time; in fact, science could never have been done without probabilities at all! All of science is about looking at the body of our observations and saying which explanation is more likely and which explanation is less likely.

And of course a theory with a "larger Universe than previously thought" and perhaps with some extra rules to pinpoint "our location" in this larger world is an OK competitor to describe the Universe a priori.

Every experimenter needs to do some calculations involving probabilities – probabilities that a slightly unexpected result is obtained by chance, and so on – all the time. Ms Hossenfelder just doesn't have a clue what science is.
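As a minimal sketch of the kind of routine calculation meant here (the numbers are hypothetical, not from any real experiment), one can compute the probability that a "slightly unexpected" event count in a counting experiment arises from background fluctuations alone, using the Poisson upper tail:

```python
import math

def poisson_upper_tail(n_obs, background):
    """P(N >= n_obs) under the background-only hypothesis: the chance
    that a slightly unexpected count arises purely by fluctuation."""
    # Sum the Poisson probabilities for 0, 1, ..., n_obs - 1 events,
    # then take the complement to get the upper tail.
    cdf = sum(background**k / math.factorial(k) for k in range(n_obs))
    return 1.0 - math.exp(-background) * cdf

# Hypothetical counting experiment: 3 background events expected, 8 seen.
p = poisson_upper_tail(8, 3.0)
print(f"p-value = {p:.4f}")
```

A p-value of about one percent, as here, is exactly the sort of number an experimenter must weigh before claiming anything unexpected.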
> In a Nature comment out today, George Ellis and Joe Silk argue that the trend of physicists to pursue untestable theories is worrisome.
> I agree with this, though I would have said the worrisome part is that physicists do not care enough about the testability — and apparently don’t need to care because they are getting published and paid regardless.
I don't get paid a penny but I am still able to see that the people whose first obsession is "testability" are either crackpots or third-class physicists such as Ms Hossenfelder who have no idea what they are talking about.

The purpose of science is to find the truth about Nature. Easy testability (in practice) means that there exists a procedure, an experimental procedure, that may accelerate the process by which we decide whether the hypothesis is true or not. But testability doesn't actually make the hypothesis true (or more true), and scientists are looking for correct theories, not falsifiable theories, which is an entirely different thing.

One could say that the less falsifiable a theory is, the better. We are looking for theories that withstand tests. So they won't be falsified anytime soon! A theory that has already resisted some attempts to be falsified is in a better shape than a theory that has already been falsified. The only "philosophical" feature of this kind that is important is that the propositions made by the theory are scientifically meaningful – i.e. having some non-tautological observable consequences in principle. If this is satisfied, the hypothesis is perfectly scientific and its higher likelihood to be falsified soon may only hurt. If one "knows" that a hypothesis is likely to die after a soon-to-be-performed experiment, it's probably because he "knows" that the hypothesis is actually unlikely.
> See, in practice the origin of the problem is senior researchers not teaching their students that physics is all about describing nature. Instead, the students are taught by example that you can publish and live from outright bizarre speculations as long as you wrap them into enough math.
Maybe this is what Ms Hossenfelder has learned from her superiors such as Mr Smolin but no one is teaching these things at good places – like those I have been affiliated with.
> I cringe every time a string theorist starts talking about beauty and elegance.
Because you are a stupid cringing crackpot.
> Whatever made them think that the human sense for beauty has any relevance for the fundamental laws of nature?
The history of physics, especially 20th century physics, plus the Bayesian arguments showing that more beautiful theories are more likely. The sense of beauty used by these physicists – one that works so often – is very different in some respects from the sense of beauty used by average humans or average women. But it also shares some features, so it is similar in other respects.

Even more important is to point out that this extended discussion about "strings and beauty" is a straw man because almost no arguments referring to "beauty" can be found in papers on string theory. Many string theorists would actually disagree that "beauty" is a reason why they think that the theory is on the right track. Ms Hossenfelder is basically proposing illogical connections between her numerous claims, all of which happen to be incorrect.

I will omit one paragraph repeating content-free clichés that science describes Nature. Great, I agree that science describes Nature.
> Call them mathematics, art, or philosophy, but if they don’t describe nature don’t call them science.
The only problem is that all theories that Ms Hossenfelder has targeted for her criticism do describe Nature and are excellent and sometimes paramount additions to science (sometimes nearly established ones, sometimes very promising ones), unlike everything that Ms Hossenfelder and similar "critics of physics" have ever written in their whole lives.

### Tommaso Dorigo - Scientificblogging

Reviews In Physics - A New Journal
The publishing giant Elsevier is about to launch a new journal, Reviews in Physics. This will be a fully open-access, peer-reviewed journal which aims at providing short reviews (15 pages maximum) on physics topics at the forefront of research. The web page of the journal is here, and a screenshot is shown below.

### CERN Bulletin

CERN Bulletin Issue No. 04-05/2015
Link to e-Bulletin Issue No. 04-05/2015. Link to all articles in this issue.

### Emily Lakdawalla - The Planetary Society Blog

It's Official: LightSail Test Flight Scheduled for May 2015
This May, the first of The Planetary Society's two member-funded LightSail spacecraft is slated to hitch a ride to space for a test flight aboard an Atlas V rocket.

## January 25, 2015

### Christian P. Robert - xi'an's og

a week in Oxford

I spent [most of] the past week in Oxford in connection with our joint OxWaSP PhD program, which is supported by the EPSRC and constitutes a joint Centre of Doctoral Training in statistical science focussing on data-intensive environments and large-scale models. The first cohort of a dozen PhD students started their training last Fall with the first year spent in Oxford, before splitting between Oxford and Warwick to write their theses. Courses are taught over a two-week block, with a two-day introduction to the theme (Bayesian Statistics in my case), followed by reading, meetings, daily research talks, mini-projects, and a final day in Warwick including presentations of the mini-projects and a concluding seminar (involving Jonty Rougier and Robin Ryder, next Friday). This approach by bursts of training periods is quite ambitious in that it requires a lot from the students, both through the lectures and in personal investment, and reminds me somewhat of a similar approach at École Polytechnique where courses are given over fairly short periods. But it is also profitable for highly motivated and selected students in that total immersion into one topic and a large amount of collective work bring them up to speed with a reasonable basis and the option to write their thesis on that topic. Hopefully, I will see some of those students next year in Warwick working on some Bayesian analysis problem!

On a personal basis, I also enjoyed very much my time in Oxford, first for meeting with old friends, albeit too briefly, and second for cycling, as the owner of the great Airbnb place I rented kindly let me use her bike, which allowed me to move around quite freely! Even on a train trip to Reading. As it was a road racing bike, it took me a trip or two to get used to it, especially on the first day when the roads were somewhat icy, but I enjoyed its lightness, relative to my lost mountain bike, to the point of considering switching to a road bike for my next bike… I also had some apprehensions about cycling at night, which I avoid while in Paris, but got over them until the very last night when I had a very close brush with a car entering from a side road, which either had not seen me or thought I would let it pass. Gave me the opportunity of shouting Oï!

Filed under: Books, Kids, pictures, Statistics, Travel, University life Tagged: airbnb, Bayesian statistics, EPSRC, mountain bike, PhD course, PhD students, slides, slideshare, stolen bike, The Bayesian Choice, University of Oxford, University of Warwick

### arXiv blog

First Videos Created of Whole Brain Neural Activity in an Unrestrained Animal

Neuroscientists have recorded the neural activity in the entire brains of freely moving nematode worms for the first time.

The fundamental challenge of neuroscience is to understand how the nervous system controls an animal’s behavior. In recent years, neuroscientists have made great strides in determining how the collective activity of many individual neurons is critical for controlling behaviors such as arm reach in primates, song production in the zebra finch and the choice between swimming or crawling in leeches.

### Peter Coles - In the Dark

Social Physics & Astronomy

When I give popular talks about Cosmology, I sometimes look for appropriate analogies or metaphors in television programmes about forensic science, such as CSI: Crime Scene Investigation, which I watch quite regularly (to the disdain of many of my colleagues and friends). Cosmology is methodologically similar to forensic science because it is generally necessary in both these fields to proceed by observation and inference, rather than experiment and deduction: cosmologists have only one Universe; forensic scientists have only one scene of the crime. They can collect trace evidence, look for fingerprints, establish or falsify alibis, and so on. But they can’t do what a laboratory physicist or chemist would typically try to do: perform a series of similar experimental crimes under slightly different physical conditions. What we have to do in cosmology is the same as what detectives do when pursuing an investigation: make inferences and deductions within the framework of a hypothesis that we continually subject to empirical test. This process carries on until reasonable doubt is exhausted, if that ever happens.

Of course there is much more pressure on detectives to prove guilt than there is on cosmologists to establish the truth about our Cosmos. That’s just as well, because there is still a very great deal we do not know about how the Universe works. I have a feeling that I’ve stretched this analogy to breaking point, but at least it provides some kind of excuse for writing about an interesting historical connection between astronomy and forensic science by way of the social sciences.

The gentleman shown in the picture on the left is Lambert Adolphe Jacques Quételet, a Belgian astronomer who lived from 1796 to 1874. His principal research interest was in the field of celestial mechanics. He was also an expert in statistics. In Quételet’s time it was by no means unusual for astronomers to be well-versed in statistics, but he was exceptionally distinguished in that field. Indeed, Quételet has been called “the father of modern statistics” and, amongst other things, he was responsible for organizing the first ever international conference on statistics in Paris in 1853.

His fame as a statistician owed less to statistics’ applications to astronomy, however, than to the fact that in 1835 he had written a very influential book which, in English, was titled A Treatise on Man but whose somewhat more verbose original French title included the phrase physique sociale (“social physics”). I don’t think modern social scientists would see much of a connection between what they do and what we do in the physical sciences. Indeed the philosopher Auguste Comte was annoyed that Quételet appropriated the phrase “social physics” because he did not approve of the quantitative, statistics-based approach that it had come to represent. For that reason Comte ditched the term from his own work and invented the modern subject of sociology…

Quételet had been struck not only by the regular motions performed by the planets across the sky, but also by the existence of strong patterns in social phenomena, such as suicides and crime. If statistics was essential for understanding the former, should it not be deployed in the study of the latter? Quételet’s first book was an attempt to apply statistical methods to the development of man’s physical and intellectual faculties. His follow-up book Anthropometry, or the Measurement of Different Faculties in Man (1871) carried these ideas further, at the expense of a much clumsier title.

This foray into “social physics” was controversial at the time, for good reason. It also made Quételet extremely famous in his lifetime and his influence became widespread. For example, Francis Galton wrote about the deep impact Quételet had on a person who went on to become extremely famous:

Her statistics were more than a study, they were indeed her religion. For her Quételet was the hero as scientist, and the presentation copy of his “Social Physics” is annotated on every page. Florence Nightingale believed – and in all the actions of her life acted on that belief – that the administrator could only be successful if he were guided by statistical knowledge. The legislator – to say nothing of the politician – too often failed for want of this knowledge. Nay, she went further; she held that the universe – including human communities – was evolving in accordance with a divine plan; that it was man’s business to endeavour to understand this plan and guide his actions in sympathy with it. But to understand God’s thoughts, she held we must study statistics, for these are the measure of His purpose. Thus the study of statistics was for her a religious duty.

The person  in question was of course  Florence Nightingale. Not many people know that she was an adept statistician who was an early advocate of the use of pie charts to represent data graphically; she apparently found them useful when dealing with dim-witted army officers and dimmer-witted politicians.

The type of thinking described in the quote  also spawned a number of highly unsavoury developments in pseudoscience, such as the eugenics movement (in which Galton himself was involved), and some of the vile activities related to it that were carried out in Nazi Germany. But an idea is not responsible for the people who believe in it, and Quételet’s work did lead to many good things, such as the beginnings of forensic science.

A young medical student by the name of Louis-Adolphe Bertillon was excited by the whole idea of “social physics”, to the extent that he found himself imprisoned for his dangerous ideas during the revolution of 1848, along with one of his professors, Achille Guillard, who later invented the subject of demography, the study of racial groups and regional populations. When they were both released, Bertillon became a close confidant of Guillard and eventually married his daughter Zoé. Their second son, Alphonse Bertillon, turned out to be a prodigy.

Young Alphonse was so inspired by Quételet’s work, which had no doubt been introduced to him by his father, that he hit upon a novel way to solve crimes. He would create a database of measured physical characteristics of convicted criminals. He chose 11 basic measurements, including length and width of head, right ear, forearm, middle and ring fingers, left foot, height, length of trunk, and so on. On their own none of these individual characteristics could be probative, but it ought to be possible to use a large number of different measurements to establish identity with a very high probability. Indeed, after two years’ study, Bertillon reckoned that the chances of two individuals having all 11 measurements in common were about four million to one. He further improved the system by adding photographs, in portrait and from the side, and a note of any special marks, like scars or moles.
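Bertillon's quoted odds can be sanity-checked with a back-of-envelope model. If each of the 11 measurements independently falls into one of four equally likely classes (the four-way binning is an assumption made here for illustration, not Bertillon's documented scheme), the number of distinct profiles comes out at about four million:

```python
# Back-of-envelope check of the "four million to one" figure: with 11
# independent measurements, each binned into one of four equally likely
# classes (an illustrative assumption), the profile count is 4**11.
measurements = 11
classes_per_measurement = 4
profiles = classes_per_measurement ** measurements
print(profiles)  # 4194304 — about four million, matching the quoted odds
```

The exact binning Bertillon used is beside the point; the arithmetic shows how modestly informative traits multiply into near-unique profiles.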

Bertillonage, as this system became known, was rather cumbersome but proved highly successful in a number of high-profile criminal cases in Paris. By 1892, Bertillon was exceedingly famous but nowadays the word bertillonage only appears in places like the Observer’s Azed crossword.

The main reason why Bertillon’s fame subsided and his system fell into disuse was the development of an alternative and much simpler method of criminal identification: fingerprints. The first systematic use of fingerprints on a large scale was implemented in India in 1858 in an attempt to stamp out electoral fraud.

The name of the British civil servant who had the idea of using fingerprinting in this way was Sir William James Herschel (1833-1917), the eldest child of Sir John Herschel, the astronomer, and thus the grandson of Sir William Herschel, the discoverer of Uranus. Another interesting connection between astronomy and forensic science.

### astrobites - astro-ph reader's digest

Grad students: apply now for ComSciCon 2015!

ComSciCon 2015 will be the third in the annual series of Communicating Science workshops for graduate students

Applications are now open for the Communicating Science 2015 workshop, to be held in Cambridge, MA on June 18-20th, 2015!

Graduate students at US institutions in astronomy, and all fields of science and engineering, are encouraged to apply. The application will close on March 1st.

It’s been more than two years since we announced the first ComSciCon workshop here on Astrobites. Since then, we’ve received almost 2000 applications from graduate students across the country, and we’ve welcomed about 150 of them to three national and local workshops held in Cambridge, MA. You can read about last year’s workshop to get a sense for the activities and participants at ComSciCon events.

While acceptance to the workshop is competitive, attendance of the workshop is free of charge and travel support will be provided to accepted applicants.

Participants will build the communication skills that scientists and other technical professionals need to express complex ideas to their peers, experts in other fields, and the general public. There will be panel discussions on the following topics:

• Communicating with Non-Scientific Audiences
• Science Communication in Popular Culture
• Communicating as a Science Advocate
• Multimedia Communication for Scientists

In addition to these discussions, ample time is allotted for interacting with the experts and with attendees from throughout the country to discuss science communication and develop science outreach collaborations. Workshop participants will produce an original piece of science writing and receive feedback from workshop attendees and professional science communicators, including journalists, authors, public policy advocates, educators, and more.

ComSciCon attendees have founded new science communication organizations in collaboration with other students at the event, published more than 25 articles written at the conference in popular publications with national impact, and formed lasting networks with our student alumni and invited experts. Visit the ComSciCon website to learn more about our past workshop programs and participants.

Group photo at the 2014 ComSciCon workshop

If you can’t make it to the national workshop in June, check to see whether one of our upcoming regional workshops would be a good fit for you.

This workshop is sponsored by Harvard University, the Massachusetts Institute of Technology, University of Colorado Boulder, the American Astronomical Society, the American Association for the Advancement of Science, the American Chemical Society, and Microsoft Research.

### Peter Coles - In the Dark

Last days on the Ice

Earlier this month I reblogged a post about the launch of the balloon-borne SPIDER experiment in Antarctica. Here’s a follow up from last week. Spider parachuted back down to the ice on January 17th and was recovered successfully. Now the team will be leaving the ice and returning home, hopefully with some exciting science results!

I’d love to go to Antarctica, actually. When I was finishing my undergraduate studies at Cambridge I applied for a place on the British Antarctic Survey, but didn’t get accepted. I don’t suppose I’ll get the chance now, but you never know…

Originally posted on SPIDER on the Ice:

Four of the last five of the SPIDER crew – Don, Ed, Sasha, and I – are slated to leave the Ice tomorrow morning. That means this is probably my last blog post – at least until SPIDER 2! It has been an incredible few months, but I can’t say I’m all that sad for it to be ending. I’m ready to have an adventure in New Zealand and then get home to all the people I’ve missed so much while I’ve been away.

As is the nature of field campaigns, it has been an absolute roller coaster, but the highs have certainly made the lows fade in my memory. We got SPIDER on that balloon, and despite all of the complexities and possible points of failure, it worked. That’s a high I won’t be coming down from any time soon.

On top of success with our experiment, we’ve also had the privilege of…


### Tommaso Dorigo - Scientificblogging

The Plot Of The Week: CMS Search For Majorana Neutrinos
The CMS collaboration yesterday released the results of a search for Majorana neutrinos in dimuon data collected by the CMS detector in 8 TeV proton-proton collisions delivered by the LHC in 2012. If you are short of time and just need an executive summary, here it is: no such thing is seen, unfortunately, and limits are set on the production rate of heavy neutrinos N as a function of their mass. If you have five spare minutes, however, you might be interested in some more detail of the search and its results.

## January 24, 2015

### Geraint Lewis - Cosmic Horizons

The Constant Nature of the Speed of light in a vacuum
Wow! It has been a while, but I do have an excuse! I have been finishing up a book on the fine-tuning of the Universe, and hopefully it will be published (and will become a really big best seller?? :) in 2015. But it’s time to rebirth the blog, and what better way to start than with a gripe.

There's been some chatter on the interweb about a recent story that the speed of light in a vacuum has been slowed down. Here's one story; here's another. Some of these squeak loudly about how the speed of light may not be "a constant", implying that something has gone horribly wrong with the Universe. Unfortunately, some of my physicsy colleagues were equally shocked by the result.

Why would one be shocked? Well, the speed of light being constant to all observers is central to Einstein's Special Theory of Relativity. Surely if these results are right, and Einstein is wrong, then science is a mess, etc etc etc.

Except there is nothing mysterious about this result. Nothing strange. In fact it was completely expected. The question boils down to what you mean by speed.
Now, you might be thinking that speed is simply related to the time it takes for a thing to travel from here to there. But we're dealing with light here, which in classical physics is represented by oscillations in an electromagnetic field, and in our quantum picture by oscillations in the wave function; the difference is not important here.

When you first encounter electromagnetic radiation (i.e. light) you are often given a simple example of a single wave propagating in a vacuum. Every student of physics will have seen this picture at some point: the electric (and magnetic) fields oscillate as a sine wave, and the speed at which bumps in the wave move forward is the speed of light. This was one of the great successes of James Clerk Maxwell, one of the greatest physicists who ever lived. In his work, he fully unified electricity and magnetism and showed that electromagnetic radiation, light, was the natural consequence.

Without going into too many specific details, this is known as the phase velocity. For light in a vacuum, the phase velocity is equal to c.

One of the coolest things I ever learnt was Fourier series, or the notion that you can construct arbitrary wave shapes by adding together sine and cosine waves. This still freaks me out a bit to this day, but instead of an electromagnetic wave being a simple sine or cosine, you can add waves to create a wave packet, basically a lump of light.
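That "freaky" fact is easy to check numerically. Here is a minimal sketch (assuming NumPy) that builds something very un-sine-like, a square wave, out of nothing but sine waves:

```python
import numpy as np

# Partial Fourier series of a square wave: only odd harmonics appear,
# f(t) = (4/pi) * sum over odd k of sin(k*t) / k
t = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
square = np.zeros_like(t)
for k in range(1, 200, 2):          # odd harmonics 1, 3, 5, ..., 199
    square += (4 / np.pi) * np.sin(k * t) / k

# The sum of smooth sine waves approaches +1 on (0, pi) and -1 on (pi, 2*pi)
print(square[250], square[750])     # values at t = pi/2 and t = 3*pi/2
```

With a hundred harmonics the sum is already flat to better than a percent away from the jumps.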

But when you add waves together, the resulting lump doesn't travel at the same speed as the waves that comprise the packet. The lump moves with what's known as the group velocity. Now, the group velocity and the phase velocity are, in general, different. In fact, they can be very different: it is possible to construct a packet that does not move at all, while all the waves making up the packet are moving at c!
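You can watch this happen in a few lines of code. The sketch below (assuming NumPy, and using a made-up dispersion relation ω(k) = √(1 + k²) purely for illustration) superposes plane waves with a narrow spread of wavenumbers and measures how fast the resulting lump travels: it moves at the group velocity dω/dk, not the phase velocity ω/k.

```python
import numpy as np

# Build a localized wave packet: plane waves with a Gaussian spread of
# wavenumbers around k0, with the (made-up) dispersion omega(k) = sqrt(1 + k^2)
k0, sigma = 5.0, 0.3
ks = np.linspace(k0 - 3 * sigma, k0 + 3 * sigma, 201)
amps = np.exp(-((ks - k0) / sigma) ** 2 / 2)
omega = np.sqrt(1 + ks ** 2)

x = np.linspace(-20, 60, 4000)

def packet(t):
    # superpose the plane waves cos(k*x - omega*t)
    return (amps[:, None] * np.cos(ks[:, None] * x - omega[:, None] * t)).sum(axis=0)

def centroid(t):
    # intensity-weighted centre of the lump
    p = packet(t) ** 2
    return (x * p).sum() / p.sum()

v_group = k0 / np.sqrt(1 + k0 ** 2)   # d(omega)/dk at k0, ~0.98
v_phase = np.sqrt(1 + k0 ** 2) / k0   # omega/k at k0, ~1.02
measured = (centroid(20.0) - centroid(0.0)) / 20.0
print(measured)   # tracks v_group, not v_phase
```

The measured speed of the lump lands on the group velocity; with a dispersion relation engineered so that dω/dk is tiny, the lump would barely move at all even though each component wave races along.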

So, this result was achieved by manipulating the waves to produce a packet whose group velocity was measurably smaller than that of a simple wave. That's it! Now, this is not meant to diminish the work of the experimenters, as this is not easy to set up and measure, but it means nothing for the speed of light, relativity, etc etc. And the researchers know that!

And as I mentioned, the difference between phase and group velocity has been understood for a long time, with work by Hamilton (of Hamiltonian fame) in 1839 and Rayleigh in 1877. These initial studies were of waves in general, mainly sound waves, not necessarily light, but the mathematics is basically the same.

Before I go: one of the best courses I took as an undergraduate was called vibrations and waves. At the time, I didn't really see the importance of what I was learning, but the mathematics was cool. I still love thinking about it. Over the years, I've come to realise that waves are everywhere, all throughout physics, science, and, well, everything. Want to model a flag? Make a ball-and-spring model. Want to make a model of matter? Ball and spring. And watch the vibrations!

Don't believe me? Watch this - waves are everywhere.

### Christian P. Robert - xi'an's og

would you wear those tee-shirts?!

Here are two examples of animal “face” tee-shirts I saw advertised in The New York Times and that I would not consider wearing. At any time.

Filed under: Kids, pictures Tagged: animals, Asian lady beetle, fashion, tarsier, tee-shirt, The New York Times

### Emily Lakdawalla - The Planetary Society Blog

Lowell Observatory's Matthew Knight addresses several points of confusion that have repeatedly come up in the coverage of Comet Lovejoy.

## January 23, 2015

### CERN Bulletin

CERN Bulletin Issue No. 01-02/2015
Link to e-Bulletin Issue No. 01-02/2015. Link to all articles in this issue.

### CERN Bulletin

CERN Bulletin Issue No. 51-52/2014
Link to e-Bulletin Issue No. 51-52/2014. Link to all articles in this issue.

### CERN Bulletin

CERN Bulletin Issue No. 04-05/2015
Link to e-Bulletin Issue No. 04-05/2015. Link to all articles in this issue.

### astrobites - astro-ph reader's digest

The Age of Solar System Exploration
If you haven’t heard about the Rosetta mission, and the European Space Agency’s remarkable feat of landing on a comet, then you must be like its lander Philae: living under a rock.

What you probably also didn’t hear much about is the slew of other ways (both in the recent past and the near future) we are getting up close and personal with the more unusual parts of the Solar System. The tiny stuff; the names you didn’t memorize in grade school. That one that doesn’t get to play with the “big boys” anymore. The years 2014 and 2015 may well be known as the time when our exploration of the solar system truly took off, as we explored asteroids, comets, and minor planets.

Here’s a look back at what we’ve accomplished in the last year, and what we’re about to achieve in the year to come.

Scroll to the bottom to see an abbreviated list of important upcoming events for these missions.

The most recent image of Comet 67P/C-G, taken on January 16th by the orbiting Rosetta spacecraft. Rosetta and its lander Philae (the first objects to orbit and land on a comet) will follow the comet through its orbit to closest solar approach in August 2015. Image c/o ESA

ESA’s Rosetta Mission Lands on a Comet: August 2014 – December 2015

One of the biggest science news pieces of the year: the European Space Agency’s Rosetta spacecraft reached Comet 67P/C-G. It entered orbit on August 6th, 2014, after a journey of more than 10 years and 6.4 billion kilometers. On November 12th, the spacecraft’s landing probe Philae became the first man-made object to land on a comet. Unfortunately, a malfunction in the landing system resulted in Philae bouncing a kilometer off the surface, eventually coming to rest in the shadow of a cliff.

Unable to get adequate sunlight to charge its batteries, Philae quickly went into hibernation mode. Before shutting down it was able to return measurements of water vapor, but it was unable to drill into the surface to measure the content of the solid ice.

Many models suggest that the high water content of Earth may have come from collisions with comets or asteroids during the late stages of the Earth’s formation. Most water is made of ordinary hydrogen and oxygen, but a tiny fraction contains a deuterium atom (a hydrogen isotope made of a proton and a neutron) in hydrogen’s place. One of the main scientific goals of pursuing comets is to identify the source of Earth’s water. The key to accomplishing this is to see if the abundance of deuterium (see this Astrobite) in a comet’s water matches the levels found on Earth. Philae’s water vapor measurements indicate a deuterium abundance more than 3 times higher than on Earth. Perhaps this suggests asteroids are more responsible for Earth’s water supply than comets.
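As a rough arithmetic check on that “more than 3 times” figure, here are the commonly quoted values (approximate: the published Rosetta-era measurement for 67P, and the standard for Earth’s ocean water):

```python
# Deuterium-to-hydrogen ratios (approximate published values)
d_to_h_earth = 1.56e-4    # Earth's ocean water (the VSMOW standard)
d_to_h_67p = 5.3e-4       # Comet 67P, from the Rosetta mission's measurements

enhancement = d_to_h_67p / d_to_h_earth
print(enhancement)        # roughly 3.4, i.e. "more than 3 times higher"
```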

The Rosetta team hopes to confirm this result, and perhaps obtain an ice sample with Philae’s drill, if the lander wakes up. Most of the team has little doubt the lander will resume function in the coming months. But since the orbiting Rosetta has yet to pinpoint Philae’s final landing spot, when the probe will be able to get the 5 to 7 additional watts of energy it needs is a tough question to answer. The comet (and accompanying spacecraft) is approaching the Sun and will reach its closest point in August 2015. The team hopes the change in scenery may bring more sunlight to Philae’s solar cells.

In the meantime, Rosetta will make a close approach to the comet in February 2015 — snapping photos that should resolve details down to a few inches — and is planning to soar through an outgassing jet in July, when the comet’s tail begins to form. The Rosetta mission is scheduled to end in December 2015, although large public support for the mission may help researchers extend its lifetime into 2016.

NASA’s spacecraft Dawn captures images of Ceres: our nearest dwarf planet neighbor and the largest asteroid in the asteroid belt. Dawn will enter an orbit around Ceres beginning in March 2015. Image c/o: NASA/JPL

NASA’s Dawn Mission Orbits Asteroid Ceres: March 2015 – July 2015

A few years ago, NASA’s Dawn spacecraft spent about 12 months orbiting Vesta, one of the largest asteroids in the asteroid belt. For more details about Dawn’s encounter with Vesta, see this past Astrobite.

After leaving orbit around Vesta, Dawn spent two and a half years traveling across the asteroid belt to catch up to Ceres, the largest known asteroid. On its own, Ceres makes up about 30% of the mass of the entire asteroid belt. It’s so massive that its gravity is strong enough to shape it into a rough sphere, so Ceres is also classified as a dwarf planet.

This week, NASA released the latest images of Ceres, taken as Dawn approaches the asteroid. In just a few weeks, on March 6th 2015, Dawn will enter orbit around Ceres. Asteroids and comets are pieces of the debris left over from the formation of the solar system planets. NASA wants to understand Ceres’ formation, its material makeup, and why it didn’t grow any larger. This information will help distinguish between theories that describe how planets formed in our solar system.

As has previously been discussed on Astrobites, long distance observations indicate the presence of water on the dwarf planet. Just like comets, asteroids may be responsible for delivering the Earth’s water supply, and the Dawn team hopes to improve upon these measurements. Dawn’s main science mission continues until July 2015, after which it will be shut off and remain in orbit around Ceres for a very long time.

An artist’s illustration of NASA’s New Horizons spacecraft, which will pass by Pluto in July 2015. The spacecraft will not be able to maintain an orbit around the tiny dwarf planet, but will instead fly farther out into the Kuiper Belt. Image c/o: NASA/JPL

NASA’s New Horizons Flies by Pluto into the Kuiper Belt: July 14 2015

Launched in 2006, NASA’s New Horizons has been navigating space for over 9 years, and has perhaps the most exciting itinerary of all the spacecraft on this list. To save on fuel, New Horizons executed a gravity assist (or slingshot) maneuver around Jupiter in February 2007. Some beautiful photos of Jupiter resulted as an added benefit of this layover.

New Horizons has been in frequent phases of hibernation since its encounter with Jupiter, and is now making its approach to Pluto: probably the most popular dwarf planet. On July 14th 2015, New Horizons will make its closest approach, within 10,000 kilometers of Pluto. The spacecraft won’t be stopping at Pluto, either, but will continue into the Kuiper Belt to investigate objects astronomers can barely see from Earth. To learn more about New Horizons, and its path after Pluto, see this Astrobite.

The Future of Solar System Science

With Rosetta, Dawn, and New Horizons continuing to gather information, the future looks bright for humanity’s goal of understanding our solar system. Asteroids, comets, dwarf planets, and Kuiper Belt Objects hold many clues for how the planets — including our own — formed from the initial ingredients around the Sun. In the next decade, NASA hopes to complete a mission to capture an asteroid and bring it into orbit around the Moon. This would be a remarkable opportunity to study the remnants of the solar system’s formation.

For the present, here is a timeline of the most important events coming in the future of space exploration:

February 2015: Rosetta makes close approach to Comet 67P/C-G, resolving features as small as several inches.
March 6 2015: Dawn enters orbit around Ceres
Spring/Summer 2015: Rosetta’s lander Philae (hopefully) wakes up and takes new samples from the surface
July 14 2015: New Horizons makes closest ever approach of Pluto, on its way into the Kuiper Belt
July 2015: Rosetta scheduled to make pass through the comet’s outgassing jet
July 2015: Scheduled end of Dawn science mission
August 2015: Comet 67P/C-G closest approach of Sun: Rosetta observes comet activity and tail
December 2015: Scheduled end of Rosetta science mission
January 2019: New Horizons makes possible pass-by of Kuiper Belt Object 1110113Y

### Symmetrybreaking - Fermilab/SLAC

Superconducting electromagnets of the LHC

You won't find these magnets in your kitchen.

Magnets are something most of us are familiar with, but you may not know that magnets are an integral part of almost all modern particle accelerators. These magnets aren’t the same as the ones that held your art to your parents’ refrigerator when you were a kid. Although they have a north and south pole just as your fridge magnets do, accelerator magnets require quite a bit of engineering.

When an electrically charged particle such as a proton moves through a constant magnetic field, it moves in a circular path. The size of the circle depends on both the strength of the magnets and the energy of the beam. Increase the energy, and the ring gets bigger; increase the strength of the magnets, and the ring gets smaller.

The Large Hadron Collider is an accelerator, a crucial word that reminds us that we use it to increase the energy of the beam particles. If the strength of the magnets remained the same, then as we increased the beam energy, the size of the ring would similarly have to increase. Since the size of the ring necessarily remains the same, we must increase the strength of the magnets as the beam energy is increased. For that reason, particle accelerators employ a special kind of magnet.
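The trade-off between beam energy and field strength follows from the standard magnetic-rigidity relation, p ≈ 0.3 B r (momentum in GeV/c, field in tesla, bending radius in metres, for a particle of unit charge). A quick sanity check with approximate LHC numbers:

```python
# Magnetic rigidity for a charge-1 particle:
# p [GeV/c] ~= 0.3 * B [tesla] * r [metres]
def momentum_gev(b_tesla, r_metres):
    return 0.3 * b_tesla * r_metres

# Approximate LHC values: ~8.3 T dipole field, ~2800 m effective bending radius
p = momentum_gev(8.3, 2800)
print(p)   # roughly 7000 GeV/c, matching the 7 TeV design beam energy

# The ring is fixed (r constant), so doubling the beam energy
# demands doubling the field strength
print(momentum_gev(2 * 8.3, 2800))   # roughly twice p
```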

When you run an electric current through a wire, it creates a magnetic field; the strength of the magnetic field is proportional to the amount of electric current. Magnets created this way are called electromagnets. By controlling the amount of current, we can make electromagnets of any strength we want. We can even reverse the magnet’s polarity by reversing the direction of the current.

Given the connection between electrical current and magnetic field strength, it is clear that we need huge currents in our accelerator magnets. To accomplish this, we use superconductors, materials that lose their resistance to electric current when they are cooled enough. And “cooled” is an understatement. At 1.9 kelvin (about 450 degrees Fahrenheit below zero), the centers of the magnets at the LHC are among the coldest places in the universe—colder than the space between galaxies.
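A one-line check of the quoted Fahrenheit figure:

```python
# Convert the LHC magnets' 1.9 K operating temperature to Fahrenheit
def kelvin_to_fahrenheit(k):
    return k * 9 / 5 - 459.67

lhc_f = kelvin_to_fahrenheit(1.9)
print(lhc_f)   # about -456 F, i.e. roughly 450 degrees below zero
```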

Given the central role of magnets in modern accelerators, scientists and engineers at Fermilab and CERN are constantly working to make even stronger ones. Although the main LHC magnets can generate a magnetic field about 800,000 times that generated by the Earth, future accelerators will require even more. The technology of electromagnets, first observed in the early 1800s, is a vibrant and crucial part of the laboratories’ futures.

Like what you see? Sign up for a free subscription to symmetry!

## January 22, 2015

### astrobites - astro-ph reader's digest

Simulating X-ray Binary Winds
• Title: Stellar wind in state transitions of high-mass X-ray binaries
• Authors: J. Čechura and P. Hadrava
• First Author’s Institution: Astronomical Institute, Academy of Sciences, Czech Republic
• Paper Status: Accepted for publication in Astronomy & Astrophysics

A 3D surface model of X-ray binary Cygnus X-1. Contours and lines represent regions of equal density. Fig. 10 from the paper.

How do you simulate a massive star’s behavior when its closest neighbor is a black hole? Astronomers routinely make simplifying assumptions to understand how stars behave. If there are thousands of stars orbiting one another, treat them as point masses. If there is a single, solitary star, treat it as a perfectly symmetrical sphere. But just like massless pendulums and frictionless pulleys, these ideal scenarios aren’t reality. Sometimes, to truly understand stars, you need to roll up your sleeves and start thinking about pesky details—things like three dimensions, X-ray photoionization, and the Coriolis force.

Windy with a chance of X-rays

In today’s paper, Čechura and Hadrava examine what happens to the runaway gas from the surface of massive stars—the stellar wind. In particular, they look at systems with massive stars so close to a companion neutron star or black hole that the stellar wind is jarred into a new orbit and heated to the point of emitting X-rays. This is a high-mass X-ray binary.

The authors begin with a 2D model to understand how the stellar wind behaves differently when one star is more or less massive than the other, or when the wind itself is programmed into the model in subtly different ways. As it turns out, emitting tons of X-rays is more than the end result of stellar wind particles slamming into an accretion disk. Those X-rays continue the story by ionizing nearby gas and slowing down the incoming wind. When the wind slows, the overall shape of the system changes thanks to gravity and the Coriolis force, which in turn affects how many X-rays are emitted!

Cygnus X-1’s split personality

With these variables better understood, the authors create a full-fledged 3D hydrodynamic model of a high-mass X-ray binary. A 3D model returns more accurate densities and velocities than a 2D model because the geometry is more realistic. They base this simulation on the well-studied X-ray binary Cygnus X-1, which is generally observed in one of two states: either it is emitting relatively few X-rays of high energy (low/hard), or it is emitting many X-rays of low energy (high/soft). In the low/hard state, wind from the massive star is actively flowing into an accretion disk around the companion. The high/soft state takes over when that flow is disrupted.

To simulate the transition from Cygnus X-1’s low/hard state to its high/soft state, the authors suddenly increase the X-ray luminosity of the compact companion. As a result, gas in the stellar wind never makes it to the accretion disk because it is bombarded with X-rays. It turns out that this X-ray photoionization process is even more important than the simpler 2D model suggested.

3D cross-section of Cygnus X-1’s stellar wind in the low/hard X-ray state, when material is flowing into the compact companion’s accretion disk. Each column represents a 90-degree change in viewing angle. From top to bottom, the rows show particle density, velocity magnitude, and degree of ionization. The black region in the ionization panels is an X-ray shadow, where no particles are photoionized. Fig. 7 from the paper.

3D cross-section of Cygnus X-1’s stellar wind in the high/soft X-ray state, when material from the stellar wind is not flowing into the compact companion’s accretion disk. As in the previous figure, the columns show three mutually perpendicular viewing angles and the rows show different physical parameters (density, velocity magnitude, and degree of ionization). Fig. 8 from the paper.

Of course, even this detailed 3D model isn’t perfect. In the future, the authors would like to more accurately consider radiative transfer as well as account for turbulence in the stellar wind. And Cygnus X-1 is a single test case! Still, this is a huge step forward from point masses, perfect spheres, or even a 2D simulation. Half the challenge in simulating reality is choosing which assumptions are reasonable tradeoffs to construct a useful model, and this paper illustrates just how important X-rays are in determining the behavior of an X-ray binary.

### Symmetrybreaking - Fermilab/SLAC

DECam’s nearby discoveries

The Dark Energy Camera does more than its name would lead you to believe.

The Dark Energy Camera, or DECam, peers deep into space from its mount on the 4-meter Victor Blanco Telescope high in the Chilean Andes.

Thirty percent of the camera’s observing time—about 105 nights per year—goes to the team that built it: scientists working on the Dark Energy Survey.

Another small percentage of the year is spent on maintenance and upgrades to the telescope. So who else gets to use DECam? Dozens of other projects share its remaining time.

Many of them study objects far across the cosmos, but five of them investigate ones closer to home.

Overall, these five groups take up just 20 percent of the available time, but they’ve already taught us some interesting things about our planetary neighborhood and promise to tell us more in the future.

#### Far-out asteroids

Stony Brook University’s Aren Heinze and the University of Western Ontario’s Stanimir Metchev used DECam for four nights in early 2014 to search for unknown members of our solar system’s main asteroid belt, which sits between Mars and Jupiter.

To detect such faint objects, one needs a long exposure. However, these asteroids are close enough to Earth that they move noticeably across the sky, so an exposure longer than a few minutes smears them into blurred streaks. Heinze and Metchev’s fix was to stack more than 100 images, each taken in less than two minutes.
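The stacking trick can be sketched in a few lines. In this toy model (made-up numbers, assuming NumPy), a source too faint to see in any single short exposure becomes obvious when the frames are shifted along its expected motion before summing, whereas a plain sum smears its light across many pixels:

```python
import numpy as np

rng = np.random.default_rng(0)

n_frames, size, drift = 100, 64, 1   # drift: pixels of motion per frame
frames = rng.normal(0.0, 1.0, (n_frames, size, size))
for i in range(n_frames):
    # faint moving source: signal-to-noise only ~0.8 in any single frame
    frames[i, 32, (5 + i * drift) % size] += 0.8

# Plain sum: the source's light is spread over 100 different pixels
plain = frames.sum(axis=0)

# Shift-and-stack: undo the predicted drift frame by frame, then sum,
# so the source's light piles up in one pixel while the noise averages down
stacked = sum(np.roll(frames[i], -i * drift, axis=1) for i in range(n_frames))

print(stacked[32, 5] / stacked.std())   # the source now stands out clearly
print(plain[32, 5] / plain.std())       # buried in noise in the plain sum
```

The signal grows linearly with the number of frames while the noise grows only as its square root, which is exactly why stacking a hundred two-minute exposures beats one blurred long one.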

With this method, the team expects to measure the positions, motions and brightnesses of hundreds of main belt asteroids not seen before. They plan to release their survey results in late 2015, and an early partial analysis indicates they’ve already found hundreds of asteroids in a region smaller than DECam’s field of view—about 20 times the area of the full moon.

#### Whole new worlds

Scott Sheppard of the Carnegie Institution for Science in Washington DC and Chad Trujillo of Gemini Observatory in Hilo, Hawaii, use DECam to look for distant denizens of our solar system. The scientists have imaged the sky for two five-night stretches every year since November 2012.

Every night, the DECam’s sensitive 570-megapixel eye captures images of an area of sky totaling about 200 to 250 times the area of the full moon, returning to each field of view three times. Sheppard and Trujillo run the images from each night through software that tags everything that moves.

“We have to verify everything by eye,” Sheppard says. So they look through about 60 images a night, or 300 total from a perfect five-night observing run, a process that gives them a few dozen objects to study at Carnegie’s Magellan Telescope.

The scientists want to find worlds beyond Pluto and its brethren in the Kuiper Belt, the region that lies some 30 to 50 astronomical units from the sun (compared to the Earth’s 1). On their first observing run, they caught one.

This new world, with the catalog name of 2012 VP113, comes as close as 80 astronomical units from the sun and journeys as far as 450. Along with Sedna, a minor planet discovered a decade ago, it is one of just two objects found in what was once thought of as a complete no man’s land.

Sheppard and Trujillo also have discovered another dwarf planet that is one of the top 10 brightest objects beyond Neptune, a new comet, and an asteroid that occasionally sprouts an unexpected tail of dust.

#### Mythical creatures

Northern Arizona University’s David Trilling and colleagues used the DECam for three nights in 2014 to look for “centaurs”—so called because they have characteristics of both asteroids and comets. Astronomers believe centaurs could be lost Kuiper Belt objects that now lie between Jupiter and Neptune.

Trilling’s team expects to find about 50 centaurs in a wide range of sizes. Because centaurs are nearer to the sun than Kuiper Belt objects, they are brighter and thus easier to observe. The scientists hope to learn more about the size distribution of Kuiper Belt objects by studying the sizes of centaurs. The group recently completed its observations and plan to report them later in 2015.

#### Next-door neighbors

Lori Allen of the National Optical Astronomy Observatory outside Tucson, Arizona, and her colleagues are looking for objects closer than 1.3 astronomical units from the sun. These near-Earth objects have orbits that can cross Earth’s—creating the potential for collision.

Allen’s team specializes in some of the least-studied NEOs: ones smaller than 50 meters across.

Even small NEOs can be destructive, as demonstrated by the February 2013 NEO that exploded above Chelyabinsk, Russia. The space rock was just 20 meters wide, but the shockwave from its blast shattered windows, injuring more than 1,000 people.

In 2014, Allen’s team used the DECam for 10 nights. They have 20 more nights to use in 2015 and 2016.

They have yet to release specific findings from the survey’s first year, but the researchers say they have a handle on the distribution of NEOs down to just 10 meters wide. They also expect to discover about 100 NEOs the size of the one that exploded above Chelyabinsk.

#### Space waste

Most surveys looking for “space junk”—inactive satellites, parts of spacecraft and the like in orbit around the Earth—can see only pieces larger than about 20 centimeters. But there’s a lot more material out there.

How much is a question Patrick Seitzer of the University of Michigan and colleagues hope to answer. They used DECam to hunt for debris smaller than 10 centimeters, or the size of a smartphone, in geosynchronous orbit.

The astronomers need to capture at least four images of each piece of debris to determine its position, motion and brightness. This can tell them about the risk from small debris to satellites in geosynchronous orbit. Their results are scheduled for release in mid-2015.

Like what you see? Sign up for a free subscription to symmetry!

## January 21, 2015

### Lubos Motl - string vacua and pheno

A new paper connecting heterotic strings with an LHC anomaly
Is the LHC going to experimentally support details of string theory in a few months?

Just one week ago, I discussed a paper that has presented a model capable of explaining three approximately 2.5-sigma anomalies seen by the LHC, including the $$\tau\mu$$ decay of the Higgs boson $$h$$, by using a doubled Higgs sector along with the gauged $$L_\mu-L_\tau$$ symmetry.

I have mentioned a speculative addition of mine: those gauge groups could somewhat naturally appear in $$E_8\times E_8$$ heterotic string models, my still preferred class of string/M-theory compactifications to describe the Universe around us.

Today, there is a new paper
Explaining the CMS $$eejj$$ and $$e /\!\!\!\!{p}_T jj$$ Excess and Leptogenesis in Superstring Inspired $$E_6$$ Models
by Dhuria and three more Indian co-authors that apparently connects an emerging, so far small and inconclusive, experimental anomaly at the LHC with heterotic strings.

The authors consider superstring-inspired models with an $$E_6$$ group and supersymmetry whose R-parity is unbroken. And the anomaly they are able to explain is the 2.8-sigma CMS excess that I wrote about in July 2014 and that was attributed to a $$2.1\TeV$$ right-handed $$W^\pm_R$$-boson.

The new Indian paper shows that it is rather natural to explain the anomaly in terms of the heterotic models with gauge groups broken as

$$E_8\times E'_8 \to E_6\times SU(3)\times E'_8$$

but they are careful about identifying the precise new particles that create the excess. In fact, it seems that the right-handed gauge bosons are not ideal to play the role. They would lead to problems with baryogenesis: all the baryon asymmetry would disappear because $$B-L$$ and $$B+L$$ are violated, either at low energies or intensely at the electroweak scale. So this theory would apparently predict that all matter annihilates against the antimatter.

Instead of the right-handed gauge bosons, they promote new exotic sleptons that result from the breaking of $$E_6$$ down to a cutely symmetric maximal subgroup

$$E_6\to SU(3)_C \times SU(3)_L \times SU(3)_R$$

under which the fundamental representation decomposes as

$${\bf 27} = ({\bf 3}, {\bf 3}, {\bf 1}) \oplus ({\bf \bar 3}, {\bf 1}, {\bf \bar 3}) \oplus ({\bf 1}, {\bf \bar 3}, {\bf 3})$$

which should look beautiful to all devout Catholics who love the Holy Trinity. The three $$SU(3)$$ factors represent the QCD color, the left-handed extension of the electroweak $$SU(2)_W$$, and its right-handed partner.
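A trivial bit of bookkeeping confirms that the decomposition accounts for all 27 states (conjugate representations have the same dimension):

```python
# Dimensions of (3,3,1) + (3bar,1,3bar) + (1,3bar,3) under SU(3) x SU(3) x SU(3)
pieces = [(3, 3, 1), (3, 1, 3), (1, 3, 3)]
total = sum(a * b * c for (a, b, c) in pieces)
print(total)   # 27, the dimension of the E6 fundamental representation
```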

There are lots of additional technical features that you may want to study in the 8-page-long paper. But I want to emphasize some big-picture, emotional message. And it is the following.

The superpartners have been considered the most likely new particles that may emerge in particle physics experiments. They have the best motivation – the supersymmetric solution to the hierarchy problem (the lightness of the Higgs boson) – to appear at low energies. On the other hand, it's "sensible" to assume that all other new particles, e.g. those linked to grand unification or extra dimensions, are tied to very high energies and therefore unobservable in the near future.

But this expectation isn't rock-solid. In fact, just like the Standard Model fermions are light, there may be additional particles that naturally result from GUT or string theory model building that are light and accessible to the LHC, too. One could expect that "it is likely" that the gauge coupling unification miracle from minimal SUSY GUT ceases to work. But it may work, perhaps with some fixes, and although the fixes are disadvantages, the models may have some advantages that are even more irresistible than the gauge coupling unification.

The possibility that some other, non-SUSY aspects of string models will be found first is here and it is unbelievably attractive, indeed. I would bet that this particular ambitious scenario is "less likely than yes/not" (or whatever is the opposite to "more likely than not" LOL) but the probability isn't zero.

A lighter topic: intestines and thumbs on feet

By Don Lincoln. ;-)

### arXiv blog

How the Next Generation of Botnets Will Exploit Anonymous Networks, and How to Beat Them

Computer scientists are already devising strategies for neutralizing the next generation of malicious botnets.

### Quantum Diaries

How to build your own particle detector

Make a cloud chamber and watch fundamental particles zip through your living room! Image: Sandbox Studio, Chicago

The scale of the detectors at the Large Hadron Collider is almost incomprehensible: They weigh thousands of tons, contain millions of detecting elements and support a research program for an international community of thousands of scientists.

But particle detectors aren’t always so complicated. In fact, some particle detectors are so simple that you can make (and operate) them in your own home.

The Continuously Sensitive Diffusion Cloud Chamber is one such detector. Originally developed at UC Berkeley in 1938, this type of detector uses evaporated alcohol to make a ‘cloud’ that is extremely sensitive to passing particles.

Cosmic rays are particles that are constantly crashing into the Earth from space. When they hit Earth’s atmosphere, they release a shower of less massive particles, many of which invisibly rain down on us.

When a cosmic ray zips through a cloud, it creates ghostly particle tracks that are visible to the naked eye.

Building a cloud chamber is easy and requires only a few simple materials and steps:

#### Materials:

• Clear plastic or glass tub (such as a fish tank) with a solid lid (plastic or metal)
• Felt
• Isopropyl alcohol (90% or more. You can find this at a pharmacy or special order from a chemical supply company. Wear safety goggles when handling the alcohol.)
• Dry ice (frozen carbon dioxide. Often used at fish markets and grocery stores to keep products cool. Wear thick gloves when handling the dry ice.)

#### Steps:

1. Cut the felt so that it is the size of the bottom of the fish tank. Glue it down inside the tank (on the bottom where the sand and fake treasure chests would normally go).
2. Once the felt is secured, soak it in the isopropyl alcohol until it is saturated. Drain off any excess alcohol.
3. Place the lid on top of dry ice so that it lies flat. You might want to have the dry ice in a container or box so that it is more stable.
4. Flip the tank upside down, so that the felt-covered bottom of the tank is on top, and place the mouth of the tank on top of the lid.
5. Wait about 10 minutes… then turn off the lights and shine a flashlight into your tank.
Artwork by: Sandbox Studio, Chicago

#### What is happening inside your cloud chamber?

The alcohol absorbed by the felt is at room temperature and is slowly evaporating into the air. But as the evaporated alcohol sinks toward the dry ice, it cools down and wants to turn back into a liquid.

The air near the bottom of the tank is now supersaturated, which means that it is just below its atmospheric dew point. And just as water molecules cling to blades of grass on cool autumn mornings, the atmospheric alcohol will form cloud-like droplets on anything it can cling to.

#### Particles, coming through!

When a particle zips through your cloud chamber, it bumps into atmospheric molecules and knocks off some of their electrons, turning the molecules into charged ions. The atmospheric alcohol is attracted to these ions and clings to them, forming tiny droplets.

The resulting tracks look like the contrails of an airplane: long, spindly lines marking the particle’s path through your cloud chamber.

#### What can you tell from your tracks?

Many different types of particles might pass through your cloud chamber. It might be hard to see, but you can actually differentiate between the types of particles based on the tracks they leave behind.

#### Short, fat tracks

Sorry—not a cosmic ray. When you see short, fat tracks, you’re seeing an atmospheric radon atom spitting out an alpha particle (a clump of two protons and two neutrons). Radon is a naturally occurring radioactive element, but it exists in such low concentrations in the air that it is less radioactive than peanut butter. Alpha particles spat out of radon atoms are bulky and low-energy, so they leave short, fat tracks.

#### Long, straight track

Congratulations! You’ve got muons! Muons are the heavier cousins of the electron and are produced when a cosmic ray bumps into an atmospheric molecule high up in the atmosphere. Because they are so massive, muons bludgeon their way through the air and leave clean, straight tracks.

#### Zig-zags and curly-cues

If your track looks like the path of a lost tourist in a foreign city, you’re looking at an electron or positron (the electron’s anti-matter twin). Electrons and positrons are created when a cosmic ray crashes into atmospheric molecules. Electrons and positrons are light particles and bounce around when they hit air molecules, leaving zig-zags and curly-cues.

#### Forked tracks

If your track splits, congratulations! You just saw a particle decay. Many particles are unstable and will decay into more stable particles. If your track suddenly forks, you are seeing physics in action!
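The four track types above amount to a simple lookup table. As a playful sketch (not part of the original article, and with made-up shape labels), here it is in Python:

```python
# Map track descriptions to the particle candidates described above.
TRACK_GUIDE = {
    "short and fat": "alpha particle from radon decay",
    "long and straight": "muon",
    "zig-zag or curly": "electron or positron",
    "forked": "a particle decaying mid-flight",
}

def identify(shape):
    """Look up a track shape; anything unrecognized gets a shrug."""
    return TRACK_GUIDE.get(shape, "unknown - take another look!")

assert identify("long and straight") == "muon"
```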

Sarah Charley

### ZapperZ - Physics and Physicists

GUTs and TOEs
Another informative video, for the general public, from Don Lincoln and Fermilab.

Of course, if you had read my take on the so-called "Theory of Everything", you would know my stand on this when we consider emergent phenomena.

Zz.

### Quantum Diaries

Lepton Number Violation, Doubly Charged Higgs Bosons, and Vector Boson Fusion at the LHC

Doubly charged Higgs bosons and lepton number violation are wickedly cool.

Hi Folks,

The Standard Model (SM) of particle physics is presently the best description of matter and its interactions at small distances and high energies. It is constructed based on observed conservation laws of nature. However, not all conservation laws found in the SM are intentional: lepton number conservation, for example. New physics models, such as those that introduce singly and doubly charged Higgs bosons, are flexible enough to reproduce previously observed data but can either conserve or violate these accidental conservation laws. Therefore, some of the best ways of testing whether these laws are truly fundamental may be with the help of new physics.

## Observed Conservation Laws of Nature and the Standard Model

Conservation laws, like the conservation of energy or the conservation of linear momentum, have the most remarkable impact on life and the universe. Conservation of energy, for example, tells us that cars need fuel to operate and perpetual motion machines can never exist. A football sailing across a pitch does not suddenly jerk 90º to the left, because of conservation of linear momentum, unless acted upon by a player (a force). This is Newton’s First Law of Motion. In particle physics, conservation laws are not taken lightly; they dictate how particles are allowed to behave and forbid some processes from occurring. To see this in action, let’s consider a top quark (t) decaying into a W boson and a bottom quark (b).


A top quark cannot radiate a W+ boson and remain a top quark because of conservation of electric charge. Top quarks have an electric charge of +2/3 e, whereas W+ bosons have an electric charge of +1e, and we know quite well that

(+2/3)e ≠ (+1)e + (+2/3)e.

For reference, a proton has an electric charge of +1e and an electron has an electric charge of -1e. However, a top quark can radiate a W+ boson and become a bottom quark, which has an electric charge of -1/3 e. Since

(+2/3)e = (+1)e + (-1/3)e,

we see that electric charge is conserved.
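This kind of charge bookkeeping is easy to automate. A small illustrative Python sketch (not from the original post) that checks the two decays above using exact fractions:

```python
from fractions import Fraction

# Electric charges in units of e (Standard Model values)
charge = {"t": Fraction(2, 3), "b": Fraction(-1, 3), "W+": Fraction(1)}

def conserves_charge(initial, final):
    """True if total electric charge is the same before and after."""
    return sum(charge[p] for p in initial) == sum(charge[p] for p in final)

assert not conserves_charge(["t"], ["W+", "t"])   # forbidden: charge does not balance
assert conserves_charge(["t"], ["W+", "b"])       # allowed: t -> W+ b
```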

Conservation of energy, angular momentum, electric charge, etc., are so well-established that the SM is constructed to automatically obey these laws. If we pick any mathematical term in the SM that describes how two or more particles interact (for example how the top quark, bottom quark, and W boson interact with each other) and then add up the electric charge of all the participating particles, we will find that the total electric charge is zero:

The top quark-bottom quark-W boson interaction terms in the Standard Model. Bars above quarks indicate that the quark is an antiparticle and has opposite charges.

## Accidental Conservation Laws

However, not all conservation laws that appear in the SM are intentional. Conservation of lepton number is an example of this. A lepton is any SM fermion that does not interact with the strong nuclear force. There are six leptons in total: the electron, muon, tau, electron-neutrino, muon-neutrino, and tau-neutrino. We assign lepton number

L=1 to all leptons (electron, muon, tau, and all three neutrinos),

L=-1 to all antileptons (positron, antimuon, antitau, and all three antineutrinos),

L=0 to all other particles.

With these quantum number assignments, we see that lepton number is conserved in the SM. To clarify this important point: we get lepton number conservation for free due to our very rigid requirements when constructing the SM, namely the correct conservation laws (e.g., electric and color charge) and particle content. Since lepton number conservation was not intentional, we say that lepton number is accidentally conserved. Just as we counted the electric charge for the top-bottom-W interaction, we can count the net lepton number for the electron-neutrino-W interaction in the SM and see that lepton number really is zero:

The W boson-neutrino-electron interaction terms in the Standard Model. Bars above leptons indicate that the lepton is an antiparticle and has opposite charges.

However, lepton number conservation is not required to explain data. At no point in constructing the SM did we require that it be conserved. Because of this, many physicists question whether lepton number is actually conserved. It may be, but we do not know. This is indeed one topic that is actively researched. An interesting example of a scenario in which lepton number conservation could be tested is the class of theories with singly and doubly charged Higgs bosons. That is right: there are theories containing additional Higgs bosons that carry an electric charge equal to or double that of the proton.

Models with scalar SU(2) triplets contain additional neutral Higgs bosons as well as singly and doubly charged Higgs bosons.

Doubly charged Higgs bosons have an electric charge twice that of a proton (2e), which leads to rather peculiar properties. As discussed above, every interaction between two or more particles must respect the SM conservation laws, such as conservation of electric charge. Because of this, a doubly charged Higgs (+2e) cannot decay into a top quark (+2/3 e) and an antibottom quark (+1/3 e),

(+2)e ≠ (+2/3)e + (+1/3)e.

However, a doubly charged Higgs (+2e) can decay into two W bosons (+1e) or two antileptons (+1e) with the same electric charge,

(+2)e = (+1)e + (+1)e.

But that is it. A doubly charged Higgs boson cannot decay into any other pair of SM particles because it would violate electric charge conservation. For these two types of interactions, we can also check whether or not lepton number is conserved:

For the decay into same-sign W boson pairs, the total lepton number is 0L + 0L + 0L = 0L. In this case, lepton number is conserved!

For the decay into same-sign lepton pairs, the total lepton number is 0L + (-1)L + (-1)L = -2L. In this case, lepton number is violated!

Doubly charged Higgs boson interactions for same-sign W boson pairs and same-sign electron pairs. Bars indicate antiparticles. C’s indicate charge flipping.

Therefore, if we observe a doubly charged Higgs boson decaying into a pair of same-sign leptons, then we have evidence that lepton number is violated. If we only observe doubly charged Higgs bosons decaying into same-sign W bosons, then one may speculate that lepton number is conserved in the SM.
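The same bookkeeping extends to the doubly charged Higgs decay channels discussed above. A hedged Python sketch (the particle labels and the `deltas` helper are my own illustration, not anything from the post) that tracks both electric charge and lepton number:

```python
from fractions import Fraction

# (electric charge in units of e, lepton number); labels are illustrative
particles = {
    "H++":  (Fraction(2), 0),      # doubly charged Higgs
    "W+":   (Fraction(1), 0),
    "e+":   (Fraction(1), -1),     # positron: an antilepton, L = -1
    "t":    (Fraction(2, 3), 0),   # top quark
    "bbar": (Fraction(1, 3), 0),   # antibottom quark
}

def deltas(initial, final):
    """Change in (charge, lepton number) going from initial to final state."""
    dq = sum(particles[p][0] for p in final) - sum(particles[p][0] for p in initial)
    dL = sum(particles[p][1] for p in final) - sum(particles[p][1] for p in initial)
    return dq, dL

assert deltas(["H++"], ["W+", "W+"]) == (0, 0)    # allowed, L conserved
assert deltas(["H++"], ["e+", "e+"]) == (0, -2)   # allowed only if L is violated
assert deltas(["H++"], ["t", "bbar"])[0] != 0     # forbidden by charge conservation
```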

## Doubly Charged Higgs Factories

Doubly charged Higgs bosons do not interact with quarks (otherwise they would violate electric charge conservation), so we have to rely on vector boson fusion (VBF) to produce them. VBF occurs when two bosons, each radiated from an oncoming quark, scatter off each other, as seen in the diagram below.

Diagram depicting the process known as WW Scattering, where two quarks from two protons each radiate a W boson that then elastically interact with one another.

If two down quarks, one from each oncoming proton, radiate a W- boson (-1e) and become up quarks, the two W- bosons can fuse into a negatively, doubly charged Higgs (-2e). If lepton number is violated, the Higgs boson can decay into a pair of same-sign electrons (2x -1e). Counting lepton number at the beginning of the process (L = 0 – 0 = 0) and at the end (L = 0 – 2 = -2!), we see that it changes by two units!

Same-sign W- pairs fusing into a doubly charged Higgs boson that decays into same-sign electrons.

If lepton number is not violated, we will never see this decay and will only see decays to two very, very energetic W- bosons (-1e). Searches for vector boson fusion as well as for lepton number violation are important components of the overarching Large Hadron Collider (LHC) research program at CERN. Unfortunately, there is no evidence for the existence of doubly charged scalars. On the other hand, we do have evidence for vector boson scattering (VBS) of same-sign W bosons! Additional plots can be found on ATLAS’ website. Reaching this tremendous milestone is a triumph for the LHC experiments. Vector boson fusion is a very, very, very, very, very rare process in the Standard Model and difficult to separate from other SM processes. Finding evidence for it is a first step in using the VBF process as a probe of new physics.

Same-sign W boson scattering candidate event at the LHC ATLAS experiment. Slide credit: Junjie Zhu (Michigan)

We have observed that some quantities, like momentum and electric charge, are conserved in nature. Conservation laws are few and far between, but they are powerful. The modern framework of particle physics has these laws built into it, but it has also been found to accidentally conserve other quantities, like lepton number. However, as lepton number conservation is not required to reproduce data, it may be the case that these accidentally conserved quantities are not, in fact, conserved. Theories that introduce charged Higgs bosons can reproduce data but also predict new interactions, such as doubly charged Higgs bosons decaying to same-sign W boson pairs and, if lepton number is violated, to same-sign charged lepton pairs. These new, exotic particles can be produced through vector boson fusion of same-sign W boson pairs. VBF is a rare process in the SM, and its rate can greatly increase if new particles exist. At last, there is evidence for vector boson scattering of same-sign W bosons, which may be the next step to discovering new particles and new laws of nature!

Happy Colliding

- Richard (@BraveLittleMuon)

### Clifford V. Johnson - Asymptotia

Flowers of the Sky
Here is a page of a lovely set of (public domain) images of comets and meteors, as depicted in various ways through the centuries. The above sample is from the famous [...] Click to continue reading this post

### Tommaso Dorigo - Scientificblogging

One Year In Pictures
A periodic backup of my mobile phone yesterday - mainly pictures and videos - was the occasion to give a look back at things I did and places I visited in 2014, for business and leisure. I thought it would be fun to share some of those pictures with you, with sparse comments. I know, Facebook does this for you automatically, but what does Facebook know of what is meaningful and what isn't ? So here we go.
The first pic was taken at Beaubourg, in Paris - it is a sculpture I absolutely love: "The king plays with the queen" by Max Ernst.

Still in Paris (for a vacation at the beginning of January), the grandiose interior of the Opera de Paris...

## January 20, 2015

### Jester - Resonaances

Planck: what's new
Slides from the recent Planck collaboration meeting are now available online. One can find there preliminary results that include input from Planck's measurements of the polarization of the Cosmic Microwave Background (some of which were previously available via the legendary press release in French). I already wrote about the new important limits on the dark matter annihilation cross section. Here I picked out a few more things that may be of interest for a garden-variety particle physicist.

• ΛCDM.
Here is a summary of Planck's best fit parameters of the standard cosmological model with and without the polarization info:

Note that the temperature-only numbers are slightly different than in the 2013 release, because of improved calibration and foreground cleaning.  Frustratingly, ΛCDM remains  solid. The polarization data do not change the overall picture, but they shrink some errors considerably. The Hubble parameter remains at a low value; the previous tension with Ia supernovae observations seems to be partly resolved and blamed on systematics on the supernovae side.  For the large scale structure fans, the parameter σ8 characterizing matter fluctuations today remains at a high value, in some tension with weak lensing and cluster counts.
• Neff.
There are also better limits on deviations from ΛCDM. One interesting result is the new improved constraint on the effective number of neutrinos, Neff in short. The way this result is presented may be confusing. We know perfectly well there are exactly 3 light active (interacting via the weak force) neutrinos; this was established in the 90s at the LEP collider, and Planck has little to add in this respect. Heavy neutrinos, whether active or sterile, would not show up in this measurement at all. For light sterile neutrinos, Neff implies an upper bound on the mixing angle with the active ones. The real importance of Neff lies in that it counts any light particles (other than photons) contributing to the energy density of the universe at the time of CMB decoupling. Apart from the standard model neutrinos, other theorized particles could contribute any real positive number to Neff, depending on their temperature and spin. A few years ago there were consistent hints of Neff much larger than 3, which would imply physics beyond the standard model. Alas, Planck has shot down these claims. The latest number combining Planck and Baryon Acoustic Oscillations is Neff = 3.04±0.18, spot on the 3.046 expected from the standard model neutrinos. This represents an important constraint on any new physics model with very light (less than an eV) particles.
• Σmν.
The limit on the sum of the neutrino masses keeps improving and is getting into a really interesting regime. Recall that, from oscillation experiments, we can extract the neutrino mass differences Δm32 ≈ 0.05 eV and Δm12 ≈ 0.009 eV up to a sign, but we don't know the absolute masses. Planck and others have already excluded the possibility that all 3 neutrinos have approximately the same mass. Now they are not far from probing the so-called inverted hierarchy, where two neutrinos have approximately the same mass and the 3rd is much lighter, in which case Σmν ≈ 0.1 eV. Planck and Baryon Acoustic Oscillations set the limit Σmν < 0.16 eV at 95% CL; however, this result is not strongly advertised because it is sensitive to the value of the Hubble parameter. Including non-Planck measurements leads to a weaker, more conservative limit Σmν < 0.23 eV, the same as quoted in the 2013 release.
• CνB.
For dessert, something cool. So far we could observe the cosmic neutrino background only through its contribution to the energy density of radiation in the early universe. This affects observables that can be inferred from the CMB acoustic peaks, such as the Hubble expansion rate or the time of matter-radiation equality. Planck, for the first time, probes the properties of the CνB. Namely, it measures the effective sound speed ceff and viscosity cvis parameters, which affect the growth of perturbations in the CνB. Free-streaming particles like the neutrinos should have ceff^2 = cvis^2 = 1/3, while Planck measures ceff^2 = 0.3256±0.0063 and cvis^2 = 0.336±0.039. The result is unsurprising, but it may help constrain some more exotic models of neutrino interactions.

To summarize, Planck continues to deliver disappointing results, and there's still more to follow ;)
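As a back-of-the-envelope check of the Σmν numbers in the neutrino-mass item above, here is a small Python sketch (my own illustration, treating the quoted splittings as mass differences in eV):

```python
# Quoted splittings, up to sign: |Δm32| ~ 0.05 eV, |Δm12| ~ 0.009 eV
dm32, dm21 = 0.05, 0.009

# Normal hierarchy: m1 ~ 0, m2 ~ dm21, m3 ~ dm32
sum_normal = 0.0 + dm21 + dm32

# Inverted hierarchy: m3 ~ 0, m1 ~ m2 ~ dm32, giving the ~0.1 eV quoted above
sum_inverted = dm32 + dm32 + 0.0

print(f"normal: {sum_normal:.3f} eV, inverted: {sum_inverted:.3f} eV")
assert sum_inverted > sum_normal   # the inverted ordering is probed first
```

The Planck+BAO bound Σmν < 0.16 eV is indeed within striking distance of the inverted-hierarchy value.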

### The n-Category Cafe

The Univalent Perspective on Classifying Spaces

I feel like I should apologize for not being more active at the Cafe recently. I’ve been busy, of course, and also most of my recent blog posts have been going to the HoTT blog, since I felt most of them were of interest only to the HoTT crowd (by which I mean, “people interested enough in HoTT to follow the HoTT blog” — which may of course include many Cafe readers as well). But today’s post, while also inspired by HoTT, is less technical and (I hope) of interest even to “classical” higher category theorists.

In general, a classifying space for bundles of $X$’s is a space $B$ such that maps $Y\to B$ are equivalent to bundles of $X$’s over $Y$. In classical algebraic topology, such spaces are generally constructed as the geometric realization of the nerve of a category of $X$’s, and as such they may be hard to visualize geometrically. However, it’s generally useful to think of $B$ as a space whose points are $X$’s, so that the classifying map $Y\to B$ of a bundle of $X$’s assigns to each $y\in Y$ the corresponding fiber (which is an $X$). For instance, the classifying space $BO$ of vector bundles can be thought of as a space whose points are vector spaces, where the classifying map of a vector bundle assigns to each point the fiber over that point (which is a vector space).

In classical algebraic topology, this point of view can’t be taken quite literally, although we can make some use of it by identifying a classifying space with its representable functor. For instance, if we want to define a map $f:BO\to BO$, we’d like to say “a point $v\in BO$ is a vector space, so let’s do blah to it and get another vector space $f(v)\in BO$.” We can’t do that, but we can do the next best thing: if blah is something that can be done fiberwise to a vector bundle in a natural way, then since $\mathrm{Hom}(Y,BO)$ is naturally equivalent to the collection of vector bundles over $Y$, our blah defines a natural transformation $\mathrm{Hom}(-,BO)\to \mathrm{Hom}(-,BO)$, and hence a map $f:BO\to BO$ by the Yoneda lemma.

However, in higher category theory and homotopy type theory, we can really take this perspective literally. That is, if by “space” we choose to mean “$\infty$-groupoid” rather than “topological space up to homotopy”, then we can really define the classifying space to be the $\infty$-groupoid of $X$’s, whose points (objects) are $X$’s, whose morphisms are equivalences between $X$’s, and so on. Now, in defining a map such as our $f$, we can actually just give a map from $X$’s to $X$’s, as long as we check that it’s functorial on equivalences — and if we’re working in HoTT, we don’t even have to do the second part, since everything we can write down in HoTT is automatically functorial/natural.

This gives a different perspective on some classifying-space constructions that can be more illuminating than a classical one. Below the fold I’ll discuss some examples that have come to my attention recently.

All of these examples have to do with the classifying space of “types equivalent to $X$” for some fixed $X$. Such a classifying space, often denoted $B\mathrm{Aut}(X)$, has the property that maps $Y\to B\mathrm{Aut}(X)$ are equivalent to maps (perhaps “fibrations” or “bundles”) $Z\to Y$ all of whose fibers are equivalent (a homotopy type theorist might say “merely equivalent”) to $X$. The notation $B\mathrm{Aut}(X)$ accords with the classical notation $BG$ for the delooping of a (perhaps $\infty$-) group: in fact this is a delooping of the group of automorphisms of $X$.

Categorically (and homotopy-type-theoretically), we simply define $B\mathrm{Aut}(X)$ to be the full sub-$\infty$-groupoid of $\infty\mathrm{Gpd}$ (the $\infty$-groupoid of $\infty$-groupoids) whose objects are those equivalent to $X$. You might have thought I was going to say the full sub-$\infty$-groupoid on the single object $X$, and that would indeed give us an equivalent result, but the examples I’m about to discuss really do rely on having all the other equivalent objects in there. In particular, note that an arbitrary object of $B\mathrm{Aut}(X)$ is an $\infty$-groupoid that admits some equivalence to $X$, but no such equivalence has been specified.

### Example 1: $B\mathrm{Aut}(2)$

As the first example, let $X = 2 = \{0,1\}$, the standard discrete space with two points. Then $\mathrm{Aut}(2) = C_2$, the cyclic group on 2 elements, and so $B\mathrm{Aut}(2) = BC_2 = K(C_2,1)$. Since $C_2$ is an abelian group, $BC_2$ again has a (2-)group structure, i.e. we should have a multiplication operation $BC_2 \times BC_2 \to BC_2$, an identity, inversion, etc.

Using the equivalence $BC_2 \simeq B\mathrm{Aut}(2)$, we can describe all of these operations directly. A point $Z\in B\mathrm{Aut}(2)$ is a space that’s equivalent to $2$, but without a specified equivalence. Thus, $Z$ is a set with two elements, but we haven’t chosen either of those elements to call “$0$” or “$1$”. As long as we perform constructions on $Z$ without making such an unnatural choice, we’ll get maps that act on $B\mathrm{Aut}(2)$ and hence $BC_2$ as well.

The identity element of $B\mathrm{Aut}(2)$ is fairly obvious: there’s only one canonical element of $B\mathrm{Aut}(2)$, namely $2$ itself. The multiplication is not as obvious, and there may be more than one way to do it, but after messing around with it a bit you may come to the same conclusion I did: the product of $Z,W\in B\mathrm{Aut}(2)$ should be $\mathrm{Iso}(Z,W)$, the set of isomorphisms between $Z$ and $W$. Note that when $Z$ and $W$ are 2-element sets, so is $\mathrm{Iso}(Z,W)$, but in general there’s no way to distinguish either of those isomorphisms from the other one, nor is $\mathrm{Iso}(Z,W)$ naturally isomorphic to $Z$ or $W$. It is, however, obviously commutative: $\mathrm{Iso}(Z,W)\cong \mathrm{Iso}(W,Z)$.

Moreover, if $Z=2$ is the identity element, then $\mathrm{Iso}(2,W)$ is naturally isomorphic to $W$: we can define $\mathrm{Iso}(2,W)\to W$ by evaluating at $0\in 2$. Similarly, $\mathrm{Iso}(Z,2)\cong Z$, so our “identity element” has the desired property.

Furthermore, if $Z=W$, then $\mathrm{Iso}(Z,Z)$ does have a distinguished element, namely the identity. Thus, it is naturally equivalent to $2$ by sending the identity to $0\in 2$. So every element of $B\mathrm{Aut}(2)$ is its own inverse. The trickiest part is proving that this operation is associative. I’ll leave that to the reader (or you can try to decipher my Coq code).

(We did have to make some choices about whether to use $0\in 2$ or $1\in 2$. I expect that as long as we make those choices consistently, making them differently will result in equivalent 2-groups.)
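For finite sets one can play with this multiplication quite concretely. Here is a small Python sketch (my own illustration, nothing to do with the Coq development) that enumerates $\mathrm{Iso}(Z,W)$ for 2-element sets and checks the identity law by evaluation at $0$:

```python
from itertools import permutations

def isos(Z, W):
    """All bijections between two finite sets, encoded as tuples of pairs."""
    Zs, Ws = sorted(Z), sorted(W)
    return {tuple(zip(Zs, p)) for p in permutations(Ws)}

Z = {"a", "b"}
W = {"x", "y"}
product = isos(Z, W)       # the proposed product Iso(Z, W): again a 2-element set
assert len(product) == 2

# Identity law: evaluation at 0 gives a bijection Iso(2, W) -> W
two = {0, 1}
assert {dict(f)[0] for f in isos(two, W)} == W
```

Neither element of `product` is distinguished, matching the point that $\mathrm{Iso}(Z,W)$ carries no natural isomorphism to $Z$ or $W$.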

### Example 2: An incoherent idempotent

In 1-category theory, an idempotent is a map $f:A\to A$ such that $f\circ f = f$. In higher category theory, the equality $f\circ f = f$ must be weakened to an isomorphism or equivalence, and then treated as extra data on which we ought to ask for additional axioms, such as that the two induced equivalences $f\circ f\circ f \simeq f$ coincide (up to an equivalence, of course, which then satisfies its own higher laws, etc.).

A natural question is whether, if we have only an equivalence $f\circ f \simeq f$, it can be “improved” to a “fully coherent” idempotent in this sense. Jacob Lurie gave the following counterexample in Warning 1.2.4.8 of Higher Algebra:

let $G$ denote the group of homeomorphisms of the unit interval $[0,1]$ which fix the endpoints (which we regard as a discrete group), and let $\lambda : G\to G$ denote the group homomorphism given by the formula

$$\lambda(g)(t) = \begin{cases} \frac{1}{2}\,g(2t) & \quad \text{if } 0\le t\le \frac{1}{2}\\ t & \quad \text{if } \frac{1}{2}\le t\le 1.\end{cases}$$

Choose an element $h\in G$ such that $h(t)=2t$ for $0\le t\le \frac{1}{4}$. Then $\lambda(g)\circ h = h\circ \lambda(\lambda(g))$ for each $g\in G$, so that the group homomorphisms $\lambda,\lambda^2 : G\to G$ are conjugate to one another. It follows that the induced map of classifying spaces $e:BG\to BG$ is homotopic to $e^2$, and therefore idempotent in the homotopy category of spaces. However… $e$ cannot be lifted to a [coherent] idempotent in the $\infty$-category of spaces.

Let’s describe this map $e$ in the more direct way I suggested above. Actually, let’s do something easier and just as good: let’s replace $[0,1]$ by Cantor space $2^{\mathbb{N}}$. It’s reasonable to guess that this should work, since the essential property of $[0,1]$ being used in the above construction is that it can be decomposed into two pieces (namely $[0,\frac{1}{2}]$ and $[\frac{1}{2},1]$) which are both equivalent to itself, and $2^{\mathbb{N}}$ has this property as well:

$$2^{\mathbb{N}} \cong 2^{\mathbb{N}+1} \cong 2^{\mathbb{N}} \times 2^1 \cong 2^{\mathbb{N}} + 2^{\mathbb{N}}.$$

Moreover, $2^{\mathbb{N}}$ has the advantage that this decomposition is disjoint, i.e. a coproduct. Thus, we can also get rid of the assumption that our automorphisms preserve endpoints, which was just there in order to allow us to glue two different automorphisms on the two copies in the decomposition.
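One can make the decomposition of Cantor space concrete by modeling a point of $2^{\mathbb{N}}$ as a function from naturals to $\{0,1\}$; the following Python sketch (my own illustration) implements the splitting-off of the first bit and its inverse:

```python
# Model a point of Cantor space 2^N as a function from naturals to {0, 1}.

def split(s):
    """The equivalence 2^N -> 2 x 2^N: peel off the first bit."""
    return s(0), (lambda n: s(n + 1))

def merge(bit, tail):
    """Its inverse 2 x 2^N -> 2^N: prepend a bit."""
    return lambda n: bit if n == 0 else tail(n - 1)

s = lambda n: n % 2        # the sequence 0, 1, 0, 1, ...
bit, tail = split(s)
roundtrip = merge(bit, tail)
assert all(s(n) == roundtrip(n) for n in range(10))
```

The two branches of `merge` are exactly the two summands in $2^{\mathbb{N}} \cong 2^{\mathbb{N}} + 2^{\mathbb{N}}$, with no gluing condition needed because the decomposition is disjoint.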

Therefore, our goal is now to construct an endomap of $B\mathrm{Aut}(2^{\mathbb{N}})$ which is incoherently, but not coherently, idempotent. As discussed above, the elements of $B\mathrm{Aut}(2^{\mathbb{N}})$ are spaces that are equivalent to $2^{\mathbb{N}}$, but without any such specified equivalence. Looking at the definition of Lurie’s $\lambda$, we can see that intuitively, what it does is shrink the interval to half of itself, acting functorially, and add a new copy of the interval at the end. Thus, it’s reasonable to define $e:B\mathrm{Aut}(2^{\mathbb{N}}) \to B\mathrm{Aut}(2^{\mathbb{N}})$ by

$$e(Z) = Z + 2^{\mathbb{N}}.$$

Here $Z$ is some space equivalent to $2^{\mathbb{N}}$, and in order for this map to be well-defined, we need to show that if $Z$ is equivalent to $2^{\mathbb{N}}$, then so is $Z + 2^{\mathbb{N}}$. However, the decomposition $2^{\mathbb{N}} \cong 2^{\mathbb{N}} + 2^{\mathbb{N}}$ ensures this. Moreover, since our definition didn’t involve making any unnatural choices, it’s “obviously” (and in HoTT, automatically) functorial.

Now, is $e$ incoherently idempotent, i.e. do we have $e(e(Z)) \cong e(Z)$? Well, that is just asking whether

$(Z + 2^{\mathbb{N}}) + 2^{\mathbb{N}} \quad\text{is equivalent to}\quad Z + 2^{\mathbb{N}}$

but this again follows from $2^{\mathbb{N}} \cong 2^{\mathbb{N}} + 2^{\mathbb{N}}$! Showing that $e$ is not coherent is a bit harder, but still fairly straightforward using our description; I’ll leave it as an exercise, or you can try to decipher the Coq code.

### Example 3: Natural pointed sets

Let’s end by considering the following question: in what cases does the natural map $B S_{n-1} \to B S_n$ have a retraction, where $S_n$ is the symmetric group on $n$ elements? Looking at homotopy groups, this would imply that $S_{n-1} \hookrightarrow S_n$ has a retraction, which is true for $n < 5$ but not otherwise. But let’s look instead at the map on classifying spaces.

The obvious way to think about this map is to identify $B S_n$ with $B\mathrm{Aut}(\mathbf{n})$, where $\mathbf{n}$ is the discrete set with $n$ elements, and similarly $B S_{n-1}$ with $B\mathrm{Aut}(\mathbf{n-1})$. Then an element of $B\mathrm{Aut}(\mathbf{n-1})$ is a set $Z$ with $n-1$ elements, and the map $B S_{n-1} \to B S_n$ takes it to $Z+1$, which has $n$ elements.

However, another possibility is to identify $B S_{n-1}$ instead with the classifying space of pointed sets with $n$ elements. Since an isomorphism of pointed sets must respect the basepoint, this gives an equivalent groupoid, and now the map $B S_{n-1} \to B S_n$ is just forgetting the basepoint. With this identification, a putative retraction $B S_n \to B S_{n-1}$ would assign, to any set $Z$ with $n$ elements, a pointed set $(r(Z), r_0)$ with $n$ elements. Note that the underlying set $r(Z)$ need not be $Z$ itself; they will of course be isomorphic (since both have $n$ elements), but there is no specified or natural isomorphism. However, to say that $r$ is a retraction of our given map says that if $Z$ started out pointed, then $(r(Z), r_0)$ is isomorphic to $(Z, z_0)$ as pointed sets.

Let’s do some small examples. When $n=1$, our map $r$ has to take a set with 1 element and assign to it a pointed set with 1 element. There’s obviously a unique way to do that, and just as obviously, if we started out with a pointed set, we get the same set back again.

The case $n=2$ is a bit more interesting: our map $r$ has to take a set $Z$ with 2 elements and assign to it a pointed set with 2 elements. One option, of course, is to define $r(Z) = \mathbf{2}$ for all $Z$. Since every pointed 2-element set is uniquely isomorphic to every other, this satisfies the requirement. Another option, motivated by example 1 and perhaps a little more satisfying, would be to define $r(Z) = \mathrm{Iso}(Z,Z)$, which is pointed by the identity.

The case $n=3$ is more interesting still, since now it is not true that any two pointed 3-element sets are naturally isomorphic. Given a 3-element set $Z$, how do we assign to it functorially a pointed 3-element set? The best way I’ve thought of is to let $r(Z)$ be the set of automorphisms $f \in \mathrm{Iso}(Z,Z)$ such that $f^3 = \mathrm{id}$. This has 3 elements, the identity and the two 3-cycles, and we can take the identity as a basepoint. And if $Z$ came with a point $z_0$, then we can define an isomorphism $Z \cong r(Z)$ by sending $z \in Z$ to the unique $f \in r(Z)$ having the property that $f(z_0) = z$.
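The $n=3$ construction is small enough to verify by brute force. Here is a hypothetical Python check (the permutation encoding is mine, not from the post or its Coq code): it confirms that $r(Z)$ has exactly 3 elements and that a basepoint matches each $z$ with a unique $f$.

```python
from itertools import permutations

# Brute-force check of the n = 3 construction: r(Z) is the set of
# automorphisms f of Z with f^3 = id.  On Z = {0, 1, 2} this should be
# exactly the identity and the two 3-cycles, and a choice of basepoint z0
# should pair each z with a unique f sending z0 to z.

Z = (0, 1, 2)

def compose(p, q):                 # (p ∘ q)(i) = p[q[i]], permutations as tuples
    return tuple(p[q[i]] for i in Z)

identity = Z
r_Z = [p for p in permutations(Z) if compose(p, compose(p, p)) == identity]
assert len(r_Z) == 3 and identity in r_Z

z0 = 0
for z in Z:                        # unique f in r(Z) with f(z0) = z
    assert len([f for f in r_Z if f[z0] == z]) == 1
```

Of course this only checks the set-level combinatorics; the functoriality claim is exactly what the brute force cannot see.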

The case $n=4$ is somewhat similar: given a 4-element set $Z$, define $r(Z)$ to be the set of automorphisms $f \in \mathrm{Iso}(Z,Z)$ such that $f^2 = \mathrm{id}$ and whose set of fixed points is either empty or all of $Z$. This has 4 elements and is pointed by the identity; in fact, it is the permutation representation of the Klein four-group. And once again, if $Z$ came with a point $z_0$, we can define $Z \cong r(Z)$ by sending $z \in Z$ to the unique $f \in r(Z)$ such that $f(z_0) = z$.
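The $n=4$ case admits the same kind of brute-force check, again with my own encoding rather than anything from the post. It also verifies the Klein four-group claim by checking closure under composition.

```python
from itertools import permutations

# Brute-force check of the n = 4 construction: r(Z) is the set of
# automorphisms f with f^2 = id whose fixed-point set is empty or all
# of Z.  On Z = {0, 1, 2, 3} this should be the identity plus the three
# fixed-point-free double transpositions, i.e. the Klein four-group.

Z = (0, 1, 2, 3)

def compose(p, q):
    return tuple(p[q[i]] for i in Z)

identity = Z

def admissible(p):
    fixed = [i for i in Z if p[i] == i]
    return compose(p, p) == identity and len(fixed) in (0, len(Z))

r_Z = [p for p in permutations(Z) if admissible(p)]
assert len(r_Z) == 4 and identity in r_Z

# Closed under composition: it really is the Klein four-group.
assert all(compose(f, g) in r_Z for f in r_Z for g in r_Z)

z0 = 0
for z in Z:                        # a basepoint again identifies Z with r(Z)
    assert len([f for f in r_Z if f[z0] == z]) == 1
```

The fixed-point condition is what excludes the six single transpositions, which are involutions but fix exactly two points.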

I will end with a question that I don’t know the answer to: is there any way to see, from this perspective on classifying spaces, that such a retraction doesn’t exist in the case $n=5$?

### Symmetrybreaking - Fermilab/SLAC

How to build your own particle detector

Make a cloud chamber and watch fundamental particles zip through your living room!

The scale of the detectors at the Large Hadron Collider is almost incomprehensible: They weigh thousands of tons, contain millions of detecting elements and support a research program for an international community of thousands of scientists.

But particle detectors aren’t always so complicated. In fact, some particle detectors are so simple that you can make (and operate) them in your own home.

The Continuously Sensitive Diffusion Cloud Chamber is one such detector. Originally developed at UC Berkeley in 1938, this type of detector uses evaporated alcohol to make a ‘cloud’ that is extremely sensitive to passing particles.

Cosmic rays are particles that are constantly crashing into the Earth from space. When they hit Earth’s atmosphere, they release a shower of less massive particles, many of which invisibly rain down to us.

When a cosmic ray zips through a cloud, it creates ghostly particle tracks that are visible to the naked eye.

Building a cloud chamber is easy and requires only a few simple materials and steps:

#### Materials:

• Clear plastic or glass tub (such as a fish tank) with a solid lid (plastic or metal)
• Felt
• Isopropyl alcohol (90% or more. You can find this at a pharmacy or special order from a chemical supply company. Wear safety goggles when handling the alcohol.)
• Dry ice (frozen carbon dioxide. Often used at fish markets and grocery stores to keep products cool. Wear thick gloves when handling the dry ice.)

#### Steps:

1. Cut the felt so that it is the size of the bottom of the fish tank. Glue it down inside the tank (on the bottom where the sand and fake treasure chests would normally go).
2. Once the felt is secured, soak it in the isopropyl alcohol until it is saturated. Drain off any excess alcohol.
3. Place the lid on top of dry ice so that it lies flat. You might want to have the dry ice in a container or box so that it is more stable.
4. Flip the tank upside down, so that the felt-covered bottom of the tank is on top, and place the mouth of the tank on top of the lid.
5. Wait about 10 minutes… then turn off the lights and shine a flashlight into your tank.

#### What is happening inside your cloud chamber?

The alcohol absorbed by the felt is at room temperature and is slowly evaporating into the air. But as the evaporated alcohol sinks toward the dry ice, it cools down and wants to turn back into a liquid.

The air near the bottom of the tank is now supersaturated, which means that it is just below its atmospheric dew point. And just as water molecules cling to blades of grass on cool autumn mornings, the atmospheric alcohol will form cloud-like droplets on anything it can cling to.

#### Particles, coming through!

When a particle zips through your cloud chamber, it bumps into atmospheric molecules and knocks off some of their electrons, turning the molecules into charged ions. The atmospheric alcohol is attracted to these ions and clings to them, forming tiny droplets.

The resulting tracks left behind look like the contrails of an airplane—long, spindly lines marking the particle’s path through your cloud chamber.

#### What can you tell from your tracks?

Many different types of particles might pass through your cloud chamber. It might be hard to see, but you can actually differentiate between the types of particles based on the tracks they leave behind.

#### Short, fat tracks

Sorry—not a cosmic ray. When you see short, fat tracks, you’re seeing an atmospheric radon atom spitting out an alpha particle (a clump of two protons and two neutrons). Radon is a naturally occurring radioactive element, but it exists in such low concentrations in the air that it is less radioactive than peanut butter. Alpha particles spat out of radon atoms are bulky and low-energy, so they leave short, fat tracks.

#### Long, straight track

Congratulations! You’ve got muons! Muons are the heavier cousins of the electron and are produced when a cosmic ray bumps into an atmospheric molecule high up in the atmosphere. Because they are so massive, muons bludgeon their way through the air and leave clean, straight tracks.

#### Zig-zags and curly-cues

If your track looks like the path of a lost tourist in a foreign city, you’re looking at an electron or positron (the electron’s anti-matter twin). Electrons and positrons are created when a cosmic ray crashes into atmospheric molecules. Electrons and positrons are light particles and bounce around when they hit air molecules, leaving zig-zags and curly-cues.

#### Forked tracks

If your track splits, congratulations! You just saw a particle decay. Many particles are unstable and will decay into more stable particles. If your track suddenly forks, you are seeing physics in action!

Like what you see? Sign up for a free subscription to symmetry!

### ZapperZ - Physics and Physicists

Macrorealism Violated By Cs Atoms
It is another example where the more they test QM, the more convincing it becomes.

This latest experiment tests whether superposition truly exists via a very stringent test applying the Leggett-Garg criteria.

In comparison with these earlier experiments, the atoms studied in the experiments by Robens et al. are the largest quantum objects with which the Leggett-Garg inequality has been tested using what is called a null measurement—a “noninvasive” measurement that allows the inequality to be confirmed in the most convincing way possible. In the researchers’ experiment, a cesium atom moves in one of two standing optical waves that have opposite electric-field polarizations, and the atom’s position is measured at various times. The two standing waves can be pictured as a tiny pair of overlapping one-dimensional egg-carton strips—one red, one blue (Fig. 1). The experiment consists of measuring correlations between the atom’s position at different times. Robens et al. first put the atom into a superposition of two internal hyperfine spin states; this corresponds to being in both cartons simultaneously. Next, the team slid the two optical waves past each other, which causes the atom to smear out over a distance of up to about 2 micrometers in a motion known as a quantum walk. Finally, the authors optically excited the atom, causing it to fluoresce and reveal its location at a single site. Knowing where the atom began allows them to calculate, on average, whether the atom moved left or right from its starting position. By repeating this experiment, they can obtain correlations between the atom’s position at different times, which are the inputs into the Leggett-Garg inequality.
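The position correlations described above feed into the Leggett-Garg combination $K = C_{12} + C_{23} - C_{13}$, which any macrorealist theory bounds by $K \le 1$. As a hedged illustration, here is the standard textbook two-level example (idealized precession with correlator $C(t) = \cos\omega t$, not numbers from Robens et al.'s experiment), where quantum mechanics pushes $K$ up to $3/2$:

```python
import math

# Textbook Leggett-Garg illustration: for a two-level system precessing
# at frequency w, the two-time correlator over a delay t is C(t) = cos(w t).
# With equally spaced measurement times, K = C(tau) + C(tau) - C(2 tau).
# Macrorealism demands K <= 1; quantum mechanics reaches 3/2.

def K(w, tau):
    C = lambda t: math.cos(w * t)          # two-time correlation function
    return 2 * C(tau) - C(2 * tau)

w = 1.0
tau_max = math.pi / (3 * w)                # maximal violation at w*tau = pi/3
assert abs(K(w, tau_max) - 1.5) < 1e-9     # K = 3/2 > 1: the bound is violated
assert K(w, math.pi / w) < 1               # other spacings respect the bound
```

Real experiments like this one must additionally make the middle measurement noninvasive (the "null measurement" above), since otherwise a macrorealist can blame the violation on measurement disturbance.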

You may read the result they got in the report. Also note that you also get free access to the actual paper.

But don't miss the importance of this work, as stated in this review.

Almost a century after the quantum revolution in science, it’s perhaps surprising that physicists are still trying to prove the existence of superpositions. The real motivation lies in the future of theoretical physics. Fledgling theories of macrorealism may well form the basis of the next generation “upgrade” to quantum theory by setting the scale of the quantum-classical boundary. Thanks to the results of this experiment, we can be sure that the boundary cannot lie below the scale at which the cesium atom has been shown to behave like a wave. How high is this scale? A theoretical measure of macroscopicity [8] (see 18 April 2013 Synopsis) gives the cesium atom a modest ranking of 6.8, above the only other object tested with null measurements [5], but far below where most suspect the boundary lies. (Schrödinger’s cat is a 57.) In fact, matter-wave interferometry experiments have already shown interference fringes with Buckminsterfullerene molecules [9], boasting a rating as high as 12. In my opinion, however, we can be surer of the demonstration of the quantumness of the cesium atom because of the authors’ exclusion of macrorealism via null result measurements. The next step is to try these experiments with atoms of larger mass, superposed over longer time scales and separated by greater distances. This will push the envelope of macroscopicity further and reveal yet more about the nature of the relationship between the quantum and the macroworld.

Zz.

### Lubos Motl - string vacua and pheno

Prof Collins explains string theory
Prof Emeritus Walter Lewin has been an excellent physics instructor who loved to include truly physical demonstrations of certain principles, laws, and concepts.

After you understand string theory, don't forget about inertia, either. ;-)

When the SJWs fired him and tried to erase him from the history of the Universe, a vacuum was created at MIT.

The sensible people at MIT were thinking about a way to fill this vacuum. After many meetings, the committee decided to hire a new string theory professor who is especially good at teaching, someone like Barton Zwiebach #2 but someone who can achieve an even more intimate contact with the students.

In the end, it became clear that they had to hire Prof Collins, and her mandatory physics class on string theory is shown above. It is not too demanding, even though e.g. the readers of texts by Mr Smolin or Mr Woit – or these not so Gentlemen themselves – may still find the material too technical.

But the rest will surely enjoy it. ;-)

Someone could think that this affiliation with MIT is just a joke, but I assure you that Dr Paige Hopewell from the Bikini Calculus lecture above has been an excellent nuclear physicist affiliated with MIT. While at Purdue, she won an award in 2007, and so on.

### Clifford V. Johnson - Asymptotia

In Print…!
Here's the postcard they made to advertise the event of tomorrow (Tuesday)*. I'm pleased with how the design worked out, and I'm extra pleased about one important thing. This is the first time that any of my graphical work for the book has been printed professionally in any form on paper, and I am pleased to see that the pdf that I output actually properly gives the colours I've been working with on screen. There's always been this nagging background worry (especially after the struggles I had to do to get the right output from my home printers) that somehow it would all be terribly wrong... that the colours would [...] Click to continue reading this post

## January 19, 2015

### Jester - Resonaances

Weekend plot: spin-dependent dark matter
This weekend plot is borrowed from a nice recent review on dark matter detection:
It shows experimental limits on the spin-dependent scattering cross section of dark matter on protons. This observable is not where the most spectacular race is happening, but it is important for constraining more exotic models of dark matter. Typically, a scattering cross section in the non-relativistic limit is independent of spin or velocity of the colliding particles. However, there exist reasonable models of dark matter where the low-energy cross section is more complicated. One possibility is that the interaction strength is proportional to the scalar product of spin vectors of a dark matter particle and a nucleon (proton or neutron). This is usually referred to as the spin-dependent scattering, although other kinds of spin-dependent forces that also depend on the relative velocity are possible.

In all existing direct detection experiments, the target contains nuclei rather than single nucleons. Unlike in the spin-independent case, for spin-dependent scattering the cross section is not enhanced by coherent scattering over many nucleons. Instead, the interaction strength is proportional to the expectation values of the proton and neutron spin operators in the nucleus. One can, very roughly, think of this process as scattering on an odd unpaired nucleon. For this reason, xenon target experiments such as Xenon100 or LUX are less sensitive to the spin-dependent scattering on protons because xenon nuclei have an even number of protons. In this case, experiments that contain fluorine in their target molecules have the best sensitivity. This is the case of the COUPP, Picasso, and SIMPLE experiments, which currently set the strongest limit on the spin-dependent scattering cross section of dark matter on protons. Still, in absolute numbers, the limits are many orders of magnitude weaker than in the spin-independent case, where LUX has crossed the 10^-45 cm^2 line. The IceCube experiment can set stronger limits in some cases by measuring the high-energy neutrino flux from the Sun. But these limits depend on what dark matter annihilates into, and are therefore much more model-dependent than the direct detection limits.
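The coherent-versus-spin contrast can be made concrete with a rough numerical sketch. All numbers below are representative shell-model values quoted only to an order of magnitude (they are my illustrative inputs, not figures from the review): the spin-independent amplitude adds over all $A$ nucleons, so the cross section scales like $A^2$, while the spin-dependent cross section carries the standard $(J+1)/J \,\langle S_p\rangle^2$ structure for a proton coupling.

```python
# Rough illustration of why fluorine targets beat xenon for spin-dependent
# proton couplings.  Spin-independent (SI) scattering is coherent over all
# A nucleons (amplitude ~ A, cross section ~ A^2); spin-dependent (SD)
# scattering sees only the nucleon-spin expectation values, dominated by
# the odd unpaired nucleon.  Spin values below are typical shell-model
# numbers, quoted only to an order of magnitude.

targets = {
    #          A   <S_p>  <S_n>   J
    "F-19":   (19,  0.48,  0.0,  0.5),   # unpaired proton
    "Xe-129": (129, 0.03,  0.36, 0.5),   # unpaired neutron
}

def si_enhancement(A):
    return A ** 2                        # coherent: all nucleons in phase

def sd_proton_factor(Sp, J):
    return (J + 1) / J * Sp ** 2         # (J+1)/J * <S_p>^2 structure

ratio_si = si_enhancement(129) / si_enhancement(19)
ratio_sd = (sd_proton_factor(0.48, 0.5) / sd_proton_factor(0.03, 0.5))

assert ratio_si > 40     # xenon dominates the coherent (SI) channel
assert ratio_sd > 100    # fluorine dominates the proton-spin (SD) channel
```

So xenon wins the spin-independent race by a factor of $(129/19)^2 \approx 46$ per nucleus, while fluorine's unpaired proton gives it a spin-dependent advantage of order $(0.48/0.03)^2 \approx 250$, which is the structural reason behind the COUPP/Picasso/SIMPLE limits quoted above.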

### ZapperZ - Physics and Physicists

I Win The Nobel Prize And All I Got Was A Parking Space
I'm sure it is a slight exaggeration, but it is still amusing to read Shuji Nakamura's response on the benefits he got from UCSB after winning the physics Nobel Prize. On the benefits of winning a Nobel Prize:

"I don't have to teach anymore and I get a parking space. That's all I got from the University of California."

Zz.

### Georg von Hippel - Life on the lattice

Scientific Program "Fundamental Parameters of the Standard Model from Lattice QCD"
Recent years have seen a significant increase in the overall accuracy of lattice QCD calculations of various hadronic observables. Results for quark and hadron masses, decay constants, form factors, the strong coupling constant and many other quantities are becoming increasingly important for testing the validity of the Standard Model. Prominent examples include calculations of Standard Model parameters, such as quark masses and the strong coupling constant, as well as the determination of CKM matrix elements, which is based on a variety of input quantities from experiment and theory. In order to make lattice QCD calculations more accessible to the entire particle physics community, several initiatives and working groups have sprung up, which collect the available lattice results and produce global averages.

We are therefore happy to announce the scientific program "Fundamental Parameters of the Standard Model from Lattice QCD" to be held from August 31 to September 11, 2015 at the Mainz Institute for Theoretical Physics (MITP) at Johannes Gutenberg University Mainz, Germany.

This scientific programme is designed to bring together lattice practitioners with members of the phenomenological and experimental communities who are using lattice estimates as input for phenomenological studies. In addition to sharing the expertise among several communities, the aim of the programme is to identify key quantities which allow for tests of the CKM paradigm with greater accuracy and to discuss the procedures in order to arrive at more reliable global estimates.

We would like to invite you to consider attending this programme and to apply through our website. After the deadline (March 31, 2015), an admissions committee will evaluate all the applications.

Among other benefits, MITP offers all its participants office space and access to computing facilities during their stay. In addition, MITP will cover local housing expenses for accepted participants; the MITP team will individually arrange and book their accommodation.

We hope you will be able to join us in Mainz in 2015!

With best regards,

the organizers:
Gilberto Colangelo, Georg von Hippel, Heiko Lacker, Hartmut Wittig

### Georg von Hippel - Life on the lattice

This is just a short reminder of some upcoming deadlines for conferences/workshops in whose organization I am in some way involved.

Abstract submission for QNP 2015 closes on 6th February 2015, and registration closes on 27th February 2015. Visit this link to submit an abstract, and this link to register.

Applications for the Scientific Programme "Fundamental Parameters from Lattice QCD" at MITP close on 31st March 2015. Visit this link to apply.

### CERN Bulletin

Daniel Brandt (1950-2014)

It is with deep regret that we announce the death of Mr Daniel BRANDT, which occurred on 14 December 2014.

Mr Daniel BRANDT, born on 21 January 1950, worked in the DG Unit and had been at CERN since 1 May 1981.

The Director-General has sent a message of condolence to his family on behalf of the CERN personnel.

Social Affairs
Human Resources Department

### arXiv blog

Turning PacMan Into A Street-Based Chase Game Using Smartphones

Computer scientists have developed a set of Android-based tools that turn games like PacMan into street-based chase games.

Anyone who grew up in the 1980s will be familiar with PacMan, the arcade game in which players use a joystick to guide a tiny yellow character through a two-dimensional maze. As it moves, the character must chomp its way through golden coins while avoiding being killed by ghosts who also sweep through the maze.

### Clifford V. Johnson - Asymptotia

Experiments with Colour
Well, that was interesting! I got a hankering to experiment with pastels the other day. I am not sure why. Then I remembered that I had a similar urge some years ago but had not got past the phase of actually investing in a few bits of equipment. So I dug them out and found a bit of time to experiment. It is not a medium I've really done anything in before and I have a feeling it is a good additional way of exploring technique, and feeling out colour design for parts of the book later on. Who knows? Anyway, all I know is that without my [...] Click to continue reading this post

## January 18, 2015

### Clifford V. Johnson - Asymptotia

LAIH Luncheon – Ramiro Gomez
Yesterday's Luncheon at the Los Angeles Institute for the Humanities, the first of the year, was another excellent one (even though it was a bit more compact than I'd have liked). We caught up with each other and discussed what's been happening over the holiday season, and then had the artist Ramiro Gomez give a fantastic talk ("Luxury, Interrupted: Art Interventions for Social Change") about his work in highlighting the hidden people of Los Angeles - those cleaners, caregivers, gardeners and others who help make the city tick along, but who are treated as invisible by most. As someone who very regularly gets totally ignored (like I'm not even there!) while standing in front of my own house by many people in my neighbourhood who [...] Click to continue reading this post

### Quantum Diaries

The Ties That Bind

Beneath the ATLAS detector – note the well-placed cable ties. IMAGE: Claudia Marcelloni, ATLAS Experiment © 2014 CERN.

A few weeks ago, I found myself in one of the most beautiful places on earth: wedged between a metallic cable tray and a row of dusty cooling pipes at the bottom of Sector 13 of the ATLAS Detector at CERN. My wrists were scratched from hard plastic cable ties, I had an industrial vacuum strapped to my back, and my only light came from a battery powered LED fastened to the front of my helmet. It was beautiful.

The ATLAS Detector is one of the largest, most complex scientific instruments ever constructed. It is 46 metres long, 26 metres high, and sits 80 metres underground, completely surrounding one of four points on the Large Hadron Collider (LHC), where proton beams are brought together to collide at high energies. It is designed to capture remnants of the collisions, which appear in the form of particle tracks and energy deposits in its active components. Information from these remnants allows us to reconstruct properties of the collisions and, in doing so, to improve our understanding of the basic building blocks and forces of nature.

On that particular day, a few dozen of my colleagues and I were weaving our way through the detector, removing dirt and stray objects that had accumulated during the previous two years. The LHC had been shut down during that time, in order to upgrade the accelerator and prepare its detectors for proton collisions at higher energy. ATLAS is constructed around a set of very large, powerful magnets, designed to curve charged particles coming from the collisions, allowing us to precisely measure their momenta. Any metallic objects left in the detector risk turning into fast-moving projectiles when the magnets are powered up, so it was important for us to do a good job.

ATLAS is divided into 16 phi sectors with #13 at the bottom. IMAGE: Steven Goldfarb, ATLAS Experiment © 2014 CERN

The significance of the task, however, did not prevent my eyes from taking in the wonder of the beauty around me. ATLAS is shaped somewhat like a large barrel. For reference in construction, software, and physics analysis, we divide the angle around the beam axis, phi, into 16 sectors. Sector 13 is the lucky sector at the very bottom of the detector, which is where I found myself that morning. And I was right at ground zero, directly under the point of collision.

To get to that spot, I had to pass through a myriad of detector hardware, electronics, cables, and cooling pipes. One of the most striking aspects of the scenery is the ironic juxtaposition of construction-grade machinery, including built-in ladders and scaffolding, with delicate, highly sensitive detector components, some of which make positional measurements to micron (thousandth of a millimetre) precision. All of this is held in place by kilometres of cable trays, fixings, and what appear to be millions of plastic (sometimes sharp) cable ties.

Scaffolding and ladder mounted inside the precision muon spectrometer. IMAGE: Steven Goldfarb, ATLAS Experiment © 2014 CERN.

The real beauty lies not in the parts themselves, but rather in the magnificent stories of international cooperation and collaboration that they tell. The cable tie that scratched my wrist secures a cable that was installed by an Iranian student from a Canadian university. Its purpose is to carry data from electronics designed in Germany, attached to a detector built in the USA and installed by a Russian technician.  On the other end, a Japanese readout system brings the data to a trigger designed in Australia, following the plans of a Moroccan scientist. The filtered data is processed by software written in Sweden following the plans of a French physicist at a Dutch laboratory, and then distributed by grid middleware designed by a Brazilian student at CERN. This allows the data to be analyzed by a Chinese physicist in Argentina working in a group chaired by an Israeli researcher and overseen by a British coordinator.  And what about the cable tie?  No idea, but that doesn’t take away from its beauty.

There are 178 institutions from 38 different countries participating in the ATLAS Experiment, which is only the beginning.  When one considers the international make-up of each of the institutions, it would be safe to claim that well over 100 countries from all corners of the globe are represented in the collaboration.  While this rich diversity is a wonderful story, the real beauty lies in the commonality.

All of the scientists, with their diverse social, cultural and linguistic backgrounds, share a common goal: a commitment to the success of the experiment. The plastic cable tie might scratch, but it is tight and well placed; its cable is held correctly and the data are delivered, as expected. This enormous, complex enterprise works because the researchers who built it are driven by the essential nature of the mission: to improve our understanding of the world we live in. We share a common dedication to the future, we know it depends on research like this, and we are thrilled to be a part of it.

ATLAS Collaboration members in discussion. What discoveries are in store this year? IMAGE: Claudia Marcelloni, ATLAS Experiment © 2008 CERN.

This spring, the LHC will restart at an energy level higher than any accelerator has ever achieved before. This will allow the researchers from ATLAS, as well as the thousands of other physicists from partner experiments sharing the accelerator, to explore the fundamental components of our universe in more detail than ever before. These scientists share a common dream of discovery that will manifest itself in the excitement of the coming months. Whether or not that discovery comes this year or some time in the future, Sector 13 of the ATLAS detector reflects all the beauty of that dream.

## January 17, 2015

### Sean Carroll - Preposterous Universe

We Are All Machines That Think

My answer to this year’s Edge Question, “What Do You Think About Machines That Think?”

Julien de La Mettrie would be classified as a quintessential New Atheist, except for the fact that there’s not much New about him by now. Writing in eighteenth-century France, La Mettrie was brash in his pronouncements, openly disparaging of his opponents, and boisterously assured in his anti-spiritualist convictions. His most influential work, L’homme machine (Man a Machine), derided the idea of a Cartesian non-material soul. A physician by trade, he argued that the workings and diseases of the mind were best understood as features of the body and brain.

As we all know, even today La Mettrie’s ideas aren’t universally accepted, but he was largely on the right track. Modern physics has achieved a complete list of the particles and forces that make up all the matter we directly see around us, both living and non-living, with no room left for extra-physical life forces. Neuroscience, a much more challenging field and correspondingly not nearly as far along as physics, has nevertheless made enormous strides in connecting human thoughts and behaviors with specific actions in our brains. When asked for my thoughts about machines that think, I can’t help but reply: Hey, those are my friends you’re talking about. We are all machines that think, and the distinction between different types of machines is eroding.

We pay a lot of attention these days, with good reason, to “artificial” machines and intelligences — ones constructed by human ingenuity. But the “natural” ones that have evolved through natural selection, like you and me, are still around. And one of the most exciting frontiers in technology and cognition is the increasingly permeable boundary between the two categories.

Artificial intelligence, unsurprisingly in retrospect, is a much more challenging field than many of its pioneers originally supposed. Human programmers naturally think in terms of a conceptual separation between hardware and software, and imagine that conjuring intelligent behavior is a matter of writing the right code. But evolution makes no such distinction. The neurons in our brains, as well as the bodies through which they interact with the world, function as both hardware and software. Roboticists have found that human-seeming behavior is much easier to model in machines when cognition is embodied. Give that computer some arms, legs, and a face, and it starts acting much more like a person.

From the other side, neuroscientists and engineers are getting much better at augmenting human cognition, breaking down the barrier between mind and (artificial) machine. We have primitive brain/computer interfaces, offering the hope that paralyzed patients will be able to speak through computers and operate prosthetic limbs directly.

What’s harder to predict is how connecting human brains with machines and computers will ultimately change the way we actually think. DARPA-sponsored researchers have discovered that the human brain is better than any current computer at quickly analyzing certain kinds of visual data, and developed techniques for extracting the relevant subconscious signals directly from the brain, unmediated by pesky human awareness. Ultimately we’ll want to reverse the process, feeding data (and thoughts) directly to the brain. People, properly augmented, will be able to sift through enormous amounts of information, perform mathematical calculations at supercomputer speeds, and visualize virtual directions well beyond our ordinary three dimensions of space.

Where will the breakdown of the human/machine barrier lead us? Julien de La Mettrie, we are told, died at the young age of 41, after attempting to show off his rigorous constitution by eating an enormous quantity of pheasant pâté with truffles. Even leading intellects of the Enlightenment sometimes behaved irrationally. The way we think and act in the world is changing in profound ways, with the help of computers and the way we connect with them. It will be up to us to use our new capabilities wisely.

### Tommaso Dorigo - Scientificblogging

The Hard Life Of The Science Outreach Agent
This morning I woke up at 6AM, had a shower and breakfast, dressed up, and rushed out in the cold of the fading night to catch a train to Mestre, where my car was parked. From there I drove due north for two hours, to a place in the mountains called Pieve di Cadore. A comfortable ride in normal weather, but this morning the weather was horrible, with an insistent water bombing from above which slowly turned to heavy sleet as I gained altitude. The drive was very unnerving as my car is old and not well equipped for these winter conditions - hydroplaning was frequent. But I made it.

### Lubos Motl - string vacua and pheno

Papers by BICEP2, Keck, and Planck out soon
...and other news from the CMB Minnesota conference...
Off-topic: I won't post a new blog post on the "warmest 2014" measurements and claims. See an updated blog post on RSS AMSU for a few new comments and a graph on the GISS and NCDC results.
The Twitter account of Kevork Abazajian of UC Irvine seems to be the most useful public source where you may learn about some of the most important announcements made at a recent CMB+Pol conference in Minnesota (January 14th-16th, 2015).

Is BICEP2's more powerful successor still seeing the gravitational waves?

Here are the tweets:

Charles Lawrence (Planck): Planck ultimate results will be out in weeks. CMB lensing potential was detected to 40σ by Planck. Measurement by Planck of $$1s\to 2s$$ H transition from CMB has uncertainties 5.5 times better than the laboratory. Planck is not systematics limited on any angular scale. Future space mission needs 10-20x less noise. Try & find a foreground-free spot for polarization experiments (snark intended)-Planck 857 GHz map.

100, 143, 217, 353 GHz polarization data won't be released in the 2015 @Planck data release.

Anthony Challinor (Planck): Temperature to polarization leakage in upcoming data release is not corrected for, so users beware. Planck finds that adding (light) massive sterile neutrinos does nothing to reduce their tension with the lensing+BAO data.

Francois Boulanger (Planck): B-mode signal will not be detected without the removal of dust polarization from Planck with high accuracy and confidence. Dust SED does not vary strongly across the sky, which was surprising.

Matthieu Tristram (Planck): Planck finds no region where the dust polarization can be neglected compared to primordial B-modes. (LM: This seems like a sloppy blanket statement to me: whether one is negligible depends on $$\ell$$, doesn't it?)

Sabino Matarrese (Planck): Starobinsky $$\varphi^2$$ & exponential inflationary potential are most favored by the Planck primordial power spectrum reconstructions. No evidence of a primordial isocurvature non-Gaussianity is seen in Planck 2015. $$f_{NL} \sim 0.01$$ non-Gaussianity of standard inflation will take LSS (halo bias & bispectrum) + 21 cm + CMB.

Matias Zaldarriaga (theorist): if high $$r$$ is detected, then something other than $$N$$ $$e$$-folds is setting the inflationary dynamics. He is effectively giving $$r\lt 0.01$$ as a theory-favored upper limit from inflation on the tensor amplitude.

Abazajian @Kevaba: cosmology has the highest experimental sensitivity to neutrino mass and is forecast to maintain that position.

Lorenzo Sorbo (theorist): non-boring tensors! Parity violation is detectable at 9σ. Parity violations can produce a differing amount of left- and right-handed gravitons, and produce non-zero TB and EB modes. Cosmological matter power spectrum gives neutrino mass constraints because neutrinos transition from radiation-like to matter-like. Shape and amplitude of power spectrum gives a handle on the neutrino mass. @Planck gives $$0.3\eV$$ limits, the oscillation scale. $$dP_k/P_k\sim 1\%$$ levels on matter power spec gives $$20\meV$$ constraints on neutrino masses. CMB-S4 experiments alone should be able to get down to the $$34\meV$$ level, $$15\meV$$ level with BAO measurements.

Olivier Doré (SPHEREx): SPHEREx mission for all-sky spectra for every 6.2" pixels to $$R=40$$ in NIR. Quite a legacy! SPHEREx will detect with high significance single-field inflation non-Gaussianity. SPHEREx will detect *every* quasar in the Universe, approximately 1.5 million. SPHEREx astroph: 1412.4872.

The @SPTelescope polarization main survey patch of 500 square degrees is currently underway.

Bradford Benson (SPTpol): preliminary results presentation of SPTpol BB modes detection with 5σ level of lensing scale modes. SPT-3G will have 16,000 3-band multichroic pixels with 3 720 mm 4K alumina lenses w/ 3x FOV. SPT-3G will have 150σ detection of lensing B modes & forecast $$\sigma(N_{eff})=0.06$$.

Suzanne Staggs (ACTpol): ACTpol has detected CMB lensing B modes at 4.5σ. neutrinos & dark energy forecasts for Advanced ACT. Exact numbers are available in de Bernardis poster.

Nils Halverson (POLARBEAR): POLARBEAR rejects "no lensing B-modes" at 4.2σ. Simons Array of 3x POLARBEAR-2 forecast sensitivity $$\sigma(m_\nu)=40\meV$$, $$\sigma(r=0.1)=\sigma(n_s)=0.006$$.

Paolo de Bernardis poster: Advanced ACT plus BOSS ultimate sensitivity $$96\meV$$ for $$\nu$$ mass.

John Kováč (BICEP2): BICEP2 sees excess power at 1 degree scale in BB.
BICEP2 + Planck + Keck Array analysis SOON. Cannot be shown yet.
Keck Array is 2.5 times more sensitive than BICEP2. The analysis is underway. With the dataset we had back when we published, we were only able to exclude dust at 1.7 sigma. No departure of SED from simple scaling law is very good news.
At end of Kováč's talk: BICEP2 + Planck out by end of month. Those + Keck Array 150 GHz by spring 2015. All of this + other Keck frequencies will be released by the end of 2015.
Aurelien Fraisse (SPIDER): SPIDER 6 detector, 2 frequencies flight under way, & foreground limited, not systematics. $$r \lt 0.03$$ at 3σ, low foreground.

Al Kogut (PIPER): PIPER will be doing almost all of sky B modes at multifrequency, 8 flights get to $$r \lt 0.007$$ (2σ).

CLASS will be able to measure $$r = 0.01$$, even with galactic foregrounds. Site construction underway.

Lloyd Knox (theorist): detecting relic neutrinos is possible via gravitational effects in the CMB. The dynamics of the phase shift in acoustic peaks results from variation in $$N_{eff}$$.

Uroš Seljak (theorist): multiple deflections in the weak lensing signal are important when the convergence sensitivity gets to ~1% level. Effects are not at the 10% level in $$C_L^{\kappa\kappa}$$, more like 1%. Krause et al. is in preparation. Delensing of B-modes has a theoretical limit at 0.2 μK arcmin or $$r=2\times 10^{-5}$$.

Carlo Contaldi: little was known about the dust polarization before BICEP2 & Planck. SPIDER = 6 x BICEP2 - 30 km of atmosphere and less exposure time. Detailed modeling of the dust polarization took place. Large B field uncertainty, input was taken from starlight observations. Full 3D models for the "southern patch" including the BICEP2 window reproduce the WMAP 23 GHz channel. Small and large scales match well, but intermediate scales do not.

Raphael Flauger (theorist): BICEP2 BB + Planck 353 GHz give no evidence for primordial B modes. Plus, the sun sets outside.

## January 16, 2015

### Symmetrybreaking - Fermilab/SLAC

20-ton magnet heads to New York

A superconducting magnet begins its journey from SLAC National Accelerator Laboratory in California to Brookhaven Lab in New York.

Imagine an MRI magnet with a central chamber spanning some 9 feet—massive enough to accommodate a standing African elephant. Physicists at the US Department of Energy’s Brookhaven National Laboratory need just such an extraordinary piece of equipment for an upcoming experiment. And, as luck would have it, physicists at SLAC National Accelerator Laboratory happen to have one on hand.

Instead of looking at the world’s largest land animal, this magnet takes aim at the internal structure of something much smaller: the atomic nucleus.

Researchers at Brookhaven’s Relativistic Heavy Ion Collider (RHIC) specialize in subatomic investigations, smashing atoms and tracking the showers of fast-flying debris. RHIC scientists have been sifting through data from nuclear collisions for 13 years, but to go even deeper they need to upgrade their detector technology. That’s where a massive cylindrical magnet comes in.

“The technical difficulty in manufacturing such a magnet is staggering,” says Brookhaven Lab physicist David Morrison, co-spokesperson for PHENIX, one of RHIC’s two main experiments. “The technology may be similar to an MRI—also a superconducting solenoid with a hollow center—but many times larger and completely customized. These magnets look very simple from the outside, but the internal structure contains very sophisticated engineering. You can’t just order one of these beasts from a catalogue.”

The proposed detector upgrade—called sPHENIX—launched the search for this elusive magnet. After assessing magnets at physics labs across the world, the PHENIX collaboration found an ideal candidate in storage across the country.

At SLAC in California, a 40,000-pound beauty had recently finished a brilliant experimental run. This particular solenoid magnet—a thick, hollow pipe about 3.5 meters across and 3.9 meters long—once sat at the heart of a detector in SLAC’s BaBar experiment, which explored the asymmetry between matter and antimatter from 1999 to 2008.

“We disassembled the detector and most of the parts have already gone to the scrap yard,” says Bill Wisniewski, who serves as the deputy to the SLAC Particle Physics and Astrophysics director and was closely involved with planning the move. “It’s just such a pleasure to see that there’s some hope that a major component of the detector—the solenoid—will be reused.”

The magnet was loaded onto a truck and departed SLAC today, beginning its long and careful journey to Brookhaven’s campus in New York.

“The particles that bind and constitute most of the visible matter in the universe remain quite mysterious,” says PHENIX co-spokesperson Jamie Nagle, a physicist at the University of Colorado. “We’ve made extraordinary strides at RHIC, but the BaBar magnet will take us even further. We’re grateful for this chance to give this one-of-a-kind equipment a second life, and I’m very excited to see how it shapes the future of nuclear physics.”

#### The BaBar solenoid

The BaBar magnet, a 30,865-pound solenoid housed in an 8250-pound frame, was built by the Italian company Ansaldo. Ansaldo’s superconducting magnets have found their way into many pioneering physics experiments, including the ATLAS and CMS detectors of the Large Hadron Collider. The inner ring of the BaBar magnet spans 2.8 meters with a total outer diameter of nearly 3.5 meters—about the width of the Statue of Liberty’s arm.

During its run at SLAC, the BaBar experiment made many strides in fundamental physics, including contributions to the work awarded the 2008 Nobel Prize in Physics for the theory behind “charge-parity violation,” the idea that matter and antimatter behave in slightly different ways. This concept explains in part why the universe today is filled with matter and not antimatter.

“BaBar was a seminal experiment in particle physics, and the magnet’s strength, size and uniform field proved essential to its discoveries,” says John Haggerty, the Brookhaven physicist leading the acquisition of the BaBar magnet. “It’s a remarkable piece of engineering, and it has potential beyond its original purpose.”

In May 2013, Haggerty visited SLAC to meet with Wesley Craddock, the engineer who worked with the magnet since its installation, and Mike Racine, the technician who supervised its removal and storage. “It was immediately clear that this excellent solenoid was in very good condition and almost ready to move,” Haggerty says.

Adds Morrison, “The BaBar magnet is larger than our initial plans called for, but using this incredible instrument will save considerable resources by repurposing existing national lab assets.”

Brookhaven Lab was granted ownership of the BaBar solenoid in July 2013, but there was still the issue of the entire continent that sat between SLAC and the experimental hall of the PHENIX detector.

#### Moving the magnet

The Department of Energy is no stranger to sharing massive magnets. In the summer of 2013, the 50-foot-wide Muon g-2 ring moved from Brookhaven Lab to Fermilab, where it will search for undiscovered particles hidden in the vacuum.

“As you might imagine, shipping this magnet requires very careful consideration,” says Peter Wanderer, who heads Brookhaven’s Superconducting Magnet Division and worked with colleagues Michael Anerella and Paul Kovach on engineering for the big move. “You’re not only dealing with an oddly shaped and very heavy object, but also one that needs to be protected against even the slightest bit of damage. This kind of high-field, high-uniformity magnet can be surprisingly sensitive.”

Preparations for the move required consulting with one of the solenoid’s original designers in Italy, Pasquale Fabbricatore, and designing special shipping fixtures to stabilize components of the magnet.

After months of preparation at both SLAC and Brookhaven, the magnet—inside its custom packaging—was loaded onto a specialized truck this morning, and slowly began its journey to New York.

“I’m sad to see it go,” Racine says. “It’s the only one like it in the world. But I’m happy to see it be reused.”

After the magnet arrives, a team of experts will conduct mechanical, electrical, and cryogenic tests to prepare for its use in the sPHENIX upgrade.

“We hope to have sPHENIX in action by 2021—including the BaBar magnet at its heart—but we have to remember that it is currently a proposal, and physics is full of surprises,” Morrison says.

The BaBar magnet will be particularly helpful in identifying upsilons—the bound state of a very heavy bottom quark and an equally heavy anti-bottom quark. There are three closely related kinds of upsilons, each of which melts, or dissociates, at a different well-defined trillion-degree temperature. This happens in the state of matter known as quark-gluon plasma, or QGP, which was discovered at RHIC.

“We can use these upsilons as a very precise thermometer for the QGP and understand its transition into normal matter,” Morrison says. “Something similar happened in the early universe as it began to cool microseconds after the big bang.”

Like what you see? Sign up for a free subscription to symmetry!

### Symmetrybreaking - Fermilab/SLAC

Scientists complete array on Mexican volcano

An international team of astrophysicists has completed an advanced detector to map the most energetic phenomena in the universe.

On Thursday, atop Volcán Sierra Negra, on a flat ledge near the highest point in Mexico, technicians filled the last of a collection of 300 cylindrical vats containing millions of gallons of ultrapure water.

Together, the vats serve as the High-Altitude Water Cherenkov (HAWC) Gamma-Ray Observatory, a vast particle detector covering an area larger than 5 acres. Scientists are using it to catch signs of some of the highest-energy astroparticles to reach the Earth.

The vats sit at an altitude of 4100 meters (13,500 feet) on a rocky site within view of the nearby Large Millimeter Telescope Alfonso Serrano. The area remained undeveloped until construction of the LMT, which began in 1997, brought with it the first access road, along with electricity and data lines.

Temperatures at the top of the mountain are usually just cool enough for snow year-round, even though the atmosphere at the bottom of the mountain is warm enough to host palm trees and agave.

“The local atmosphere is part of the detector,” says Alberto Carramiñana, general director of INAOE, the National Institute of Astrophysics, Optics and Electronics.

Scientists at HAWC are working to understand high-energy particles that come from space. High-energy gamma rays come from extreme environments such as supernova explosions, active galactic nuclei and gamma-ray bursts. They’re also associated with high-energy cosmic rays, the origins of which are still unknown.

When incoming gamma rays and cosmic rays from space interact with Earth’s atmosphere, they produce a cascade of particles that shower the Earth. When these high-energy secondary particles reach the vats, they shoot through the water inside faster than light itself can travel through that medium, producing an optical shock wave called “Cherenkov radiation.” The boom looks like a glowing blue, violet or ultraviolet cone.
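Cherenkov light is emitted whenever a charged particle's speed exceeds the speed of light in the medium, v > c/n. As a rough illustration (the function name and the n = 1.33 value for water are my own choices, not HAWC figures), a minimal Python sketch of the resulting total-energy threshold:

```python
import math

def cherenkov_threshold_mev(mass_mev, n=1.33):
    """Total energy (MeV) above which a charged particle of the given
    rest mass emits Cherenkov light in a medium of refractive index n."""
    beta = 1.0 / n                        # radiate when v/c exceeds 1/n
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return gamma * mass_mev               # E = gamma * m * c^2

# Thresholds in water for an electron (0.511 MeV) and a muon (105.7 MeV):
print(round(cherenkov_threshold_mev(0.511), 3))   # ~0.775 MeV
print(round(cherenkov_threshold_mev(105.7), 1))   # ~160.3 MeV
```

An electron needs well under an MeV of total energy to radiate in water, which is one reason even modest shower particles light up the tanks.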

The Pierre Auger Cosmic Ray Observatory in western Argentina, in operation since 2004, uses similar surface detector tanks to catch cosmic rays, but its focus is particles at higher energies—up to millions of giga-electronvolts. HAWC observes widely and deeply between the energy range of 100 giga-electronvolts and 100,000 giga-electronvolts.

“HAWC is a unique water Cherenkov observatory, with no actual peer in the world,” Carramiñana says.

Results from HAWC will complement the Fermi Gamma-ray Space Telescope, which observes at lower energy levels, as well as dozens of other tools across the electromagnetic spectrum.

The vats at HAWC are made of corrugated steel, and each one holds a sealed, opaque bladder containing 50,000 gallons of liquid, according to Manuel Odilón de Rosas Sandoval, HAWC tank assembly coordinator. Each tank is 4 meters (13 feet) high and 7.3 meters (24 feet) in diameter and includes four light-reading photomultiplier tubes to detect the Cherenkov radiation.

From its perch, HAWC sees the high-energy spectrum, in which particles have more energy in their motion than in their mass. The device is open to particles from about 15 percent of the sky at a time and, as the Earth rotates, is exposed to about 2/3 of the sky per day.

Combining data from the 1200 sensors, astrophysicists can piece together the precise origins of the particle shower. With tens of thousands of events hitting the vats every second, around a terabyte of data will arrive per day. The device will record half a trillion events per year.
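The quoted rates hang together arithmetically. A quick consistency check in Python, using only the figures in the article (the per-event byte count that falls out is my own inference, not a published number):

```python
# Figures quoted in the article
events_per_year = 0.5e12            # "half a trillion events per year"
bytes_per_day = 1e12                # "around a terabyte of data per day"

seconds_per_year = 365 * 24 * 3600
rate_hz = events_per_year / seconds_per_year
print(f"{rate_hz:,.0f} events per second")   # ~16,000: "tens of thousands"

events_per_day = rate_hz * 24 * 3600
print(f"{bytes_per_day / events_per_day:.0f} bytes per event")
```

At roughly 700 bytes per recorded event, the quoted terabyte per day is consistent with a trigger rate in the tens of kilohertz.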

The observatory, which was proposed in 2006 and began construction in 2012, is scheduled to operate for 10 years. “I look forward to the operational lifetime of HAWC,” Carramiñana says. “We are not sure what we will find.”

More than 100 researchers from 30 partner organizations in Mexico and the United States collaborate on HAWC, with two additional associated scientists in Poland and Costa Rica. Prominent American partners include the University of Maryland, NASA’s Goddard Space Flight Center and Los Alamos National Laboratory. Funding comes from the Department of Energy, the National Science Foundation and Mexico’s National Council of Science and Technology.

Like what you see? Sign up for a free subscription to symmetry!

### Quantum Diaries

Will Self’s CERN

“It doesn’t look to me like the rose window of Notre Dame. It looks like a filthy big machine down a hole.” — Will Self

Like any documentary, biography, or other educational program on the radio, Will Self’s five-part radio program Self Orbits CERN is partially a work of fiction. It is based, to be sure, on a real walk through the French countryside along the route of the Large Hadron Collider, on the quest for a promised “sense of wonder”. And it is based on real tours at CERN and real conversations. But editorial and narrative choices have to be made in producing a radio program, and in that sense it is exactly the story that Will Self wants to tell. He is, after all, a storyteller.

It is a story of a vast scientific bureaucracy that promises “to steal fire from the gods” through an over-polished public relations team, with day-to-day work done by narrow, technically-minded savants who dodge the big philosophical questions suggested by their work. It is a story of big ugly new machines whose function is incomprehensible. It is the story of a walk through thunderstorms and countryside punctuated by awkward meetings with a cast of characters who are always asked the same questions, and apparently never give a satisfactory answer.

Self’s CERN is not the CERN I recognize, but I can recognize the elements of his visit and how he might have put them together that way. Yes, CERN has secretariats and human resources and procurement, all the boring things that any big employer that builds on a vast scale has to have. And yes, many people working at CERN are specialists in the technical problems that define their jobs. Some of us are interested in the wider philosophical questions implied by trying to understand what the universe is made of and how it works, but some of us are simply really excited about the challenges of a tiny part of the overall project.

“I think you understand more than you let on.” — Professor Akram Khan

The central conflict of the program feels a bit like it was engineered by Self, or at least made inevitable by his deliberately-cultivated ignorance. Why, for example, does he wait until halfway through the walk to ask for the basic overview of particle physics that he feels he’s missing, unless it adds to the drama he wants to create? By the end of the program, he admits that asking for explanations when he hasn’t learned much background is a bit unfair. But the trouble is not whether he knows the mathematics. The trouble, rather, is that he’s listened to a typical, very short summary of why we care about particle physics, and taken it literally. He has decided in advance that CERN is a quasi-religious entity that’s somehow prepared to answer big philosophical questions, and never quite reconsiders the discussion based on what’s actually on offer.

If his point is that particle physicists who speak to the public are sometimes careless, he’s absolutely right. We might say we are looking for how or why the universe was created, when really we mean we are learning what it’s made of and the rules for how that stuff interacts, which in turn lets us trace what happened in the past almost (but not quite) back to the moment of the Big Bang. When we say we’re replicating the conditions at that moment, we mean we’re creating particles so massive that they require the energy density that was present back then. We might say that the Higgs boson explains mass, when more precisely it’s part of the model that gives a mechanism for mass to exist in models whose symmetries forbid it. Usually a visit to CERN involves several different explanations from different people, from the high-level and media-savvy down to the technical details of particular systems. Most science journalists would put this information together to present the perspective they wanted, but Self apparently takes everything at face value, and asks everyone he meets for the big picture connections. His narrative is edited to literally cut off technical explanations, because he wants to hear about beauty and philosophy.

Will Self wants the people searching for facts about the universe to also interpret them in the broadest sense, but this is much harder than he implies. As part of a meeting of the UK CMS Collaboration at the University of Bristol last week, I had the opportunity to attend a seminar by Professor James Ladyman, who discussed the philosophy of science and the relationship of working scientists to it. One of the major points he drove home was just how specialized the philosophy of science can be: that the tremendous existing body of work on, for example, interpreting Quantum Mechanics requires years of research and thought which is distinct from learning to do calculations. Very few people have had time to learn both, and their work is important, but great scientific or great philosophical work is usually done by people who have specialized in only one or the other. In fact, we usually specialize a great deal more, into specific kinds of quantum mechanical interactions (e.g. LHC collisions) and specific ways of studying them (particular detectors and interactions).

Toward the end of the final episode, Self finds himself at Voltaire’s chateau near Ferney, France. Here, at last, is what he is looking for: a place where a polymath mused in beautiful surroundings on both philosophy and the natural world. Why have we lost that holistic approach to science? It turns out there are two very good reasons. First, we know an awful lot more than Voltaire did, which requires tremendous specialization discussed above. But second, science and philosophy are no longer the monopoly of rich European men with leisure time. It’s easy to do a bit of everything when you have very few peers and no obligation to complete any specific task. Scientists now have jobs that give them specific roles, working together as a part of a much wider task, in the case of CERN a literally global project. I might dabble in philosophy as an individual, but I recognize that my expertise is limited, and I really enjoy collaborating with my colleagues to cover together all the details we need to learn about the universe.

In Self’s world, physicists should be able to explain their work to writers, artists, and philosophers, and I agree: we should be able to explain it to everyone. But he — or at least, the character he plays in his own story — goes further, implying that scientific work whose goals and methods have not been explained well, or that cannot be recast in aesthetic and moral terms, is intrinsically suspect and potentially valueless. This is a false dichotomy: it’s perfectly possible, even likely, to have important research that is often explained poorly! Ultimately, Self Orbits CERN asks the right questions, but it is too busy musing about what the answers should be to pay attention to what they really are.

For all that, I recommend listening to the five 15-minute episodes. The music is lovely, the story engaging, and the description of the French countryside invigorating. The jokes were great, according to Miranda Sawyer (and you should probably trust her sense of humour rather than the woefully miscalibrated sense of humor that I brought from America). If you agree with me that Self has gone wrong in how he asks questions about science and which answers he expects, well, perhaps you will find some answers or new ideas for yourself.

### Jon Butterworth - Life and Physics

A follow up on research impact and the REF

Anyone connected with UK academia, who follows news about it, or indeed who has met a UK academic socially over the last couple of years, will probably have heard about the Research Excellence Framework (REF). All UK universities had their research assessed in a long-drawn-out process which will influence how billions of pounds of research funding are distributed. Similar exercises go on every six or so years.

The results are not a one-dimensional league table, which is good; so everyone has their favourite way of combining them to make their own league table, which is entertaining. My favourite is “research intensity” (see below, from the THE):

A new element in the REF this time was the inclusion of some assessment of “Impact”. This (like the REF itself) is far from universally popular. Personally I’m relatively supportive of this element in principle though, as I wrote here. Essentially, while I don’t think all academic research should be driven by predictions of its impact beyond academia, I do think that it should be part of the mix. The research activity of any major physics department should, even serendipitously, have some impact outside of the academic discipline (as well as lots in it), and it is worth collecting and assessing some evidence for this. Your mileage in other subjects may vary.

I also considered whether my Guardian blog might constitute a form of impact-beyond-academia for the discovery of the Higgs boson and the other work of the Large Hadron Collider, and I even asked readers for evidence and help (thanks!). In the end we did submit a “case study” on this. There is a summary of the case that was submitted here. The studies generally have more hard evidence than is given in that précis, but you get the idea.

Similar summaries of all UCL’s impact case studies are given here. Enjoy…

Filed under: Physics, Politics, Science, Science Policy, Writing Tagged: Guardian, Higgs, LHC, UCL

## January 15, 2015

### Andrew Jaffe - Leaves on the Line

Oscillators, Integrals, and Bugs

I am in my third year teaching a course in Quantum Mechanics, and we spend a lot of time working with a very simple system known as the harmonic oscillator — the physics of a pendulum, or a spring. In fact, the simple harmonic oscillator (SHO) is ubiquitous in almost all of physics, because we can often represent the behaviour of some system as approximately the motion of an SHO, with some corrections that we can calculate using a technique called perturbation theory.

It turns out that in order to describe the state of a quantum SHO, we need to work with the Gaussian function, essentially the combination exp(-y²/2), multiplied by another set of functions called Hermite polynomials. These latter functions are just, as the name says, polynomials, which means that they are just sums of terms like ayⁿ where a is some constant and n is 0, 1, 2, 3, … Now, one of the properties of the Gaussian function is that it dives to zero really fast as y gets far from zero, so fast that multiplying by any polynomial still goes to zero quickly. This, in turn, means that we can integrate polynomials, or the product of polynomials (which are just other, more complicated polynomials) multiplied by our Gaussian, and get nice (not infinite) answers.

The details depend on exactly which Hermite polynomials I pick — Mathematica gets the combination 7 and 16 wrong, for example, but some combinations give the correct answer, which is in fact zero unless the two numbers differ by just one. In fact, if you force Mathematica to split the calculation into separate integrals for each term, and add them up at the end, you get the right answer.
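For anyone who wants to reproduce this class of integral outside Mathematica, here is a sketch in Python with SymPy. I am guessing at the specific matrix element (I use the position-operator element, the integral of H_m(y) · y · H_n(y) · exp(-y²) over the real line), which has exactly the "zero unless the two numbers differ by one" behaviour described above:

```python
import sympy as sp

y = sp.symbols('y', real=True)

def matrix_element(m, n):
    """Integral of H_m(y) * y * H_n(y) * exp(-y**2) over the real line,
    i.e. the (unnormalised) position matrix element between SHO states."""
    integrand = sp.hermite(m, y) * y * sp.hermite(n, y) * sp.exp(-y**2)
    return sp.integrate(integrand, (y, -sp.oo, sp.oo))

print(matrix_element(7, 16))   # 0: indices differ by more than one
print(matrix_element(3, 4))    # nonzero: indices differ by exactly one
```

The exact values follow from the recurrence y·H_n = H_{n+1}/2 + n·H_{n−1} together with Hermite orthogonality, so any symbolic engine that integrates the Gaussian moments term by term should reproduce them.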

I’ve tried to report this to Wolfram, but haven’t heard back yet. Has anyone else experienced this?

## January 14, 2015

### ATLAS Experiment

The Ties That Bind

A few weeks ago, I found myself in one of the most beautiful places on earth: wedged between a metallic cable tray and a row of dusty cooling pipes at the bottom of Sector 13 of the ATLAS Detector at CERN. My wrists were scratched from hard plastic cable ties, I had an industrial vacuum strapped to my back, and my only light came from a battery powered LED fastened to the front of my helmet. It was beautiful.

Beneath the ATLAS detector – note the well-placed cable ties. IMAGE: Claudia Marcelloni, ATLAS Experiment © 2014 CERN.

The ATLAS Detector is one of the largest, most complex scientific instruments ever constructed. It is 46 meters long, 26 meters high, and sits 80 meters underground, completely surrounding one of four points on the Large Hadron Collider (LHC), where proton beams are brought together to collide at high energies.  It is designed to capture remnants of the collisions, which appear in the form of particle tracks and energy deposits in its active components. Information from these remnants allows us to reconstruct properties of the collisions and, in doing so, to improve our understanding of the basic building blocks and forces of nature.

On that particular day, a few dozen of my colleagues and I were weaving our way through the detector, removing dirt and stray objects that had accumulated during the previous two years. The LHC had been shut down during that time, in order to upgrade the accelerator and prepare its detectors for proton collisions at higher energy. ATLAS is constructed around a set of very large, powerful magnets, designed to curve charged particles coming from the collisions, allowing us to precisely measure their momenta. Any metallic objects left in the detector risk turning into fast-moving projectiles when the magnets are powered up, so it was important for us to do a good job.

ATLAS is divided into 16 phi sectors with #13 at the bottom. IMAGE: Steven Goldfarb, ATLAS Experiment © 2014 CERN

The significance of the task, however, did not prevent my eyes from taking in the wonder of the beauty around me. ATLAS is shaped somewhat like a large barrel. For reference in construction, software, and physics analysis, we divide the angle around the beam axis, phi, into 16 sectors. Sector 13 is the lucky sector at the very bottom of the detector, which is where I found myself that morning. And I was right at ground zero, directly under the point of collision.

To get to that spot, I had to pass through a myriad of detector hardware, electronics, cables, and cooling pipes. One of the most striking aspects of the scenery is the ironic juxtaposition of construction-grade machinery, including built-in ladders and scaffolding, with delicate, highly sensitive detector components, some of which make positional measurements to micron (thousandth of a millimetre) precision. All of this is held in place by kilometres of cable trays, fixings, and what appear to be millions of plastic (sometimes sharp) cable ties.

Scaffolding and ladder mounted inside the precision muon spectrometer. IMAGE: Steven Goldfarb, ATLAS Experiment © 2014 CERN.

The real beauty lies not in the parts themselves, but rather in the magnificent stories of international cooperation and collaboration that they tell. The cable tie that scratched my wrist secures a cable that was installed by an Iranian student from a Canadian university. Its purpose is to carry data from electronics designed in Germany, attached to a detector built in the USA and installed by a Russian technician.  On the other end, a Japanese readout system brings the data to a trigger designed in Australia, following the plans of a Moroccan scientist. The filtered data is processed by software written in Sweden following the plans of a French physicist at a Dutch laboratory, and then distributed by grid middleware designed by a Brazilian student at CERN. This allows the data to be analyzed by a Chinese physicist in Argentina working in a group chaired by an Israeli researcher and overseen by a British coordinator.  And what about the cable tie?  No idea, but that doesn’t take away from its beauty.

There are 178 institutions from 38 different countries participating in the ATLAS Experiment, which is only the beginning.  When one considers the international make-up of each of the institutions, it would be safe to claim that well over 100 countries from all corners of the globe are represented in the collaboration.  While this rich diversity is a wonderful story, the real beauty lies in the commonality.

All of the scientists, with their diverse social, cultural and linguistic backgrounds, share a common goal: a commitment to the success of the experiment. The plastic cable tie might scratch, but it is tight and well placed; its cable is held correctly and the data are delivered, as expected. This enormous, complex enterprise works because the researchers who built it are driven by the essential nature of the mission: to improve our understanding of the world we live in. We share a common dedication to the future, we know it depends on research like this, and we are thrilled to be a part of it.

ATLAS Collaboration members in discussion. What discoveries are in store this year?  IMAGE: Claudia Marcelloni, ATLAS Experiment © 2008 CERN.

This spring, the LHC will restart at an energy level higher than any accelerator has ever achieved before. This will allow the researchers from ATLAS, as well as the thousands of other physicists from partner experiments sharing the accelerator, to explore the fundamental components of our universe in more detail than ever before. These scientists share a common dream of discovery that will manifest itself in the excitement of the coming months. Whether or not that discovery comes this year or some time in the future, Sector 13 of the ATLAS detector reflects all the beauty of that dream.

 Steven Goldfarb is a physicist from the University of Michigan working on the ATLAS Experiment at CERN. He currently serves as the Outreach & Education Coordinator, a member of the ATLAS Muon Project, and an active host for ATLAS Virtual Visits. Send a note to info@atlas-live.ch and he will happily host a visit from your school.

### ZapperZ - Physics and Physicists

Superstrings For Dummies
Here's another educational video by Don Lincoln out of Fermilab. This time, it is on the basic idea (and the emphasis here is on BASIC) of String/Superstrings.

Zz.

### Jon Butterworth - Life and Physics

Prepare yourself for the restart

As the preparations for the higher-energy restart of the LHC continue, it is good to see that this Horizon “Hunt for the Higgs” (with Jim Al-Khalili, Jim Gates, Adam Davison, me, and others) is available again on BBC iPlayer for a while. I recommend it as good preparation/revision. As is Smashing Physics, of course. Neither BBC iPlayer nor the book is available in the US or Canada, sadly, but don’t despair: the book is out on 27th Jan as Most Wanted Particle (see for example here).


## January 13, 2015

### Lubos Motl - string vacua and pheno

A model that agrees with tau-mu Higgs decays and 2 other anomalies
...and its incomplete divine stringy incarnation...

I originally missed a hep-ph preprint almost a week ago,
Explaining $$h\to \mu^\pm \tau^\mp$$, $$B\to K^*\mu^+\mu^-$$, and $$B\to K\mu^+\mu^-/B\to Ke^+e^-$$ in a two-Higgs-doublet model with gauged $$L_\mu-L_\tau$$
by Crivellin, D'Ambrosio, and Heeck, probably because it had such a repulsively boring title. By the way, do you agree with the hype saying that the new MathJax 2.5 beta is loading 30-40 percent faster than MathJax 2.4 that was used on this blog up to yesterday morning?

The title of the preprint is uninspiring even though it contains all the good stuff. Less is sometimes more. At any rate, CMS recently reported a 2.4-sigma excess in the search for the flavor-violating decays of the Higgs boson, $$h\to \mu^\pm \tau^\mp$$. A muon plus an antitau; or an antimuon plus a tau. Bizarre. The 2.4-sigma excess corresponds to the claim that about 1% of the Higgs bosons decay in this weird way! Correct me if I am wrong, but I think this excess has so far only been discussed in the comment section of this blog; I was very excited about it back in July.

Aside from this flavor-violating hint, the LHCb experiment has reported several anomalies and the two most famous ones may be explained by the model promoted by this paper. One of them was discussed on TRF repeatedly:

The $$B$$-mesons may decay to $$K$$-mesons plus a charged lepton pair, $$\ell^+\ell^-$$, and the processes with $$\ell=e$$ and $$\ell=\mu$$ should be almost equally frequent according to the Standard Model but LHCb seems to see a difference between the electron-producing and muon-producing processes. The significance of the signal is 2.6 sigma.

The final, third deviation is seen by LHCb, too. The rate of the $$B$$ decay to an off-shell $$K^*$$ along with the muon-antimuon pair, $$\mu^+\mu^-$$, seems to deviate from the Standard Model by 2-3 sigma, too.

Each of these three anomalies is significant approximately at the 2.5-sigma level and they seem to have something in common. The second generation – muons – is treated a bit differently. It doesn't seem to be just another copy of the first generation (or the third generation).

The model by the CERN-Naples-Brussels team claims to be compatible with all three of these anomalies. Within this model, the three anomalies are no longer independent of each other – which may strengthen your belief that they are not just flukes that will go away.

If you were willing to oversimplify just a little bit, you could argue that these three anomalies are showing "almost the same thing" so you may add these excesses in the Pythagorean way. And $$\sqrt{3}\times 2.5 \approx 4.3$$. With this optimistic interpretation, we may be approaching a 5-sigma excess. ;-)
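The quoted quadrature sum takes two lines to reproduce; the combination is only legitimate for independent Gaussian significances probing the same effect, which is exactly the oversimplification being flagged above.

```python
import math

# The three anomalies quoted above, in standard deviations:
# h -> mu tau, the B -> K l+l- electron/muon ratio, and B -> K* mu mu
sigmas = [2.4, 2.6, 2.5]

# "Pythagorean" (quadrature) combination -- valid only under the
# optimistic assumption that the three measure the same effect independently
combined = math.sqrt(sum(s * s for s in sigmas))
print(round(combined, 2))  # -> 4.33
```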

These three physicists construct a model. It is a two-Higgs-doublet model (2HDM). The number of Higgs doublets is doubled relative to the Standard Model – to yield the spectrum we know from minimal SUSY. But 2HDM is meant to be a more general model of the Higgs sector, a model ignoring the constraints on the parameters that are implied by supersymmetry. (But it is also a more special model because it ignores or decouples all the other superpartners.)

And there's one special new feature that they need before they explain the anomalies. Normally, the lepton number $$L$$ – and especially the three generation-specific lepton numbers $$L_e,L_\mu,L_\tau$$ – are (approximate?) global symmetries. But these three folks promote one particular combination, namely the difference $$L_\mu-L_\tau$$, to a gauge symmetry – one that is spontaneously broken by a scalar field.

This gauging of the symmetry adds a new spin-one boson, $$Z'$$, which has some mass, and right-handed neutrinos acquire some Majorana masses because of that, too. These new elementary particles and interactions also influence the processes such as the decays of the Higgs bosons and $$B$$-mesons – those we encountered in the anomalies.

What I find particularly attractive is that the gauging of $$L_\mu-L_\tau$$ may support an old crazy $$E_8$$ idea of mine. It is a well-known fact that the adjoint (in this case also fundamental) representation $${\bf 248}$$ of the exceptional Lie group $$E_8$$ decomposes under the maximal $$E_6\times SU(3)$$ subgroup as $${\bf 248} = ({\bf 78},{\bf 1}) + ({\bf 1},{\bf 8}) + ({\bf 27},{\bf 3}) + ({\bf \bar{27}},{\bf \bar{3}}).$$ It is the direct sum of the adjoint representations of the subgroup's factors; and of the tensor product of the fundamental representations (plus the complex conjugate representation: note that $$E_6$$ is the only simple exceptional Lie group that has complex representations).
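A quick sanity check on the dimensions of that decomposition – nothing deeper than arithmetic:

```python
# Dimensions of the pieces of 248 under E6 x SU(3):
# (78,1) + (1,8) + (27,3) + (27bar,3bar)
pieces = {
    "(78,1)": 78 * 1,        # adjoint of E6
    "(1,8)": 1 * 8,          # adjoint of SU(3)
    "(27,3)": 27 * 3,        # three generations' worth of 27s
    "(27bar,3bar)": 27 * 3,  # and their conjugates
}
total = sum(pieces.values())
print(total)  # -> 248, the dimension of E8
```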

If you use $$E_6$$ or its subgroup as a grand unified group, the representation $${\bf 27}$$ produces one generation of quarks and leptons. It works but what is very cool is that the decomposition of the representation of $$E_8$$ seems to automatically produce three copies of the representation $${\bf 27}$$.

It almost looks as if the $$E_8$$ group were predicting three generations. The three generations may be complex-rotated by the $$SU(3)_g$$ group, the centralizer of the grand unified group $$E_6$$ within the $$E_8$$ group. Isn't it cool? I added the $$g$$ subscript for "generational".

A problem with this cute story is that the most natural stringy reincarnation of this $$E_8$$ picture, the $$E_8\times E_8$$ heterotic string theory (or its strongly coupled limit, the Hořava-Witten heterotic M-theory) doesn't normally support this way of counting the generations. Recall that in 1985, this became the first realistic embedding of the Standard Model (and SUSY and grand unification, not to mention gravity) within string theory. But the number of generations is usually written as $$N_g=|\chi|/2$$, one-half of the Euler characteristic of the Calabi-Yau manifold. The latter constant may be anything. All traces of the special role of $$3$$ are eliminated, and so on. A related defect is that the rest of the $$E_8$$ group outside $$E_6$$ is broken "by the compactification" which is a "stringy effect" so no four-dimensional effective field theory description ever sees the other $$E_8$$ gauge bosons – except for the GUT $$E_6$$ gauge bosons.

But from a different perspective, there could still be something special about the three generations – due to some effective, approximate, or local restoration of the whole $$E_8$$ symmetry. The simplest heterotic compactifications identify the field strength in the $$SU(3)_E$$ part of the gauge group – a subgroup of $$E_8$$ – with the field strength in the gravitational $$SU(3)_{CY}$$ holonomy – this $$SU(3)_{CY}$$ is a subgroup of $$SO(6)$$ rotating the six Calabi-Yau dimensions.

The grand unified group is only an $$E_6$$ or smaller because it's the centralizer of $$SU(3)_g$$ within $$E_8$$. And I had to take the centralizer of $$SU(3)_g$$ because those are the components of the field strength that break the gauge group in $$d=10$$ spacetime dimensions. Perhaps we should think that this field strength – or some of its components – is "small" in magnitude, so that one generator of this $$SU(3)_g$$ is "much less broken" than the others; and $$L_\mu-L_\tau$$ is indeed one generator of $$SU(3)_g$$ if $$(e,\mu,\tau)$$ are interpreted as the fundamental triplet of $$SU(3)_g$$.

If the relevant component of the field strength may be considered "small" in this sense, it could be possible to organize the fermionic spectrum into part of an $$E_8$$ multiplet. And one should find some field-theoretical $$Z'$$ boson responsible for the spontaneous breaking of this generator of the generational $$SU(3)_g$$.

As you can see, if the heterotic models may be formulated in a slightly special, unorthodox, outside-the-box way (and yes, it's a somewhat big "if"), one may have a natural stringy model that achieves "more than grand" unification, explains why there are three generations of fermions, and accounts for three so far weak anomalies observed by CMS and LHCb (which will dramatically strengthen in a few months if they are real).

Hat tip: Tommaso Dorigo

### Symmetrybreaking - Fermilab/SLAC

Dark horse of the dark matter hunt

Dark matter might be made up of a type of particle not many scientists are looking for: the axion.

Dark matter, the substance making up 85 percent of all the mass in the universe, is invisible. The goal of ADMX is to detect it by turning it into photons, particles of light. Dark matter was forged in the early universe, under conditions of extreme heat. ADMX, on the other hand, operates in extreme cold. Dark matter comprises most of the mass of a galaxy. To find it, ADMX will use sophisticated devices microscopic in size.

Scientists on ADMX—short for the Axion Dark Matter eXperiment—are searching for hypothetical particles called axions. The axion is a dark matter candidate that is also a bit of a dark horse, even as far as this esoteric branch of physics goes.

Unlike most dark matter candidates, axions are very low in mass and interact very weakly with particles of ordinary matter, and so are difficult to detect. However, according to theory, axions can turn into photons, which are much more interactive and easier to detect.

In July 2014, the US Department of Energy picked three dark matter experiments as most promising for continued support, including ADMX. The other two—the Large Underground Xenon (LUX) detector and the Cryogenic Dark Matter Search (CDMS)—are both designed to hunt for another dark matter candidate, weakly interacting massive particles, or WIMPs.

With the upgrade funded by the Department of Energy, the ADMX team has added a liquid-helium-cooled refrigerator to chill its sensitive detectors, known as superconducting quantum interference devices (SQUIDs). The ADMX experiment uses its powerful magnetic field to turn dark matter axions into microwave photons, which a SQUID can detect when operating at the specific frequency corresponding to the axion's mass.

Axions may be as puny as one trillionth of the mass of an electron. Compare that to WIMPs, which are predicted to be hundreds of thousands of times more massive than electrons, making them heavier than protons and neutrons.
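To see why such a tiny mass points at microwave-band detection, one can convert the mass to the frequency of the photon a converting axion would produce, via f = mc²/h. The sketch below is illustrative only – it plugs in the "one trillionth of an electron mass" figure above, not ADMX's actual search band.

```python
# Illustrative only: frequency of the photon from axion-to-photon
# conversion, f = m c^2 / h, for m ~ 1e-12 electron masses.
ELECTRON_REST_ENERGY_EV = 510998.95   # m_e c^2 in eV (CODATA)
PLANCK_EV_S = 4.135667696e-15         # h in eV*s

axion_energy_ev = 1e-12 * ELECTRON_REST_ENERGY_EV  # ~0.5 micro-eV
freq_hz = axion_energy_ev / PLANCK_EV_S
print(f"{freq_hz / 1e6:.0f} MHz")  # ~124 MHz, radio/microwave territory
```

Micro-eV-scale axion masses thus land in the hundreds-of-MHz-to-GHz range, which is why ADMX listens with a microwave cavity and SQUID amplifiers.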

The other two DOE-boosted experiments, CDMS and LUX, have plenty of competition around the world in their search for WIMPs. But ADMX stands nearly alone as a large-scale hunter for axions. Leslie Rosenberg, University of Washington physicist and a leader of the ADMX project, sees this as a call to work quickly before others catch up. “People are getting nervous about WIMP dark matter,” he says. So the pressure is on to “do a definitive experiment, and either detect this [axion] or reject the hypothesis.”

#### The answer to a problem

Axions are hypothetical particles proposed in the late 1970s, originally to fix a problem entirely unrelated to dark matter.

As physicists developed the theory of the strong nuclear force, which binds quarks together inside protons and neutrons, they noticed something wrong. Interactions inside neutrons should have made them electrically asymmetrical, so that they would flip when subjected to an electric field. However, experiments show no such thing, so something must have been missing in the theory.

“If you could just impose the symmetry, maybe that would be an answer, but you cannot,” says retired Stanford University physicist Helen Quinn. Instead, in 1977 she and Roberto Peccei, who was also at Stanford at that time, proposed a simple modification to the mathematics describing the strong force. The Peccei-Quinn model, as it is now known, removed the neutron asymmetry and, in the process, predicted a new particle: the axion.

Axions are appealing from a conceptual point of view, Rosenberg says. “I learned about axions when I was a graduate student, and it really hit a resonance with me then. Stuff that wasn't making sense suddenly made sense because of the axion.”

#### A dark matter candidate

Unlike the Higgs boson, axions lie outside the Standard Model of particle physics and are not governed by the same forces. If they exist, axions are transparent to light, don’t interact directly with ordinary matter except in very tenuous ways, and could have been produced in sufficient amounts in the early universe to make up the 85 percent of mass we call dark matter.

“Provided axions exist, they're almost certain to be some fraction of dark matter,” says Oxford University theoretical physicist Joseph Conlon.

“Axions are an explanation that fits in with everything we know about physics and all the ideas of how you might extend physics,” he says. “I think axions are one particle that almost all particle theorists would probably bet rather large amounts of money on that they do exist, even if they are very, very hard to detect.”

Even if, like Conlon, we’re willing to wager that axions exist, it’s another matter to say they exist in such quantities and at the proper mass range to show up in our detectors.

Rosenberg trusts that ADMX will work, and after that, it’s up to nature to reveal its hand: “What I can say is we'll likely have an experiment that at least over a broad mass range will either detect this axion or reject the hypothesis at high confidence.”

Finding any axion detection would be a vindication of the theory developed by Quinn, Peccei and others. Finding many axions could finally solve the dark matter problem and would make this dark horse particle a champion.


### Matt Strassler - Of Particular Significance

Giving Public Talk Jan. 20th in Cambridge, MA

Hope all of you had a good holiday and a good start to the New Year!

I myself continue to be extraordinarily busy as we move into 2015, but I am glad to say that some of that activity involves communicating science to the public.  In fact, a week from today I will be giving a public talk — really a short talk and a longer question/answer period — in Cambridge, just outside of Boston and not far from MIT. This event is a part of the monthly “CafeSci” series, which is affiliated with the famous NOVA science television programs produced for decades by public TV/Radio station WGBH in Boston.

Note for those of you who have gone to CafeSci events before: it will be in a new venue, not far from Kendall Square. Here’s the announcement:

Tuesday, January 20th at 7pm (about 1 hour long)
Le Laboratoire Cambridge (NEW LOCATION)
http://www.lelaboratoirecambridge.com/
650 East Kendall St, Cambridge, MA