# Particle Physics Planet

## March 10, 2014

### ZapperZ - Physics and Physicists

PhysicsWorld Special Edition On Physics Education
The March 2014 issue of Physics World focuses on physics education. It can be downloaded for free (with registration). The blurb on this says:

In the March 2014 issue of Physics World – a PDF copy of which you can download free of charge – we offer a snapshot of just some of the many innovative ideas that exist for learning and teaching physics. It’s not an exhaustive selection, but includes topics that we felt were interesting or novel.
Zz.

### Emily Lakdawalla - The Planetary Society Blog

The Very Large Telescope sights Rosetta's comet target, sees activity beginning
Rosetta's comet target, 67P/Churyumov-Gerasimenko, has emerged from behind the Sun as seen from Earth, and the Very Large Telescope has photographed it. The new images show that cometary activity has already begun as Rosetta approaches for its August rendezvous.

### The Great Beyond - Nature blog

Call for acid-bath stem-cell paper to be retracted

Less than 40 days after a team led by Haruko Obokata of the RIKEN Center for Developmental Biology in Kobe, Japan, presented two stunning papers claiming that a simple acid-bath method can reprogramme mature mammalian cells back to an embryonic state – so-called STAP cells – researchers in Japan, including one of the papers’ co-authors, are calling for them to be retracted.

Within weeks of their January 30 publication, the papers were criticized for irregularities and apparently duplicated images. Numerous scientists also had difficulty reproducing the supposedly simple method. The team responded with the promise of corrections and a list of tips to help other scientists to reproduce the results.

Over the weekend, however, two more serious problems surfaced. The Nature paper was found to contain two images apparently duplicated from Obokata’s doctoral dissertation. Her thesis, moreover, reported experiments on cells that were supposedly in an embryonic state, but those cells had been derived by a different process, in an altogether different experiment, from the cells described in the Nature paper.

The revelation has led to a flurry of calls – including some from senior scientists in Japan – for the paper to be retracted.

Perhaps the most damning comes from Teruhiko Wakayama, a cloning expert at Yamanashi University and a corresponding author on one of the papers. Interviewed by NHK news, Wakayama said: “I have lost faith in the paper. Overall there are now just too many uncertainties about it. I think we have to wait for some confirmation.” Wakayama calls for an investigation of all the laboratory notebooks and data. He continues: “To check the legitimacy of the paper, we should retract it, prepare proper data and images, and then use those to demonstrate, with confidence, that the paper is correct.” Wakayama reportedly contacted all of the authors requesting that they agree to retract the paper. RIKEN says it is still investigating the case.

### Andrew Jaffe - Leaves on the Line

Around Asia in search of a meal

I’m recently back from my mammoth trip through Asia (though in fact I’m up in Edinburgh as I write this, visiting as a fellow of the Higgs Centre For Theoretical Physics).

I’ve already written a little about the middle week of my voyage, observing at the James Clerk Maxwell Telescope, and I hope to get back to that soon — at least to post some pictures of and from Mauna Kea. But even more than telescopes, or mountains, or spectacular vistas, I seem to have spent much of the trip thinking about and eating food. (Even at the telescope, food was important — and the chefs at Hale Pohaku do some amazing things for us sleep-deprived astronomers, though I was too tired to record them except as a vague memory.) But down at sea level, I ate some amazing meals.

When I first arrived in Taipei, my old colleague Proty Wu picked me up at the airport, and took me to meet my fellow speakers and other Taiwanese astronomers at the amazing Din Tai Fung, a world-famous chain of dumpling restaurants. (There are branches in North America but alas none in the UK.) As a scientist, I particularly appreciated the clean room they use to prepare the dumplings to their exacting standards:

Later in the week, a few of us went to a branch of another famous Taipei-based chain, Shin Yeh, for a somewhat traditional Taiwanese restaurant meal. It was amazing, and I wish I could remember some of the specifics. Alas, I’ve only recorded the aftermath:

From Taipei, I was off to Hawaii. Before and after my observing trip, I spent a few days in Honolulu, where I managed to find a nice plate of sushi at Doraku — good, but not too much better than I’ve had in London or New York, despite the proximity to Japan.

From Hawaii, I had to fly back for a transfer in Taipei, where I was happy to find plenty more dumplings (as well as pleasantly sweet Taiwanese pineapple cake). Certainly some of the best airport food I’ve had (for the record, my other favourites are sausages in Munich, and sushi at the Ebisu counter in San Francisco):

From there, my last stop was 40 hours in Beijing. Much more to say about that visit, but the culinary part of the trip had a couple of highlights. After a morning spent wandering around the Forbidden City (aka the Palace Museum), I was getting tired and hungry. I tried to find Tian Di Yi Jia, supposedly “An Incredible Imperial-Style Restaurant”. Alas, some combination of not having a website, not having Roman-lettered signs, and the likelihood that it had closed down meant that an hour’s wandering of Beijing’s streets was in vain. Instead, I ended up at a hole in the wall, and was very happy indeed, in particular with the amazing slithery, tangy eggplant. That night, I ended up at The Grandma’s, an outpost of yet another chain, seemingly a different chain from Grandma’s Kitchen, which apparently serves American food. Definitely not American food:

It was a very tasty trip. I think there was science, too.

### Peter Coles - In the Dark

That Fishy Saying of Einstein…

There are two interesting things about the above Einstein meme that has been doing the rounds. The first is that there’s absolutely no evidence that I can find that Albert Einstein ever said the words attributed to him; that’s also true for the vast majority of Einstein quotes, in fact.

The other interesting thing (and I risk being labelled a pedant here) is that there are species of fish, such as the Mangrove Rivulus, that really are able to climb trees…

### Lubos Motl - string vacua and pheno

Three interesting hep-th papers
First, a comment about the phenomenological hep-ph archive. Three new "primarily hep-ph" papers among twelve, namely the papers #5, #6, #8, are talking about the $$3.5\,\mathrm{keV}$$ X-ray line that Jester described as a possible dark matter signal. Jester would talk about "sterile neutrinos" but the three new papers try to identify the dark matter particle with a radiative neutrino; decaying moduli; and axinos. If you're intrigued by the $$3.5\,\mathrm{keV}$$ line, maybe you should bookmark the list of followups to the empirical paper by Bulbul et al.

Off-topic: a new colleague of Bill O'Reilly was hired by Rupert Murdoch. His name is Barack Obama and in his first job, he introduces the new "Cosmos" hosted by Neil deGrasse Tyson 34 years after it was done by Carl Sagan. Incidentally, Obama is likely to name his law school classmate Andrew Schapiro as the new ambassador to Czechia.

Now, hep-th, theory.

Michael Douglas and two Stony Brook and/or partially Bonn collaborators talk about 8-dimensional F-theory vacua and the fate of vector multiplets in them. Recall that F-theory has formally 12 spacetime dimensions but two of them are infinitesimal and must be compactified on a 2-torus. That allows one to compactify F-theory on a K3 (which has 4 real dimensions), leaving 7+1 large dimensions, as long as the K3 has toroidal (elliptic) fibers.

The base of such an elliptically fibered K3 manifold is a sphere $$S^2$$ or, as we call it in complex geometry, $${\mathbb P}^1$$, the 1-dimensional projective space (a complex one, so we mean $$\mathbb{CP}^1$$). On this sphere, there are at most (if you maximally separate them) 24 singular places – together with the extra 7+1 large dimensions, these loci are the places where 24 $$(p,q)$$ sevenbranes live, and you could expect 24 vector multiplets. However, four of them are effectively "eaten" by some tensor multiplets, as they show in detail, in a mechanism known as the Cremmer-Scherk (CS: not to be confused with Chern-Simons or Czecho-Slovak or Computer-Science) mechanism.

Hans Peter Nilles and Patrick Vaudrevange of Bonn and Garching/Munich discuss string phenomenology – how to use string theory to describe the real world of particle physics – in a way that almost looks like there aren't thousands of papers about it. They are picking the most elegant models respecting "five golden rules" of unification with good taste: an $$SO(10)$$ spinor for the SM fermionic matter; incomplete GUT multiplets for the Higgs pair; repetition of families deduced from the compactification manifold's properties; $$\mathcal{N}=1$$ SUSY; and R-parity plus discrete symmetries.

They conclude that models respecting these "five golden rules" of good behavior arise in more or less all the major phenomenological frameworks, that such models are numerous, and that a lot of work remains to be done to pick the right one, etc.

Alexander Vilenkin and Jun Zhang of Tufts provide us with some evidence that "bounces replacing big crunches" don't really solve anything. Recall that a cosmology with a negative cosmological constant typically wants to end up with a collapse and a "Big Crunch", the time-reversed mirror image of the Big Bang (not quite because the entropy never decreases etc.).

The Cosmos shrinks to (near) zero size near the Big Crunch and it's ugly and deadly, so people have argued that some Planckian effects imply that the world isn't really over during the Big Crunch. Instead, it bounces back and expands again, and we could have had such "bounces" in our own history. However, the authors determine that this scenario still leaves the cosmic history "past-incomplete" – something is missing from the spacetime if you extrapolate geodesics into the past – whenever it was past-incomplete in the model without bounces.

It's probably not a hard observation but it surely agrees with what I think about the usefulness (more precisely, uselessness) of the bounces and cyclic cosmologies. You may jump and bounce on a trampoline, for example, but your energy ultimately dissipates and your jumping comes to a halt. You could have stopped jumping right away. So something is inevitably deteriorating about the bounces and "cycles" – they shrink either in the future or in the past. The "balanced" intermediate scenario is as unstable as Einstein's static Universe. So by these bounces, you may at most delay some problems but you don't solve them. I personally tend to think that if everything in the Universe is squeezed and crushed into near-Planckian curvatures etc., the time is really over and the continuation of the "same spacetime" is unphysical from a sufficiently positivist, observation-rooted perspective.

### Emily Lakdawalla - The Planetary Society Blog

The new Cosmos: My review is coming; what did you think?
As I write this, much of the U.S. has seen the debut of the new Cosmos, except for those of us on the left coast. I'm not going to watch it live; I want to watch it with my children, and 9:00 is past their bedtime. So I will watch it with them tomorrow. In the meantime, I want to know what you thought about it! Did you watch?

### Jester - Resonaances

Weekend Plot: all of dark matter
To put my recent posts into a bigger perspective, here's a graph summarizing all of dark matter particles discovered so far via direct or indirect detection:

The graph shows the number of years the signal has survived vs. the inferred mass of the dark matter particle. The particle names follow the usual Particle Data Group conventions. The label's size is related to the statistical significance of the signal. The colors correspond to the Bayesian likelihood that the signal originates from dark matter, from uncertain (red) to very unlikely (blue). The masses of the discovered particles span an impressive 11 orders of magnitude, although the largest concentration is near the weak scale (this is called the WIMP miracle). If I forgot any particle for which compelling evidence exists, let me know, and I will add it to the graph.

Here are the original references for the Bulbulon, Boehmot, Collaron, CDMeson, Daemon, Cresston, Hooperon, Wenigon, Pamelon, and the mother of Bert and Ernie.

## March 09, 2014

### Christian P. Robert - xi'an's og

shrinkage-thresholding MALA for Bayesian variable selection

Amandine Shreck, along with her co-authors Gersende Fort, Sylvain LeCorff, and Eric Moulines, all from Telecom Paristech, has undertaken to revisit the problem of large p, small n variable selection. The approach they advocate mixes Langevin algorithms with trans-model moves based on shrinkage thresholding. The corresponding Markov sampler is shown to be geometrically ergodic, which may be a première in that area. (The paper was arXived in December but I only read it on my flight to Calgary, not overly distracted by the frozen plains of Manitoba and Saskatchewan, nor by my neighbour watching Hunger Games II.)

A shrinkage-thresholding operator is defined as acting on the regressor matrix towards producing sparse versions of this matrix. (I actually had trouble picturing the model until Section 2.2, where the authors define the multivariate regression model, making the regressors a matrix indeed. With a rather unrealistic iid Gaussian noise. And with an unknown number of relevant rows, hence a varying dimension model. Note that this is a strange regression in that the regression coefficients are known and constant across all models.) Because the Langevin algorithm requires a gradient to operate, the log target is divided into a differentiable part and a non-differentiable part, the latter accommodating the Dirac masses in the dominating measure. The new MALA moves involve applying the above shrinkage-thresholding operator to a regular Langevin proposal, hence moving to sub-spaces and sparser representations.
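To make the shape of such a move concrete, here is a minimal sketch in Python of a soft-thresholded Langevin proposal. This is my own illustration rather than the authors' STMALA; the names (`grad_log_smooth`, step size `h`, threshold `lam`) are placeholders.

```python
import numpy as np

def soft_threshold(x, lam):
    """Shrinkage-thresholding operator: coordinates within lam of zero are set exactly to zero."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def stmala_proposal(theta, grad_log_smooth, h, lam, rng):
    """One shrinkage-thresholded MALA proposal (illustrative sketch only).

    theta           -- current state as a 1-D array (possibly already sparse)
    grad_log_smooth -- gradient of the differentiable part of the log target
    h               -- Langevin step size
    lam             -- threshold controlling how aggressively coordinates are zeroed
    """
    drift = theta + 0.5 * h * grad_log_smooth(theta)        # standard MALA drift
    noise = np.sqrt(h) * rng.standard_normal(theta.shape)   # Gaussian perturbation
    return soft_threshold(drift + noise, lam)                # project to a sparser point

# toy usage with a ridge-like smooth part: many coordinates end up exactly zero
rng = np.random.default_rng(0)
theta = rng.standard_normal(20)
proposal = stmala_proposal(theta, lambda t: -t, h=0.1, lam=0.2, rng=rng)
print(np.count_nonzero(proposal), "nonzero coordinates out of", theta.size)
```

A full sampler would of course wrap this proposal in a Metropolis-Hastings acceptance step using the corresponding (non-symmetric) proposal density.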

The thresholding functions are based on positive part operators, which means that the Markov chain does not visit some neighbourhoods of zero in the embedding and in the sparser spaces. In other words, the proposal operates between models of varying dimensions without further ado because the point null hypotheses are replaced with those neighbourhoods. Hence it is not exactly simulating from the “original” posterior, which may be a minor caveat or not. Not if defining the neighbourhoods is driven by an informed or at least spelled-out choice of a neighbourhood of zero where the coefficients are essentially identified with zero. The difficulty is then in defining how close is close enough. Especially since the thresholding functions seem to all depend on a single number which does not depend on the regressor matrix. It would be interesting to see if the g-prior version could be developed as well… Actually, I would have also included a dose of g-prior in the Langevin move, rather than using an homogeneous normal noise.

The paper contains a large experimental part where the performances of the method are evaluated on various simulated datasets. It includes a comparison with reversible jump MCMC, which slightly puzzles me: (a) I cannot see from the paper whether or not the RJMCMC is applied to the modified (thresholded) posterior, as a regular RJMCMC would not aim at the same target, but the appendix does not indicate a change of target; (b) the mean error criterion for which STMALA does better than RJMCMC is not defined, but the decrease of this criterion along iterations seems to indicate that convergence has not yet occurred, since it does not completely level off after 3×10⁵ iterations.

I must have mentioned it in another earlier post, but I find it somewhat ironic to see those thresholding functions making a comeback after seeing the James-Stein and smooth shrinkage estimators take over the then so-called pre-test versions in the 1970s (Judge and Bock, 1978) and 1980s. There are obvious reasons for this return, moving away from quadratic loss being one.

Filed under: Statistics, University life Tagged: Bayesian variable selection, ergodicity, Langevin MCMC algorithm, RJMCMC, spike-and-slab prior, variable dimension models

### Peter Coles - In the Dark

A Spring Physics Problem

It’s been a while since I posted anything in the Cute Problems  category, so since Spring is in the air I thought I’d post a physics problem which involves springing into the air…

Two identical fleas, each of which has mass m, sit at opposite ends of a straight uniform rigid hair of mass M, which is lying flat and at rest on a smooth frictionless table. If the two fleas make simultaneous jumps with the same speed and angle of take-off relative to the hair as they view it, under what circumstances can they change ends in one jump without colliding in mid air?

UPDATE Monday 10th March: No complete answers yet, so let’s try this slightly easier version:

Two identical fleas, each of which has mass m, sit at opposite ends of a straight uniform rigid hair of mass M, which is lying flat and at rest on a smooth frictionless table. Show that, by making simultaneous jumps with the same speed and angle of take-off relative to the hair as they view it, the two fleas can change ends without colliding in mid-air as long as 6m>M.
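For what it is worth, here is one possible route to the stated bound (my own sketch, not necessarily the intended solution): let each flea jump with a small sideways component, the two in opposite senses, so that the hair spins about its centre, which stays at rest, while the fleas are in the air.

```latex
% Hair of mass M, length L, centre at the origin; fleas of mass m at (\pm L/2, 0).
% Each flea's take-off velocity relative to the hair has horizontal magnitude w,
% rotated by an angle \phi out of the hair's axis (opposite senses for the two
% fleas), plus a vertical component giving a time of flight T.
% The two horizontal kicks are equal and opposite, so the hair's centre stays put,
% while the hair (moment of inertia M L^2 / 12) acquires angular speed
\omega = \frac{12\, m\, w \sin\phi}{M L}, \qquad \beta \equiv \omega T .
% Flea 1, launched from (-L/2, 0), must land on the end that started at (+L/2, 0)
% and has rotated to ((L/2)\cos\beta,\ (L/2)\sin\beta):
w T \cos\phi = \tfrac{L}{2}\,(1+\cos\beta), \qquad
w T \sin\phi = \tfrac{L}{2}\,\sin\beta .
% Substituting w T \sin\phi = \beta M L / (12 m) from the definition of \beta gives
\frac{\sin\beta}{\beta} = \frac{M}{6m},
% which has a solution with 0 < \beta < \pi precisely when 6m > M. The two landing
% conditions then fix \phi = \beta/2 and w T = L\cos(\beta/2).
```

By the point symmetry of the jumps the second flea automatically lands on the other end, and since the two trajectories stay on opposite sides of the centre they never intersect, so there is no mid-air collision.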

### Peter Coles - In the Dark

Spring Song, Meirionydd

Spring Song, Meirionydd
A white combustion rules these fields,
and testifies to men, and rams;
the mind of winter thaws, and yields–
Great God, the world is drunk with lambs.

The high grey stone is clean of snows,
the streams come tumbling, far from dams;
the wind is green, the day’s eye grows–
Great God, the world is drunk with lambs.

The heart, gone light as all the ewes,
redounds with milk, and epigrams
that make no sense; except their news–
Great God, the world is drunk with lambs.

In gold October, grown to size,
they’ll know the hook, and hang with hams,
but March is all their enterprise–
Great God, the world is drunk with lambs.

by John Dressel.

### The n-Category Cafe

Review of the Elements of 2-Categories

Guest post by Dimitri Zaganidis

First of all, I would like to thank Emily for organizing the Kan extension seminar. It is a pleasure to be part of it. I want also to thank my advisor Kathryn Hess and my office mate Martina Rovelli for their revisions.

In the fifth installment of the Kan Extension Seminar we read the paper “Review of the Elements of 2-categories” by G. M. Kelly and Ross Street. This article was published in the Proceedings of the Sydney Category Theory Seminar, and its purpose is to “serve as a common introduction to the authors’ paper in this volume”.

The article has three main parts, the first of them being definitions in elementary terms of double categories and 2-categories, together with the notion of pasting. In a second chapter, they review adjunctions in 2-categories with a nice expression of the naturality of the bijection given by mates using double categories. The last part of the article introduces monads in 2-categories, specializing to 2-monads towards the end.

### Double categories and 2-categories

The article starts with the definition of a double category as a category object in the (not locally small) category of categories $\mathbf{CAT}$. (I think that there might be some set theoretic issues with such a category, but you can add small everywhere if you want to stay safe.)

The authors then switch to a description of such an object in terms of objects, horizontal arrows, vertical arrows, and squares, with various compositions and units. I will explain a bit how to go from one description to the other.

A category object is constituted of a category of objects, a category of morphisms, target and source functors, identity functor and a composition.

The category of objects is the category whose objects are “the objects” and whose morphisms are the vertical arrows. The category of morphisms is the category whose objects are the horizontal morphisms and whose morphisms are the squares, with vertical composition.

Since the functors $\mathrm{Obj}, \mathrm{Mor}: \mathbf{CAT} \longrightarrow \mathbf{SET}$ preserve pullbacks, by applying them to a double category seen as a category object, we get actual categories. Applying $\mathrm{Obj}$ to the double category, we get the category whose objects are “the objects” and whose morphisms are the horizontal arrows. Applying $\mathrm{Mor}$, we get the category whose objects are the vertical morphisms and whose morphisms are the squares, but this time with horizontal composition.

An interesting thing to notice is that the symmetry of the explicit description of a double category is much more apparent than the symmetry of its description as a category object.

One can define a $2$-category as a double category with a discrete category of objects, or as a $\mathbf{CAT}$-enriched category, exactly as one can define a simplicially enriched small category as either a category enriched over $\mathbf{sSet}$ or as a category object in $\mathbf{sSet}$ with a discrete simplicial set of objects.

The second viewpoint on 2-categories leads to definitions of 2-functors and 2-natural transformations and also to modifications, once one makes clear what enrichment a category of 2-functors inherits.

It is also worth mentioning that the pasting operation makes computations easier, because they are more visual. The proof of proposition 2.1 of this paper is a good illustration of this.

The basic example of a 2-category is $\mathbf{CAT}$ itself, with natural transformations as 2-cells (squares).

As category theory describes set-like constructions, 2-category theory describes category-like constructions. You can usually build up categories whose objects are sets with extra structure. In the same way, small V-categories, V-functors, and V-natural transformations form a 2-category.

My first motivation to learn about 2-categories was the 2-category of quasi-categories defined by Joyal, which has been studied by Emily Riehl and Dominic Verity in the article The 2-category theory of quasi-categories, in particular the category-like constructions one can make with quasi-categories, such as adjunctions and limits.

### Adjunctions and mates in 2-categories

It is not a surprise that 2-categories are the right framework in which to define adjunctions. To build the general definition from the usual one, you just need to replace categories by objects in a 2-category, functors by 1-cells of the 2-category, and natural transformations by its 2-cells.

Adjunctions in a 2-category $\mathcal{C}$ compose (as in $\mathbf{CAT}$), and one can form two, a priori distinct, double categories of adjunctions. Both of them will have the objects of $\mathcal{C}$ as objects and the morphisms of $\mathcal{C}$ as horizontal morphisms, while their vertical morphisms are the adjunctions (going in the same direction as the right adjoint, by convention). The two double categories differ on the squares. Given adjunctions $f \dashv u$ and $f' \dashv u'$ together with 1-cells $a: A \longrightarrow A'$ (between the domains of $u$ and $u'$) and $b: B \longrightarrow B'$ (between the codomains of $u$ and $u'$), the squares of the first double category are 2-cells $bu \Rightarrow u'a$ while the squares of the second are 2-cells $f'b \Rightarrow af$.

Now, the bijective correspondence between these kinds of 2-cells given by mates induces an isomorphism of double categories. This means in particular that the horizontal (or vertical) composite of mates is equal to the mate of the corresponding composite.
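For concreteness (and with the caveat that my whiskering conventions may differ in detail from the paper's), the mate of a square of the first kind can be written explicitly in terms of the units and counits of the two adjunctions:

```latex
% \eta : 1 \Rightarrow u f is the unit of f \dashv u, and
% \epsilon' : f' u' \Rightarrow 1 is the counit of f' \dashv u'.
% The mate of a square \lambda : b u \Rightarrow u' a is then the 2-cell
\bar{\lambda} \;=\; (\epsilon' a f) \circ (f' \lambda f) \circ (f' b \eta)
\;:\; f' b \Longrightarrow a f .
```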

This is a very beautiful way to express the naturality of the mate correspondence, and it provides a one-line proof of the fact that two 1-cells that are left adjoints to the same 1-cell are naturally isomorphic.

2-categories are also the right framework to define monads. A monad in a 2-category $\mathcal{C}$ on an object $B$ is a 1-cell $t: B \longrightarrow B$ together with 2-cells $\mu: t^2 \Rightarrow t$ and $\eta: 1_B \Rightarrow t$, verifying the usual equations $\mu \circ (t\mu) = \mu \circ (\mu t)$ and $\mu \circ (t\eta) = 1_t = \mu \circ (\eta t)$. Since 2-functors preserve both horizontal and vertical compositions, for all objects $X$ of $\mathcal{C}$, $t$ induces a monad on $\mathcal{C}(X,B)$, given by post-composition $(t_\ast, \mu_\ast, \eta_\ast)$. The authors call *an action of $t$ on $s: X \longrightarrow B$* a $t_\ast$-algebra structure on $s$.

In Ross Street’s original paper, a monad morphism $(B,t,\mu,\eta) \longrightarrow (B',t',\mu',\eta')$ is a 1-cell $f: B \longrightarrow B'$ together with a 2-cell $\phi: t'f \Rightarrow ft$ verifying certain conditions.

In this paper, morphisms of monads are defined only for monads on the same object, letting the 1-cell part of a monad transformation of the previous article be the identity. This leads the authors to reverse the direction of the morphism, since the 2-cell seems to go in the reverse direction of the 1-cell!

One might think that fixing $f=1$ is needed by the result which explains that there is a bijection between monad morphisms $t \Rightarrow t'$ and actions of $t$ on $t'$ making $t'$ a “$(t,t')$-bimodule”. In fact, in the case where $f$ is not necessarily the identity, there is a bijection between 2-cells $\phi: tf \Rightarrow ft'$ such that $(f,\phi)$ is a monad functor and actions of $t$ on $ft'$ making $ft'$ a “$(t,t')$-bimodule”. A statement of the same kind can also be made for monad functor transformations (in the sense of the formal theory of monads). A 2-cell $\sigma: f \Rightarrow f'$ is a monad functor transformation $(f,\phi) \longrightarrow (f',\phi')$ if and only if $\sigma t': ft' \Rightarrow f't'$ is a morphism of “$(t,t')$-bimodules”.

A 2-category admits the construction of algebras if for every monad $(B,t,\mu,\eta)$, the 2-functor $X \mapsto \mathcal{C}(X,B)^{(t_\ast,\mu_\ast,\eta_\ast)}$ is representable. The representing object is called the object of $t$-algebras. By Yoneda, the free-forgetful adjunction can be made internal in this case.

The terminology is justified, because in the 2-category $\mathbf{CAT}$, it specializes to the usual notions of the category of $t$-algebras and the corresponding free-forgetful adjunction.

A monad in $\mathcal{C}$ is the same as a 2-functor $\mathbf{Mnd} \longrightarrow \mathcal{C}$, where $\mathbf{Mnd}$ is the 2-category with one object and $\Delta_+$, the algebraist’s simplicial category, as monoidal hom-category (with ordinal sum). Since, moreover, $$\mathcal{C}(X,B)^{(t_\ast,\mu_\ast,\eta_\ast)} \cong [\mathbf{Mnd},\mathbf{CAT}]\left(\Delta_{+\infty}, \mathcal{C}(X,-)\right),$$ (where $\Delta_{+\infty}$ is the subcategory of maps of $\Delta$ preserving maxima, which is acted on by $\Delta_+$ via ordinal sum) one can see that the object of $t$-algebras can be expressed as a weighted limit.

As a consequence, it is not surprising that a 2-category admits the construction of algebras under some completeness assumptions.

### Doctrines

In the last part of the article, the authors review the notion of a doctrine, which is a 2-monad in 2-$\mathbf{CAT}$, i.e., a 2-functor $D: \mathcal{C} \longrightarrow \mathcal{C}$, where $\mathcal{C}$ is a 2-category, together with 2-natural transformations $m$ and $j$, which are respectively the multiplication and the unit, verifying the usual identities. The fact that it is both a monad on a 2-category and in another one can be a bit disturbing at first.

If $(D,m,j)$ is a doctrine over a 2-category $\mathcal{C}$, then its algebras will be objects $X$ of $\mathcal{C}$ together with an action $DX \longrightarrow X$, exactly as in the case of algebras over a usual monad.

Already with morphisms, we can take advantage of the fact that a 2-category $\mathcal{C}$ has 2-cells, and define $D$-morphisms to be lax in the sense that the diagram $$\begin{matrix} DX & \longrightarrow & DY \\ \downarrow & & \downarrow \\ X & \longrightarrow & Y \end{matrix}$$ is not supposed to be commutative, but is rather filled by a 2-cell with some coherence properties.

As one might expect, we can actually form a 2-category of such $D$-algebras by adding 2-cells, using again the 2-cells existing in $\mathcal{C}$.

If we keep only the $D$-morphisms that are strict, we obtain the object of algebras (which should be a 2-category) that we discussed before.

One example of a doctrine is $\Delta_+ \times - : \mathbf{CAT} \longrightarrow \mathbf{CAT}$ together with the multiplication induced by the ordinal sum, and unit given on $\mathcal{D}$ by the functor $\mathcal{D} \longrightarrow \Delta_+ \times \mathcal{D}$ that sends $d$ to $(\emptyset, d)$.

The algebras for this doctrine will be categories equipped with a monad acting on them, while the $D$-morphisms are transformations of monads, and the $D$-2-cells are exactly the monad functor transformations of Street’s article.

Here, since we have two different 2-categories of algebras (with strict $D$-morphisms or with all of them), one can wonder if monad morphisms $D \longrightarrow D'$ will induce 2-functors $D'$-$\mathbf{Alg} \longrightarrow D$-$\mathbf{Alg}$ on the level of these 2-categories.

This is indeed the case, and one can actually go even one step further and define monad modifications, using the fact that 2-$\mathbf{CAT}$ is in fact a 3-category! These modifications between two given monad morphisms are in fact in bijective correspondence with the 2-natural transformations between the 2-functors induced by these monad morphisms on the level of algebras (with lax $D$-morphisms). Note that they are not the same as the monad functor transformations of Street’s article.

This bijection is nice because it implies that you can compare 2-categories of algebras by only looking at the doctrines: if they are equivalent, so are the 2-categories of algebras.

The fact that this bijection does not hold when we restrict only to strict morphisms was really surprising to me, but I guess this is the price to pay to use the 3-category structure.

During the last days of April, the Kan extension seminar will be reading the article “Two-dimensional monad theory”, by Blackwell, Kelly and Power. We will then have more to say about these 2-monads!

### Jester - Resonaances

One More Try
My blogging juices have been drying up for some time now, and at this point Résonaances is close to withering. This could be expected. The glorious year 2012 with all the excitement of the Higgs boson discovery was inevitably followed by post-coital depression, only amplified  by the shutdown of the LHC for repairs.

One problem with blogging these days is that, in the short run, things are expected to get worse rather than better. The year 2013 was depressing, but at least we could not complain of a lack of action. The LHC was flooding us with new results based on the data collected in the first run. On the Higgs front, the 125 GeV particle discovered the year before was established, beyond reasonable doubt, as a Higgs boson related to electroweak symmetry breaking. The CMB results from the Planck experiment were a sweeping victory for the Lambda-CDM description of the universe at large scales. The LUX experiment provided the best limits so far on the WIMP-nucleon cross section and slashed the hope that we may be on the verge of detecting dark matter. Plus a cherry on top: ACME limits on the electron's electric dipole moment increased the strain on any extension of the Standard Model with new particles at the TeV scale. Yes, a lot to remember, not much to cherish...

And what about 2014? Are there any results to be released this year that could be at least marginally exciting for particle physicists? I don't see much, and the opinion polls that I have conducted are not optimistic either. Basically, we just expect more of the same: the LHC, Planck, ICECUBE, AMS-02, Fermi... Of course, there is always a non-zero probability that some new results from these experiments will turn out to be a smoking gun for new physics, but the later in the game the dimmer the chances are. The only qualitatively new piece of data among those will be the Planck polarization data, but even that is unlikely to be a game-changer. One may also keep an eye on lightweight contenders: small precision experiments that pursue indirect limits on new physics. Recently there have been new such limits on non-standard interactions between electrons and quarks from JLab's PVDIS Collaboration, who study low-energy scattering of electrons on nuclei. A similar experiment at JLab called Q-weak promises new results and improved limits this year. If there's anything else like that in the queue I'll be glad if you let me know in the comments section.

So how to live? How to blog? How to make it till 2015 when the sky is supposed to get brighter? I have no idea but I'll try to go on for a little longer. Back soon.

## March 08, 2014

### Christian P. Robert - xi'an's og

les sciences face aux créationnismes [book review]

I spotted this small book during my last visit to CBGP in Montpellier, and borrowed it from the local librarian. It is written (in French) by Guillaume Lecointre, who is a professor of Biology at the Muséum National d’Histoire Naturelle in Paris, specialised in population evolution and phylogenies. The book is published by Editions Quae, a scientific editor supported by four founding French institutes (CIRAD, IFREMER, INRA and IRSTEA), hence no wonder I would spot it in an INRA lab. The theme of the book is not to argue against creationism and intelligent design theories, but rather to analyse how the debates between scientists—interestingly this term scientist sounds much more like a cult in English than the French noun scientifique— and creationists are conducted, and to suggest how they should be conducted. While there are redundancies in the text, I found the overall argumentation quite convincing, with the driving line that creationists bypass the rules of scientific investigation and exchange to bring the debate to a philosophical or ideological level foreign to the definition of science. Lecointre deconstructs the elements put forward in such debates, from replacing the incompleteness of scientific knowledge and the temporary nature of scientific theories with a total relativism, to enlisting scientific supporters from fields not directly related to the theory of evolution, to confusing methodological materialism with philosophical materialism, and more fundamentally to implying that science and scientific theories must have a moral or ideological content, and to posturing as anti-establishment and anti-dogmatic free minds… I also liked the points that (a) what really drives the proponents of intelligent design is a refusal of randomness in evolution, without any global or cosmic purpose; (b) scientists are very ill-prepared to debate with creationists, because the latter do not follow a scientific reasoning; (c) journalists most often contribute to the confusion by picking out-of-their-field “experts” and encouraging the relativity argument. Hence a reasonable recommendation to abstain from oral debates and to stick to pointing out the complete absence of scientific methodology in creationists’ arguments. (Obviously, readers of Alan Sokal’s Beyond the Hoax will be familiar with most of the arguments produced in les sciences face aux créationnismes.)

Filed under: Books Tagged: "intelligent" design, Alan Sokal, creationism, evolution, materialism, Philosophy of Science, Science

### Quantum Diaries

Nobody understands quantum mechanics? Nonsense!

Despite the old canard about nobody understanding quantum mechanics, physicists do understand it.  With all of the interpretations ever conceived for quantum mechanics[1], this claim may seem a bit of a stretch, but like the proverbial ostrich with its head in the sand, many physicists prefer to claim they do not understand quantum mechanics, rather than just admit that it is what it is and move on.

What is it about quantum mechanics that generates so much controversy and even had Albert Einstein (1879 – 1955) refusing to accept it? There are three points about quantum mechanics that generate controversy. It is probabilistic, eschews realism, and is local. Let us look at these three points in more detail.

1. Quantum mechanics is probabilistic, not deterministic. Consider a radioactive atom. It is impossible, within the confines of quantum mechanics, to predict when an individual atom will decay. There is no measurement or series of measurements that can be made on a given atom to allow me to predict when it will decay. I can calculate the probability that it will have decayed by a given time, or the time it takes half of a sample to decay, but not the exact time a given atom will decay (a textbook formula illustrating this appears just after this list). This inability to predict exact outcomes, only probabilities, permeates all of quantum mechanics. No possible set of measurements on the initial state of a system allows one to predict precisely the result of all possible experiments on that state.
2. Quantum mechanics eschews realism[2]. This is a corollary of the first point. A quantum mechanical system does not have well defined values for properties that have not been directly measured. This has been compared to the moon only existing when someone is looking at it. For deterministic systems one can always safely infer back from a measurement what the system was like before the measurement. Hence if I measure a particle’s position and motion I can infer not only where it will go but where it has come from. The probabilistic nature of quantum mechanics prevents this backward-looking inference. If I measure the spin of an atom, there is no certainty that it had only that value before the measurement. It is this aspect of quantum mechanics that most disturbs people, but quantum mechanics is what it is.
3. Quantum mechanics is local. To be precise, no action at point A will have an observable effect at point B that is instantaneous, or non-causal.  Note the word observable. Locality is often denied in an attempt to circumvent Point 2, but when restricted to what is observable, locality holds. Despite the Pentagon’s best efforts, no messages have been sent using quantum non-locality.
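To make the first point concrete, here is the textbook exponential-decay law (a standard formula, quoted here only for illustration): quantum mechanics fixes the decay constant exactly, yet says nothing about the fate of any individual atom.

```latex
% Probability that a given atom has decayed by time t, for decay constant \lambda:
P(\text{decayed by } t) \;=\; 1 - e^{-\lambda t},
\qquad
t_{1/2} \;=\; \frac{\ln 2}{\lambda}.
% The half-life t_{1/2} of a large sample is sharply predicted,
% while the decay time of a single atom is irreducibly random.
```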

Realism, at least, is a common aspect of the macroscopic world. Even a baby quickly learns that the ball is behind the box even when he cannot see it. But much about the macroscopic world is not obviously deterministic, the weather in Vancouver for example (it is snowing as I write this). Nevertheless, we cling to determinism and realism like a child to his security blanket. It seems to me that determinism or realism, if they exist, would be at least as hard to understand as their lack. There is no theorem that states the universe should be deterministic and not probabilistic or vice versa. Perhaps god, contrary to Einstein’s assertion, does indeed like a good game of craps[3].

So quantum mechanics, at least at the surface level, has features many do not like. What has the response been? They have followed the example set by Philip Gosse (1810 – 1888) with the Omphalos hypothesis[4]. Gosse, being a literal Christian, had trouble with the geological evidence that the world was older than 6,000 years, so he came up with an interpretation of history in which the world was created only 6,000 years ago but in such a manner that it appeared much older. This can be called an interpretation of history because it leaves all predictions for observations intact but changes the internal aspects of the model so that they match his preconceived ideas. To some extent, Tycho Brahe (1546 – 1601) used the same technique to keep the earth at the center of the universe. He had the earth fixed and the sun circle the earth and the other planets the sun. With the information available at the time, this was consistent with all observations.

The general technique is to adjust those aspects of the model that are not constrained by observation to make it conform to one’s ideas of how the universe should behave. In quantum mechanics these efforts are called interpretations. Hugh Everett (1930 – 1982) proposed many worlds in an attempt to make quantum mechanics deterministic and realistic. But it was only in the unobservable parts of the interpretation that this was achieved, and the results of experiments in this world are still unpredictable. Louis de Broglie (1892 – 1987) and later David Bohm (1917 – 1992) introduced pilot waves in an effort to restore realism and determinism. In doing so they gave up locality. Like Gosse’s work, theirs was a nice proof in principle that, with sufficient ingenuity, the universe could be made to conform to almost any preconceived ideas, or at least appear to do so. Reassuring I guess, but like Gosse it was done by introducing non-observable aspects to the model: not just unobserved but in principle unobservable. The observable aspects of the universe, at least as far as quantum mechanics is correct, are as stated in the three points above: probabilistic, nonrealistic and local.

Me, I am not convinced that there is anything to understand about quantum mechanics beyond the rules for its use given in standard quantum mechanics text books. However, interpretations of quantum mechanics might, possibly might, suggest different ways to tackle unsolved problems like quantum gravity and they do give one something to discuss after one has had a few beers (or is that a few too many beers).

[1] See my February 2014 post “Reality and the Interpretations of Quantum Mechanics.”

[2] Realism as defined in the paper by Einstein, Podolsky and Rosen, Physical Review 47 (10): 777–780 (1935).

[3] Or dice.

### Tommaso Dorigo - Scientificblogging

Top Asymmetry: The Latest From DZERO
It is nice to see that the Tevatron experiments are continuing to produce excellent scientific measurements well after the demise of the detectors. Of course the CDF and DZERO collaborations have shrunk in size and in available man-years for data analysis since the end of data taking, as most researchers have increased and gradually maxed out their participation in other experiments - typically the ones at the Large Hadron Collider; but a hard core of dedicated physicists remains actively involved in the analysis of the 10 inverse femtobarns of proton-antiproton collisions acquired in Run 2, in the conviction that the Tevatron data still provide a basis for scientific results that cannot be obtained elsewhere.

### The n-Category Cafe

Network Theory Talks at Oxford

One of my dreams these days is to get people to apply modern math to ecology and biology, to help us design technologies that work with nature instead of against it. I call this dream ‘green mathematics’. But this will take some time to reach, since living systems are subtle, and most mathematicians are more familiar with physics.

So, I’ve been warming up by studying the mathematics of chemistry, evolutionary game theory, electrical engineering, control theory and information theory. There are a lot of ideas in common to all these fields, but making them clear requires some category theory. I call this project ‘network theory’. I’m giving some talks about it at Oxford.

(This diagram is written in Systems Biology Graphical Notation.)

Here’s the plan:

#### Network Theory

Nature and the world of human technology are full of networks. People like to draw diagrams of networks: flow charts, electrical circuit diagrams, signal-flow graphs, Bayesian networks, Feynman diagrams and the like. Mathematically minded people know that in principle these diagrams fit into a common framework: category theory. But we are still far from a unified theory of networks. After an overview, we will look at three portions of the jigsaw puzzle in three separate talks:

I. Electrical circuits and signal-flow graphs.

II. Stochastic Petri nets, chemical reaction networks and Feynman diagrams.

III. Bayesian networks, information and entropy.

All these talks will be in Lecture Theatre B of the Computer Science Department—you can see a map here, but the entrance is on Keble Road. Here are the times:

• Friday 21 February 2014, 2 pm: Network Theory: overview. Also available on YouTube.

• Tuesday 25 February, 3:30 pm: Network Theory I: electrical circuits and signal-flow graphs. Also available on YouTube.

• Tuesday 4 March, 3:30 pm: Network Theory II: stochastic Petri nets, chemical reaction networks and Feynman diagrams. Also available on YouTube.

• Tuesday 11 March, 3:30 pm: Network Theory III: Bayesian networks, information and entropy.

I thank Samson Abramsky, Bob Coecke and Jamie Vicary of the Computer Science Department for inviting me, and Ulrike Tillmann and Minhyong Kim of the Mathematical Institute for helping me get set up. I also thank all the people who helped do the work I’ll be talking about, most notably Jacob Biamonte, Jason Erbele, Brendan Fong, Tobias Fritz, Tom Leinster, Tu Pham, and Franciscus Rebro.

Ulrike Tillmann has also kindly invited me to give a topology seminar:

#### Operads and the Tree of Life

Trees are not just combinatorial structures: they are also biological structures, both in the obvious way but also in the study of evolution. Starting from DNA samples from living species, biologists use increasingly sophisticated mathematical techniques to reconstruct the most likely “phylogenetic tree” describing how these species evolved from earlier ones. In their work on this subject, they have encountered an interesting example of an operad, which is obtained by applying a variant of the Boardman–Vogt “W construction” to the operad for commutative monoids. The operations in this operad are labelled trees of a certain sort, and it plays a universal role in the study of stochastic processes that involve branching. It also shows up in tropical algebra. This talk is based on work in progress with Nina Otter.

I’m not sure exactly where this will take place, but probably somewhere in the Mathematical Institute, shown on this map. Here’s the time:

• Monday 24 February, 3:30 pm, Operads and the Tree of Life.

If you’re nearby, I hope you can come to some of these talks — and say hi!

(This diagram was drawn by Darwin.)

### Peter Coles - In the Dark

A Bit of Green Trivia..

Following on from yesterday’s post about George Green, I thought I’d add this little bit of Green trivia.

George Green’s sponsor and patron  was the mathematician Edward Bromhead, a Baronet and member of the landed gentry of the county of Lincolnshire. Two generations later in the Bromhead family you will find a certain Gonville Bromhead (presumably named after Gonville & Caius College, the Cambridge college that both Edward Bromhead and George Green attended). As a young man, in January 1879, Lt. Gonville Bromhead fought in the Battle of Rorke’s Drift. Almost a century later he was played by Michael Caine in the film Zulu.

Not a lot of people know that.

### astrobites - astro-ph reader's digest

UR #13: RR Lyrae and OSCAAR the Exoplanet-Analyzer

The undergrad research series is where we feature the research that you’re doing. If you’ve missed the previous installments, you can find them under the “Undergraduate Research” category here.

If you, too, have been working on a project that you want to share, we want to hear from you! Think you’re up to the challenge of describing your research carefully and clearly to a broad audience, in only one paragraph? Then send us a summary of it!

You can share what you’re doing by clicking on the “Your Research” tab above (or by clicking here) and using the form provided to submit a brief (fewer than 200 words) write-up of your work. The target audience is one familiar with astrophysics but not necessarily your specific subfield, so write clearly and try to avoid jargon. Feel free to also include either a visual regarding your research or else a photo of yourself.

We look forward to hearing from you!

************

Meredith Durbin
Pomona College
http://meredith-durbin.com

Meredith is a senior at Pomona College doing a thesis with Carnegie Observatories under the supervision of Dr. Victoria Scowcroft. This work is part of the Carnegie Hubble Program, an effort to recalibrate the Hubble constant to 2% error or better using Spitzer Space Telescope and eventually JWST data.

The Mid-Infrared RR Lyrae Period-Luminosity Relation

Logarithm of pulsation period vs. apparent magnitude for 36 RR Lyrae in 3.6 microns.

A century ago, Henrietta Swan Leavitt discovered the relationship between pulsation period and intrinsic luminosity for Cepheid variable stars. Today, that relationship has been widely used to map the local universe, and has been found to apply to other types of variable stars as well, including RR Lyrae. RR Lyrae occur at the intersection of the horizontal branch and the instability strip on the HR diagram, and are dimmer, older, and even more accurate distance indicators than Cepheids. With the advent of space telescopes, we can now observe them in the mid-infrared, which is excellent for distance scale calibration primarily due to the reduced effects of interstellar extinction. Here we present a new calibration of the mid-IR RR Lyrae period-luminosity relation in 3.6 and 4.5 microns using a total of 48 RR Lyrae from the globular cluster Omega Centauri. We have also done preliminary investigations into metallicity effects; in optical and near-IR wavelengths the period-luminosity relation requires a metallicity term due to metal absorption lines in the spectrum, but we find no evidence for such a term in the mid-IR.
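As a rough illustration of the kind of calibration described above (not the actual pipeline used in this work), a period-luminosity relation of the form m = a log10(P) + b can be fitted by ordinary least squares; the periods and magnitudes below are made-up placeholder numbers.

```python
import numpy as np

# hypothetical toy data: pulsation periods (days) and apparent [3.6] magnitudes
periods = np.array([0.45, 0.51, 0.58, 0.63, 0.71])
mags    = np.array([14.82, 14.65, 14.51, 14.40, 14.27])

# fit m = a * log10(P) + b with ordinary least squares
log_p = np.log10(periods)
A = np.vstack([log_p, np.ones_like(log_p)]).T
(a, b), *_ = np.linalg.lstsq(A, mags, rcond=None)

print(f"slope a = {a:.2f} mag/dex, zero point b = {b:.2f} mag")
# with an independently calibrated absolute-magnitude relation M(P),
# the distance modulus follows from mu = m - M at each period
```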

************

Taylor Andrew Morris
Sewanee: The University of the South

Taylor is currently a sophomore. This work was primarily done over the summer of 2013 as an internship through his institution at the Cordell-Lorenz Observatory under the supervision of Dr. Douglas Durig. Learn more about OSCAAR at https://github.com/OSCAAR/OSCAAR.

Light curve produced by OSCAAR of WASP 52-b’s September 5th transit.

Exoplanet Science with OSCAAR
Exoplanets with large transit depths often transit bright host stars, allowing Earth-based, photometric measurements of flux over time to be acquired with appropriate techniques on even modest astronomical equipment. OSCAAR (Open Source Code for Accelerating Astronomical Research) is an open-source, Python-based, differential photometry software package designed for gathering and analyzing data on Jupiter to Neptune sized exoplanets. While beta-testing the OSCAAR code, an efficient data-collection system and effective research procedure for transit analysis was developed at the Cordell-Lorenz observatory in Sewanee, TN. Promising transit data were obtained for exoplanets such as WASP-52 b and WASP-59 b. We produced the first user-generated exoplanet light curves and Markov Chain Monte Carlo (MCMC) fitting results utilizing OSCAAR and compare them to the currently available orbital parameters. The orbital parameters we fitted for largely agree with results obtained previously on more sophisticated equipment, showcasing the effectiveness of OSCAAR and the potential for more complex exoplanet based projects to be undertaken.
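For readers unfamiliar with the technique, here is a minimal sketch of differential photometry in Python; it is illustrative only and does not use OSCAAR's actual API.

```python
import numpy as np

def differential_light_curve(target_flux, comparison_flux):
    """Basic differential photometry: divide the target's flux by a comparison
    star's flux to remove shared atmospheric/instrumental variations, then
    normalize to the typical (here, median) out-of-transit level."""
    ratio = target_flux / comparison_flux
    return ratio / np.median(ratio)

# toy example: a comparison star and a target sharing the same systematics,
# with a 1% transit dip added to the target
n = 200
time = np.linspace(-0.1, 0.1, n)                              # days from mid-transit
comparison = np.full(n, 5.0e4) * (1 + 0.02 * np.sin(10 * time))  # shared trends
target = comparison.copy()
in_transit = np.abs(time) < 0.03
target[in_transit] *= 0.99                                    # 1% deep transit

flux = differential_light_curve(target, comparison)
print(f"recovered transit depth ≈ {1 - flux[in_transit].mean():.3%}")
```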

## March 07, 2014

### Emily Lakdawalla - The Planetary Society Blog

Why Cosmos should matter, especially to Hollywood
For a town dependent on Stars, there are far too few people here who look up at the sky. But come this Sunday, March 9, the epic series of science, space and humanity will return: Cosmos: A Spacetime Odyssey. Why does it matter for Hollywood, specifically? I'll tell you why it will. And then why it should.

### Jester - Resonaances

Signal of WIMP dark matter
You may have heard about the excess of gamma-ray emission from the center of the Milky Way measured by the Fermi telescope. This excess can be interpreted as a signal of a 30-40 GeV dark matter particle - the so-called hooperon - annihilating into a pair of b-quarks. The inferred annihilation cross section is of order 1 picobarn, perfectly fitting the thermal dark matter paradigm. The story is not exactly new; the anomaly and its dark matter interpretation were first claimed 4 years ago. Since then there has been a steady trickle of papers by different groups arguing that the signal is robust and proposing dark matter or astrophysical explanations. Last week the story hit several news outlets, see for example here for a nice write-up. What has changed to upgrade the anomaly from a tantalizing hint to compelling evidence of WIMP dark matter?

First, here is a bit more detailed description of the signal. The Fermi satellite measures gamma rays from the whole sky with good angular and energy resolution. Many boring astrophysical processes produce gamma rays, for example cosmic rays scattering on the interstellar medium, or violent events happening around black holes and pulsars. However, known point sources, galactic and extragalactic diffuse emission, and the emission from the Fermi Bubbles do not seem to be enough to explain what's going on in the center of our galaxy. A better fit is obtained if one adds a new component with a spatial distribution sharply peaked around the galactic center and an energy spectrum with a broad peak near 2 GeV, see the plot. How much better a fit? This paper quotes a 40 sigma preference for this new component in the inner galaxy region. That's a hell of a significance, even after translating the astrophysical sigmas to the ones used in conventional statistics ;)

Now, WIMP dark matter can easily reproduce the new component. Cold dark matter is expected to be sharply peaked near the galactic center, with a 1/r or similar profile. Furthermore, when dark matter annihilates into charged particles, the latter can radiate part of their energy, producing photons via final-state radiation, Compton scattering, and bremsstrahlung. This leads to emission of gamma rays with an energy spectrum that depends on the dark matter mass and the identity of the particles it annihilates into. Annihilation into leptons (electrons, muons, taus) would produce a sharper peak than what is observed. As the plot shows, annihilation into quarks, whether the bottom or a lighter one, fits the signal much better. All in all, the excess can be explained by a 15-40 GeV dark matter particle annihilating into quarks with a cross section in the 0.1-1 pb range.
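For reference, the prompt gamma-ray flux from self-annihilating (self-conjugate) dark matter along a line of sight is conventionally written as follows; this is the standard textbook expression, quoted here only to show where the mass, cross section, and density profile enter.

```latex
% Differential flux of photons from WIMPs of mass m_\chi annihilating
% with thermally averaged cross section <\sigma v>:
\frac{d\Phi}{dE_\gamma}
= \frac{\langle \sigma v \rangle}{8\pi\, m_\chi^{2}}\,
  \frac{dN_\gamma}{dE_\gamma}
  \int_{\Delta\Omega} d\Omega \int_{\text{l.o.s.}} \rho^{2}(r)\, ds .
% The \rho^2 weighting is why a density profile peaked at the galactic centre
% translates into a sharply peaked gamma-ray signal.
```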

This was known before, more or less. As far as I understand, the recent paper by Daylan et al. adds the following. They repeat the analysis using a subset of the Fermi data where the photon direction is more reliably reconstructed. This allows them to better study the morphology of the signal. They show that the excess is steeply falling (approximately as 1/r^1.4) all the way to about 2 kiloparsecs from the galactic center. Moreover, they demonstrate that the excess is to a good degree spherically symmetric. This can be regarded as an argument against conventional astrophysical explanations. For example, a population of several thousand millisecond pulsars could produce a similar energy spectrum to the excess, but would not be expected to be distributed this way.
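A hedged aside on how that fall-off maps onto a density profile (my gloss, assuming the quoted 1/r^1.4 refers to the projected intensity): for a generalized NFW profile the inner behavior is a power law, and the annihilation signal, which goes as the density squared integrated along the line of sight, then falls as a related power of the projected angle,

$$\rho(r)\;\propto\;\frac{1}{(r/r_s)^{\gamma}\,(1+r/r_s)^{3-\gamma}}\;\;\xrightarrow{\;r\ll r_s\;}\;\;\rho\propto r^{-\gamma},\qquad I(\theta)\;\propto\;\int_{\rm l.o.s.}\rho^2\,d\ell\;\propto\;\theta^{\,1-2\gamma}\,,$$

so an intensity falling roughly as $$\theta^{-1.4}$$ corresponds to an inner slope $$\gamma\approx 1.2$$, somewhat steeper than the canonical NFW value $$\gamma=1$$, which is the ballpark such fits have tended to prefer.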

Ah, and what does the Fermi collaboration have to say about it? As far as I know, there is no official statement about the excess. In this talk one finds the quote "[In the inner galaxy], diffuse emission and point sources account for most of the emission observed in the region". So we seem to have two slightly discrepant stories here: 40 sigma vs. nothing to see. If the truth were in the middle, that would be great ;)

In any case, continuous emission from the galactic center will never be regarded as convincing evidence of dark matter. To really get excited we would need to find a matching signal in a less messy environment. One possibility is the dwarf galaxies - small galaxies, consisting mostly of dark matter, that orbit the Milky Way. The Fermi collaboration recently reported limits on the dark matter annihilation cross section based on observations of 25 dwarf galaxies, see the plot. Intriguingly, there is a small excess (global p-value 0.08) that may be consistent with the dark matter interpretation of the signal from the galactic centre... More data should clarify the situation, but for that we probably need to wait a few more years.

### Sean Carroll - Preposterous Universe

Guest Post: Katherine Freese on Dark Matter Developments

The hunt for dark matter has been heating up once again, driven (as usual) by tantalizing experimental hints. This time the hints are coming mainly from outer space rather than underground laboratories, which makes them harder to check independently, but there’s a chance something real is going on. We need more data to be sure, as scientists have been saying since the time Eratosthenes measured the circumference of the Earth.

As I mentioned briefly last week, Katherine Freese of the University of Michigan has a new book coming out, The Cosmic Cocktail, that deals precisely with the mysteries of dark matter. Katie was also recently at the UCLA Dark Matter Meeting, and has agreed to share some of her impressions with us. (She also insisted on using the photo on the right, as a way of reminding us that this is supposed to be fun.)

Dark Matter Everywhere (at the biannual UCLA Dark Matter Meeting)

The UCLA Dark Matter Meeting is my favorite meeting, period. It takes place every other year, usually at the Marriott Marina del Rey right near Venice Beach, but this year on the UCLA campus. Last week almost two hundred people congregated, both theorists and experimentalists, to discuss our latest attempts to solve the dark matter problem. Most of the mass in galaxies, including our Milky Way, is made not of ordinary atomic material but of as-yet-unidentified dark matter. The goal of dark matter hunters is to resolve this puzzle. Experimentalist Dave Cline of the UCLA Physics Department runs the dark matter meeting, with talks often running from dawn till midnight. Every session goes way over, but somehow the disorganization leads everybody to have lots of discussion, interaction between theorists and experimentalists, and even more cocktails. It is, quite simply, the best meeting. I am usually on the organizing committee, and cannot resist sending in lots of names of people who will give great talks and add to the fun.

Last week at the meeting we were treated to multiple hints of potential dark matter signals. To me the most interesting were the talks by Dan Hooper and Tim Linden on the observations of excess high-energy photons — gamma-rays — coming from the Central Milky Way, possibly produced by annihilating WIMP dark matter particles. (See this arxiv paper.) Weakly Interacting Massive Particles (WIMPs) are to my mind the best dark matter candidates. Since they are their own antiparticles, they annihilate among themselves whenever they encounter one another. The Center of the Milky Way has a large concentration of dark matter, so that a lot of this annihilation could be going on. The end products of the annihilation would include exactly the gamma-rays found by Hooper and his collaborators. They searched the data from the FERMI satellite, the premier gamma-ray mission (funded by NASA and DoE as well as various European agencies), for hints of excess gamma-rays. They found a clear excess extending to about 10 angular degrees from the Galactic Center. This excess could be caused by WIMPs weighing about 30 GeV, or 30 proton masses. Their paper called these results “a compelling case for annihilating dark matter.” After the talk, Dave Cline decided to put out a press release from the meeting, and asked the opinion of us organizers. Most significantly, Elliott Bloom, a leader of the FERMI satellite that obtained the data, had no objection, though the FERMI team itself has as yet issued no statement.

Many putative dark matter signals have come and gone, and we will have to see if this one holds up. Two years ago the 130 GeV line was all the rage — gamma-rays of 130 GeV energy that were tentatively observed in the FERMI data towards the Galactic Center. (Slides from Andrea Albert’s talk.) This line, originally proposed by Stockholm’s Lars Bergstrom, would have been the expectation if two WIMPs annihilated directly to photons. People puzzled over some anomalies of the data, but with improved statistics there isn’t much evidence left for the line. The question is, will the 30 GeV WIMP suffer the same fate? As further data come in from the FERMI satellite we will find out.

What about direct detection of WIMPs? Laboratory experiments deep underground, in abandoned mines or underneath mountains, have been searching for direct signals of astrophysical WIMPs striking nuclei in the detectors. At the meeting the SuperCDMS experiment hammered on light WIMP dark matter with negative results. The possibility of light dark matter, which was so popular recently, remains puzzling. 10 GeV dark matter seemed to be detected in many underground laboratory experiments: DAMA, CoGeNT, CRESST, and in April 2013 even CDMS in their silicon detectors. Yet other experiments, XENON and LUX, saw no events, in drastic tension with the positive signals. (I told Rick Gaitskell, a leader of the LUX experiment, that I was very unhappy with him for these results, but as he pointed out, we can’t argue with nature.) Last week at the conference, SuperCDMS, the most recent incarnation of the CDMS experiment, looked to much lower energies and again saw nothing. (Slides from Lauren Hsu’s talk.) The question remains: are we comparing apples and oranges? These detectors are made of a wide variety of types of nuclei and we don’t know how to relate the results. Wick Haxton’s talk surprised me with its discussion of nuclear-physics uncertainties I hadn’t been aware of, which in principle could reconcile all the disagreements between experiments, even DAMA and LUX. Most people think that the experimental claims of 10 GeV dark matter are wrong, but I am taking a wait-and-see attitude.

We also heard about the hints of detection of a completely different dark matter candidate: sterile neutrinos. (Slides from George Fuller’s talk.) In addition to the three known neutrinos of the Standard Model of Particle Physics, there could be another one that doesn’t interact with the standard model. Yet its decay could lead to x-ray lines. Two separate groups found indications of lines in data from the Chandra and XMM-Newton space satellites that would be consistent with a 7 keV neutrino (7 millionths of a proton mass). Could it be that there is more than one type of dark matter particle? Sure, why not?

On the last evening of the meeting, a number of us went to the Baja Cantina, our favorite spot for margaritas. Rick Gaitskell was smart: he talked us into the $60.00 pitchers, high enough quality that the 6AM alarm clocks the next day (that got many of us out of bed and headed to flights leaving from LAX) didn’t kill us completely. We have such a fun community of dark matter enthusiasts. May we find the stuff soon!

### Emily Lakdawalla - The Planetary Society Blog

[Updated] To Europa!...Slowly. First Impressions of NASA's New Budget Request
Europa may get a mission...eventually. We give our first take on the 2015 NASA Budget request. How does Planetary Exploration fare? Which projects were cancelled? Will NASA capture an asteroid? And most importantly, what can you do about it?

### ZapperZ - Physics and Physicists

Physics Talk With No Powerpoint Slides?
Oh, say it isn't so!

In an effort to get better interaction between speaker and audience, organizers at a biweekly forum on the LHC at Fermilab banned the use of any Powerpoint presentation by the speaker.

“Without slides, the participants go further off-script, with more interaction and curiosity,” says Andrew Askew, an assistant professor of physics at Florida State University and a co-organizer of the forum. “We wanted to draw out the importance of the audience.”

In one recent meeting, physics professor John Paul Chou of Rutgers University (pictured above) presented to a full room holding a single page of handwritten notes and a marker. The talk became more dialogue than monologue as members of the audience, freed from their usual need to follow a series of information-stuffed slides flying by at top speed, managed to interrupt with questions and comments.

It is definitely a development and a change that I find interesting and support... to some extent. You see, something like this will be amazingly fun and useful IF the speaker is engaging and actually pays attention to the audience. I'm sure you've been in seminars (or even a class) where the speaker simply rambled on and on looking at the screen, without even looking behind him/her to see if the audience was even there! So how well something like this goes depends very much on the speaker. Still, not having the Powerpoint slides will force these speakers to be more creative and, inevitably, will create a less formal atmosphere during such a presentation. And from the report, having more of a dialogue than a monologue is exactly what the organizers were trying to accomplish.

It is interesting to note that while these physicists are going back to the "primitive" form of communication, others in the education field are trying various technologies and techniques to get away from the primitive form of teaching. It is now almost common for college lecturers to use Powerpoint in their lectures, and other forms of teaching techniques and technologies are being used in the classrooms. Yet, at the top, we go back to the chalkboard/whiteboard to communicate.

Zz.

### Symmetrybreaking - Fermilab/SLAC

Start spreading the SNEWS
A worldwide network keeps astronomers and physicists ready for the next nearby supernova.

When it comes to studying supernovae, if you don’t SNEWS, you lose. SNEWS, the Supernova Early Warning System, is a worldwide network designed to do just what the name implies: let astronomers and physicists know when a nearby supernova appears.
This can be a tricky business, since supernovae appear in our galaxy roughly once every 30 years, and the window for studying them can vary—anywhere from a few weeks down to a few hours.

### Emily Lakdawalla - The Planetary Society Blog

That time I took a selfie with Neil Tyson and the President of the United States
Last week, my fellow Board Member Neil deGrasse Tyson and I were invited to be presenters at the first edition of the White House Film Festival. Neil asked the President if we could take a selfie with him. In those few moments, the President, Neil, and I spoke about science and space exploration.

### Matt Strassler - Of Particular Significance

What if the Large Hadron Collider Finds Nothing Else?

In my last post, I expressed the view that a particle accelerator with proton-proton collisions of (roughly) 100 TeV of energy, significantly more powerful than the currently operational Large Hadron Collider [LHC] that helped scientists discover the Higgs particle, is an obvious and important next step in our process of learning about the elementary workings of nature. And I described how we don’t yet know whether it will be an exploratory machine or a machine with a clear scientific target; it will depend on what the LHC does or does not discover over the coming few years.

What will it mean, for the 100 TeV collider project and more generally, if the LHC, having made possible the discovery of the Higgs particle, provides us with no more clues? Specifically, over the next few years, hundreds of tests of the Standard Model (the equations that govern the known particles and forces) will be carried out in measurements made by the ATLAS, CMS and LHCb experiments at the LHC. Suppose that, as it has so far, the Standard Model passes every test that the experiments carry out? In particular, suppose the Higgs particle discovered in 2012 appears, after a few more years of intensive study, to be, as far as the LHC can reveal, a Standard Model Higgs — the simplest possible type of Higgs particle?

Before we go any further, let’s keep in mind that we already know that the Standard Model isn’t all there is to nature. The Standard Model does not provide a consistent theory of gravity, nor does it explain neutrino masses, dark matter or “dark energy” (also known as the cosmological constant). Moreover, many of its features are just things we have to accept without explanation, such as the strengths of the forces, the existence of “three generations” (i.e., that there are two heavier cousins of the electron, two for the up quark and two for the down quark), the values of the masses of the various particles, etc. However, even though the Standard Model has its limitations, it is possible that everything that can actually be measured at the LHC — which cannot measure neutrino masses or directly observe dark matter or dark energy — will be well-described by the Standard Model. What if this is the case?

Michelson and Morley, and What They Discovered

In science, giving strong evidence that something isn’t there can be as important as discovering something that is there — and it’s often harder to do, because you have to thoroughly exclude all possibilities. [It's very hard to show that your lost keys are nowhere in the house — you have to convince yourself that you looked everywhere.] A famous example is the case of Albert Michelson, in his two experiments (one in 1881, a second with Edward Morley in 1887) trying to detect the “ether wind”.
Light had been shown to be a wave in the 1800s; and like all waves known at the time, it was assumed to be a wave in something material, just as sound waves are waves in air, and ocean waves are waves in water. This material was termed the “luminiferous ether”. As we can detect our motion through air or through water in various ways, it seemed that it should be possible to detect our motion through the ether, specifically by looking for the possibility that light traveling in different directions travels at slightly different speeds. This is what Michelson and Morley were trying to do: detect the movement of the Earth through the luminiferous ether.

Both of Michelson’s measurements failed to detect any ether wind, and did so expertly and convincingly. And for the convincing method that he invented — an experimental device called an interferometer, which had many other uses too — Michelson won the Nobel Prize in 1907. Meanwhile the failure to detect the ether drove both FitzGerald and Lorentz to consider radical new ideas about how matter might be deformed as it moves through the ether. Although these ideas weren’t right, they were important steps that Einstein was able to re-purpose, even more radically, in his 1905 equations of special relativity.

In Michelson’s case, the failure to discover the ether was itself a discovery, recognized only in retrospect: a discovery that the ether did not exist. (Or, if you’d like to say that it does exist, which some people do, then what was discovered is that the ether is utterly unlike any normal material substance in which waves are observed; no matter how fast or in what direction you are moving relative to me, both of us are at rest relative to the ether.) So one must not be too quick to assume that a lack of discovery is actually a step backwards; it may actually be a huge step forward.

Epicycles or a Revolution?

There were various attempts to make sense of Michelson and Morley’s experiment. Some interpretations involved tweaks of the notion of the ether. Tweaks of this type, in which some original idea (here, the ether) is retained, but adjusted somehow to explain the data, are often referred to as “epicycles” by scientists. (This is analogous to the way an epicycle was used by Ptolemy to explain the complex motions of the planets in the sky, in order to retain an earth-centered universe; the sun-centered solar system requires no such epicycles.)

A tweak of this sort could have been the right direction to explain Michelson and Morley’s data, but as it turned out, it was not. Instead, the non-detection of the ether wind required something more dramatic — for it turned out that waves of light, though at first glance very similar to other types of waves, were in fact extraordinarily different. There simply was no ether wind for Michelson and Morley to detect.

If the LHC discovers nothing beyond the Standard Model, we will face what I see as a similar mystery. As I explained here, the Standard Model, with no other particles added to it, is a consistent but extraordinarily “unnatural” (i.e. extremely non-generic) example of a quantum field theory. This is a big deal.
Just as nineteenth-century physicists deeply understood both the theory of waves and many specific examples of waves in nature and had excellent reasons to expect a detectable ether, twenty-first century physicists understand quantum field theory and naturalness both from the theoretical point of view and from many examples in nature, and have very good reasons to expect particle physics to be described by a natural theory. (Our examples come both from condensed matter physics [e.g. metals, magnets, fluids, etc.] and from particle physics [e.g. the physics of hadrons].) Extremely unnatural systems — that is, physical systems described by quantum field theories that are highly non-generic — simply have not previously turned up in nature… which is just as we would expect from our theoretical understanding. [Experts: As I emphasized in my Santa Barbara talk last week, appealing to anthropic arguments about the hierarchy between gravity and the other forces does not allow you to escape from the naturalness problem.]

So what might it mean if an unnatural quantum field theory describes all of the measurements at the LHC? It may mean that our understanding of particle physics requires an epicyclic change — a tweak. The implications of a tweak would potentially be minor. A tweak might only require us to keep doing what we’re doing, exploring in the same direction but a little further, working a little harder — i.e. to keep colliding protons together, but go up in collision energy a bit more, from the LHC to the 100 TeV collider. For instance, perhaps the Standard Model is supplemented by additional particles that, rather than having masses that put them within reach of the LHC, as would inevitably be the case in a natural extension of the Standard Model (here’s an example), are just a little bit heavier than expected. In this case the world would be somewhat unnatural, but not too much, perhaps through some relatively minor accident of nature; and a 100 TeV collider would have enough energy per collision to discover and reveal the nature of these particles.

Or perhaps a tweak is entirely the wrong idea, and instead our understanding is fundamentally amiss. Perhaps another Einstein will be needed to radically reshape the way we think about what we know. A dramatic rethink is both more exciting and more disturbing. It was an intellectual challenge for 19th century physicists to imagine, from the result of the Michelson-Morley experiment, that key clues to its explanation would be found in seeking violations of Newton’s equations for how energy and momentum depend on velocity. (The first experiments on this issue were carried out in 1901, but definitive experiments took another 15 years.) It was an even greater challenge to envision that the already-known unexplained shift in the orbit of Mercury would also be related to the Michelson-Morley (non)-discovery, as Einstein, in trying to adjust Newton’s gravity to make it consistent with the theory of special relativity, showed in 1913.

My point is that the experiments that were needed to properly interpret Michelson-Morley’s result

• did not involve trying to detect motion through the ether,
• did not involve building even more powerful and accurate interferometers,
• and were not immediately obvious to the practitioners in 1888.

This should give us pause. We might, if we continue as we are, be heading in the wrong direction.
Difficult as it is to do, we have to take seriously the possibility that if (and remember this is still a very big “if”) the LHC finds only what is predicted by the Standard Model, the reason may involve a significant reorganization of our knowledge, perhaps even as great as relativity’s re-making of our concepts of space and time. Were that the case, it is possible that higher-energy colliders would tell us nothing, and give us no clues at all. An exploratory 100 TeV collider is not guaranteed to reveal secrets of nature, any more than a better version of Michelson-Morley’s interferometer would have been guaranteed to do so. It may be that a completely different direction of exploration, including directions that currently would seem silly or pointless, will be necessary.

This is not to say that a 100 TeV collider isn’t needed! It might be that all we need is a tweak of our current understanding, and then such a machine is exactly what we need, and will be the only way to resolve the current mysteries. Or it might be that the 100 TeV machine is just what we need to learn something revolutionary. But we also need to be looking for other lines of investigation, perhaps ones that today would sound unrelated to particle physics, or even unrelated to any known fundamental question about nature.

Let me provide one example from recent history — one which did not lead to a discovery, but still illustrates that this is not all about 19th century history.

An Example

One of the great contributions to science of Nima Arkani-Hamed, Savas Dimopoulos and Gia Dvali was to observe (in a 1998 paper I’ll refer to as ADD, after the authors’ initials) that no one had ever excluded the possibility that we, and all the particles from which we’re made, can move around freely in three spatial dimensions, but are stuck (as it were) as though to the corner edge of a thin rod — a rod as much as one millimeter wide, into which only gravitational fields (but not, for example, electric fields or magnetic fields) may penetrate. Moreover, they emphasized that the presence of these extra dimensions might explain why gravity is so much weaker than the other known forces.

Fig. 1: ADD’s paper pointed out that no experiment as of 1998 could yet rule out the possibility that our familiar three-dimensional world is a corner of a five-dimensional world, where the two extra dimensions are finite but perhaps as large as a millimeter.

Given the incredible number of experiments over the past two centuries that have probed distances vastly smaller than a millimeter, the claim that there could exist millimeter-sized unknown dimensions was amazing, and came as a tremendous shock — certainly to me. At first, I simply didn’t believe that the ADD paper could be right. But it was.

One of the most important immediate effects of the ADD paper was to generate a strong motivation for a new class of experiments that could be done, rather inexpensively, on the top of a table. If the world were as they imagined it might be, then Newton’s (and Einstein’s) law for gravity, which states that the force between two stationary objects depends on the distance r between them as 1/r², would increase faster than this at distances shorter than the width of the rod in Figure 1. This is illustrated in Figure 2.

Fig. 2: If the world were as sketched in Figure 1, then Newton/Einstein’s law of gravity would be violated at distances shorter than the width of the rod in Figure 1. The blue line shows Newton/Einstein’s prediction; the red line shows what a universe like that in Figure 1 would predict instead. Experiments done in the last few years agree with the blue curve down to a small fraction of a millimeter.
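For readers who want the scaling behind Figures 1 and 2 made explicit, here is the usual heuristic for n compact extra dimensions of common size R (a standard back-of-the-envelope relation, not a quotation from the post):

$$V(r)\;\sim\;\frac{G_{4+n}\,m_1 m_2}{r^{\,1+n}}\quad (r\ll R),\qquad V(r)\;\sim\;\frac{G_{4+n}}{R^{\,n}}\,\frac{m_1 m_2}{r}\;=\;\frac{G_N\,m_1 m_2}{r}\quad (r\gg R),$$

so the force grows like $$1/r^{\,2+n}$$ once the separation drops below R and reverts to the familiar inverse square above it, all up to order-one factors. In the ADD picture of Figure 1, n = 2 and R is at most about a millimeter, which is exactly what the tabletop experiments described next set out to test.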
These experiments are not easy — gravity is very, very weak compared to electrical forces, and lots of electrical effects can show up at very short distances and have to be cleverly avoided. But some of the best experimentalists in the world figured out how to do it (see here and here). After the experiments were done, Newton/Einstein’s law was verified down to a few hundredths of a millimeter. If we live on the corner of a rod, as in Figure 1, it’s much, much smaller than a millimeter in width.

But it could have been true. And if it had, it might not have been discovered by a huge particle accelerator. It might have been discovered in these small, inexpensive experiments that could have been performed years earlier. The experiments weren’t carried out earlier mainly because no one had pointed out quite how important they could be.

Ok Fine; What Other Experiments Should We Do?

So what are the non-obvious experiments we should be doing now or in the near future? Well, if I had a really good suggestion for a new class of experiments, I would tell you — or rather, I would write about it in a scientific paper. (Actually, I do know of an important class of measurements, and I have written a scientific paper about them; but these are measurements to be done at the LHC, and don’t involve an entirely new experiment.) Although I’m thinking about these things, I do not yet have any good ideas. Until I do, or someone else does, this is all just talk — and talk does not impress physicists.

Indeed, you might object that my remarks in this post have been almost without content, and possibly without merit. I agree with that objection. Still, I have some reasons for making these points. In part, I want to highlight, for a wide audience, the possible historic importance of what might now be happening in particle physics. And I especially want to draw the attention of young people. There have been experts in my field who have written that non-discoveries at the LHC constitute a “nightmare scenario” for particle physics… that there might be nothing for particle physicists to do for a long time. But I want to point out that, on the contrary, not only may it not be a nightmare, it might actually represent an extraordinary opportunity. Not discovering the ether opened people’s minds, and eventually opened the door for Einstein to walk through. And if the LHC shows us that particle physics is not described by a natural quantum field theory, it may, similarly, open the door for a young person to show us that our understanding of quantum field theory and naturalness, while as intelligent and sensible and precise as the 19th century understanding of waves, does not apply unaltered to particle physics, and must be significantly revised.

Of course the LHC is still a young machine, and it may still permit additional major discoveries, rendering everything I’ve said here moot. But young people entering the field, or soon to enter it, should not assume that the experts necessarily understand where the field’s future lies.
Like FitzGerald and Lorentz, even the most brilliant and creative among us might be suffering from our own hard-won and well-established assumptions, and we might soon need the vision of a brilliant young genius — perhaps a theorist with a clever set of equations, or perhaps an experimentalist with a clever new question and a clever measurement to answer it — to set us straight, and put us onto the right path.

Filed under: Higgs, History of Science, LHC Background Info, Other Collider News, Particle Physics, Quantum Field Theory, The Scientific Process Tagged: Einstein, energy, ExtraDimensions, gravity, Higgs, LHC, particle physics, relativity

### Peter Coles - In the Dark

From Darkness to Green

On Wednesday this week I spent a very enjoyable few hours in London attending the Inaugural Lecture of Professor Alan Heavens at ~~South Kensington Technical College~~ Imperial College, London. It was a very good lecture indeed, not only for its scientific content but also for the plentiful touches of droll humour in which Alan specialises. It was also followed by a nice drinks reception and buffet. The talk was entitled Cosmology in the Dark, so naturally I had to mention it on this blog!

At the end of the lecture, the vote of thanks was delivered in typically effervescent style by the ebullient Prof. Malcolm Longair, who actually supervised Alan’s undergraduate project at the Cavendish Laboratory way back in 1980, if I recall the date correctly. In his speech, Malcolm referred to the following quote from History of the Theories of the Aether and Electricity (Whittaker, 1951), which he was kind enough to send me when I asked by email:

The century which elapsed between the death of Newton and the scientific activity of Green was the darkest in the history of (Cambridge) University. It is true that (Henry) Cavendish and (Thomas) Young were educated at Cambridge; but they, after taking their undergraduate courses, removed to London. In the entire period the only natural philosopher of distinction was (John) Michell; and for some reason which at this distance of time it is difficult to understand fully, Michell’s researches seem to have attracted little or no attention among his collegiate contemporaries and successors, who silently acquiesced when his discoveries were attributed to others, and allowed his name to perish entirely from the Cambridge tradition.

I wasn’t aware of this analysis previously, but it re-iterates something I have posted about before. It stresses the enormous historical importance of the British mathematician and physicist George Green, who lived from 1793 until 1841, and who left a substantial legacy for modern theoretical physicists, in Green’s theorems and Green’s functions; he is also credited as being the first person to use the word “potential” in electrostatics.

Green was the son of a Nottingham miller who, amazingly, taught himself mathematics and did most of his best work, especially his remarkable Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism (1828), before starting his studies as an undergraduate at the University of Cambridge, which he did at the age of 30. Lacking independent finance, Green could not go to university until his father died, whereupon he leased out the mill he inherited to pay for his studies. Extremely unusually for English mathematicians of his time, Green taught himself from books that were published in France.
This gave him a huge advantage over his national contemporaries in that he learned the form of differential calculus that originated with Leibniz, which was far more elegant than that devised by Isaac Newton (which was called the method of fluxions). Whittaker remarks upon this:

Green undoubtedly received his own early inspiration from . . . (the great French analysts), chiefly from Poisson; but in clearness of physical insight and conciseness of exposition he far excelled his masters; and the slight volume of his collected papers has to this day a charm which is wanting in their voluminous writings.

Great scientist though he was, Newton’s influence on the development of physics in Britain was not entirely positive, as the above quote makes clear. Newton was held in such awe, especially in Cambridge, that his inferior mathematical approach was deemed to be the “right” way to do calculus and generations of scholars were forced to use it. This held back British science until the use of fluxions was phased out. Green himself was forced to learn fluxions when he went as an undergraduate to Cambridge, despite having already learned the better method.

Unfortunately, Green’s great pre-Cambridge work on mathematical physics didn’t reach wide circulation in the United Kingdom until after his death. William Thomson, later Lord Kelvin, found a copy of Green’s Essay in 1845 and promoted it widely as a work of fundamental importance. This contributed to the eventual emergence of British theoretical physics from the shadow cast by Isaac Newton, which reached one of its heights just a few years later with the publication of a fully unified theory of electricity and magnetism by James Clerk Maxwell.

But as to the possible reason for the lack of recognition for John Michell, who was clearly an important figure in his own right (he was the person who first developed the concept of a black hole, for example), you’ll have to read Malcolm Longair’s forthcoming book on the History of the Cavendish Laboratory!

### Clifford V. Johnson - Asymptotia

Showcase and Awards Today!
Just a reminder: The USC Science Film Competition Showcase and Awards are tonight (March 7th) at 6:00pm. I've been tallying up all the judges' input and have the results in special envelopes to give out tonight. Very exciting. Come along (event information here), and enjoy celebrating all the students' hard work. There will be twelve films on display!

-cvj Click to continue reading this post

### astrobites - astro-ph reader's digest

First look at NASA’s FY2015 Budget Request

The Obama Administration just released its budget request for fiscal year (FY) 2015. While not much has changed from previous years regarding NASA, a few “minor” tweaks will elate some communities and devastate others. In particular, the White House wants to slash the budget for the Stratospheric Observatory for Infrared Astronomy (SOFIA, the 2.5 m telescope that flies around in a 747). The resulting savings would fund an extension of the Cassini mission to Saturn and its satellites, among other things. Before diving into this year’s wonky details, you might want to read a general overview of the federal budget process or two meditations about NASA’s strategic direction.

The Larger Context

When Republicans took over the majority in the House of Representatives in 2010, they started rancorous debates over the appropriations process. The country began lurching from fiscal crisis to crisis—or cliff or whatever metaphor you like.
Federal agencies couldn’t plan more than a few months ahead at a time. Our credit rating got downgraded. You know the story. Republicans and Democrats, represented by Representative Paul Ryan and Senator Patty Murray, respectively, announced a welcome compromise in 2013, which President Obama quickly signed into law. The Bipartisan Budget Act of 2013 eliminated some of the onerous, indiscriminate cuts mandated by sequestration and set overall spending levels of $1.012 trillion for FY 2014 and $1.014 trillion for FY 2015. Notably, this act only imposes an overall cap. The various appropriations committees in Congress can decide what programs to cut or fund to get to that number.

The Stratospheric Observatory for Infrared Astronomy (SOFIA) sits on the ramp in Palmdale, CA as mission staff celebrate its 100th flight. SOFIA may be the latest casualty of our age of austerity.

In times of flat budgets, doing anything new requires ending operating missions, some of which may still be producing good science. This new budget request adheres to the requirements of the Bipartisan Budget Act. Because the Obama Administration believes that more government spending could significantly improve the national welfare, there is a supplemental request totaling $56 billion, called the Opportunity, Growth, and Security (OGS) Initiative. (I think they considered calling it the Annoying Republicans by Supporting Progressive Priorities Initiative but then decided to try staying non-confrontational.) This initiative would send an additional $886 million to NASA beyond the $17.5 billion in the primary request, but House Republicans have essentially declared this addition dead on arrival (i.e., the GOP will not agree to any appropriations beyond the 2013 caps). I’d bet that Congress accedes to the core of the President’s request for NASA, because it’s basically a continuation of what Congress has recently passed. But Congress is forever fickle.

The End of SOFIA?

SOFIA is an infrared telescope that flies in the stratosphere, above most of the water vapor that bedevils ground-based observers. (One Astrobites author took an informative tour in 2013.) Right now, NASA supplies 80% of its funding—the remaining 20% comes from the German space agency (DLR). The history of SOFIA is rife with cost overruns, technical delays, and even attempted cancellations. Now, astronomy missions are no strangers to these problems. But SOFIA’s are unusually severe. Its cost per hour of observations (>\$300,000) currently rivals NASA’s most expensive missions, including the Hubble Space Telescope. The final instrument on SOFIA was just fully implemented, but the plane will shortly be grounded for half a year of maintenance. Sadly, all these factors make SOFIA low hanging fruit for cutting in our age of austerity. All scientists wish that we could fund every innovative mission that produces quality science. But without perpetually increasing budgets, some things must be prioritized.

The FY 2015 budget request “proposes placing SOFIA into storage due to its high operating cost and budget constraints.” This ignominious fate could be avoided if Germany or some other partner were able to step in and supply the lost funding. The rationale is that “savings from SOFIA can have a larger impact supporting other science missions.” NASA hasn’t yet specified what missions will benefit from SOFIA’s potential downfall, but the Explorer program of targeted, small-scale astronomy missions and the extended Cassini mission are popular guesses.

Planetary scientists were worried that NASA would send Cassini plunging into Saturn’s atmosphere years ahead of schedule for want of sufficient funding. (Any watching aliens would likely shake their heads, or head-equivalents, in a mixture of bemusement and disgust.) Cassini is extremely popular, so it seemed likely that NASA would find money to keep it going. The question, however, was where. By proposing an effective end to American investment in SOFIA, it looks like NASA has answered.

## March 06, 2014

### Quantum Diaries

My Week as a Real Scientist

For a week at the end of January, I was a real scientist. Actually, I’m always a real scientist, but only for that week was I tweeting from the @realscientists Twitter account, which has a new scientist each week typing about his or her life and work. I tweeted a lot. I tweeted about the conference I was at. I tweeted about the philosophy of science and religion. I tweeted about how my wife, @CuratorPolly, wasn’t a big fan of me being called the “curator” of the account for the week. I tweeted about airplanes and very possibly bagels. But most of all I tweeted the answers to questions about particle physics and the LHC.

Real Scientists wrote posts for the start and end of my week, and all my tweets for the week are at this Storify page. My regular twitter account, by the way, is @sethzenz.

I was surprised by how many questions people had when they were told that a real physicist at a relatively high-profile Twitter account was open for questions. A lot of the questions had answers that can already be found, often right here on Quantum Diaries! It got me thinking a bit about different ways to communicate to the public about physics. People really seem to value personal interaction, rather than just looking things up, and they interact a lot with an account that they know is tweeting in “real time.” (I almost never do a tweet per minute with my regular account, because I assume it will annoy people, but it’s what people expect stylistically from the @realscientists account.) So maybe we should do special tweet sessions from one of the CERN-related accounts, like @CMSexperiment, where we get four physicists around one computer for an hour and answer questions. (A lot of museums did a similar thing with #AskACurator day last September.) We’ve also discussed the possibility of doing an AMA on Reddit. And the Hangout with CERN series will be starting again soon!

But while you’re waiting for all that, let me tell you a secret: there are lots of physicists on Twitter. (Lists here and here and here, four-part Symmetry Magazine series here and here and here and here.) And I can’t speak for everyone, but an awful lot of us would answer questions if you had any. Anytime. No special events. Just because we like talking about our work. So leave us comments. Tweet at us. Your odds of getting an answer are pretty good.

In other news, Real Scientists is a finalist for the Shorty Award for social media’s best science. We’ll have to wait and see how they — we? — do in a head-to-head matchup with giants like NASA and Neil deGrasse Tyson. But I think it’s clear that people value hearing directly from researchers, and social media seems to give us more and more ways to communicate every year.

### Sean Carroll - Preposterous Universe

Effective Field Theory and Large-Scale Structure

Been falling behind on my favorite thing to do on the blog: post summaries of my own research papers. Back in October I submitted a paper with two Caltech colleagues, postdoc Stefan Leichenauer and grad student Jason Pollack, on the intriguing intersection of effective field theory (EFT) and cosmological large-scale structure (LSS). Now’s a good time to bring it up, as there’s a great popular-level discussion of the idea by Natalie Wolchover in Quanta.

So what is the connection between EFT and LSS? An effective field theory, as loyal readers know, is a way to describe what happens at low energies (or, equivalently, long wavelengths) without having a complete picture of what’s going on at higher energies. In particle physics, we can calculate processes in the Standard Model perfectly well without having a complete picture of grand unification or quantum gravity. It’s not that higher energies are unimportant, it’s just that all of their effects on low-energy physics can be summed up in their contributions to just a handful of measurable parameters.

In cosmology, we consider the evolution of LSS from tiny perturbations at early times to the splendor of galaxies and clusters that we see today. It’s really a story of particles — photons, atoms, dark matter particles — more than a field theory (although of course there’s an even deeper description in which everything is a field theory, but that’s far removed from cosmology). So the right tool is the Boltzmann equation — not the entropy formula that appears on his tombstone, but the equation that tells us how a distribution of particles evolves in phase space. However, the number of particles in the universe is very large indeed, so it’s the most obvious thing in the world to make an approximation by “smoothing” the particle distribution into an effective fluid. That fluid has a density and a velocity, but also has parameters like an effective speed of sound and viscosity. As Leonardo Senatore, one of the pioneers of this approach, says in Quanta, the viscosity of the universe is approximately equal to that of chocolate syrup.
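To make the "effective fluid" statement concrete, here is the schematic set of smoothed equations this program has in mind; the precise form of the effective stress differs between papers, so take this as a sketch of the structure rather than anyone's final result:

$$\partial_\tau\delta + \nabla\cdot\big[(1+\delta)\,\mathbf{v}\big] = 0,\qquad \partial_\tau\mathbf{v} + \mathcal{H}\,\mathbf{v} + (\mathbf{v}\cdot\nabla)\mathbf{v} = -\nabla\Phi - \frac{1}{\rho}\,\nabla\cdot\tau_{\rm eff},\qquad \frac{1}{\rho}\,\nabla\cdot\tau_{\rm eff}\;\simeq\; c_s^2\,\nabla\delta - \tfrac{4}{3}\,\nu\,\nabla(\nabla\cdot\mathbf{v}) + \ldots$$

Here $$\delta$$ is the smoothed density contrast, $$\mathbf{v}$$ the fluid velocity, $$\mathcal{H}$$ the conformal Hubble rate, and the effective stress $$\tau_{\rm eff}$$ encodes the small-scale physics that has been integrated out; the sound speed $$c_s^2$$ and viscosity $$\nu$$ are exactly the kind of parameters that must be fit or matched rather than derived within the smoothed theory itself.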

So the goal of the EFT of LSS program (which is still in its infancy, although there is an important prehistory) is to derive the correct theory of the effective cosmological fluid. That is, to determine how all of the complicated churning dynamics at the scales of galaxies and clusters feeds back onto what happens at larger distances where things are relatively smooth and well-behaved. It turns out that this is more than a fun thing for theorists to spend their time with; getting the EFT right lets us describe what happens even at some length scales that are formally “nonlinear,” and therefore would conventionally be thought of as inaccessible to anything but numerical simulations. I really think it’s the way forward for comparing theoretical predictions to the wave of precision data we are blessed with in cosmology.

Here is the abstract for the paper I wrote with Stefan and Jason:

A Consistent Effective Theory of Long-Wavelength Cosmological Perturbations
Sean M. Carroll, Stefan Leichenauer, Jason Pollack

Effective field theory provides a perturbative framework to study the evolution of cosmological large-scale structure. We investigate the underpinnings of this approach, and suggest new ways to compute correlation functions of cosmological observables. We find that, in contrast with quantum field theory, the appropriate effective theory of classical cosmological perturbations involves interactions that are nonlocal in time. We describe an alternative to the usual approach of smoothing the perturbations, based on a path-integral formulation of the renormalization group equations. This technique allows for improved handling of short-distance modes that are perturbatively generated by long-distance interactions.

As useful as the EFT of LSS approach is, our own contribution is mostly on the formalism side of things. (You will search in vain for any nice plots comparing predictions to data in our paper — but do check out the references.) We try to be especially careful in establishing the foundations of the approach, and along the way we show that it’s not really a “field” theory in the conventional sense, as there are interactions that are nonlocal in time (a result also found by Carrasco, Foreman, Green, and Senatore). This is a formal worry, but doesn’t necessarily mean that the theory is badly behaved; one just has to work a bit to understand the time-dependence of the effective coupling constants.

Here is a video from a physics colloquium I gave at NYU on our paper. A colloquium is intermediate in level between a public talk and a technical seminar, so there are some heavy equations at the end but the beginning is pretty motivational. Enjoy!

### Lubos Motl - string vacua and pheno

Two fresh dark matter stories
Randall, Reece link DM and dinosaurs; strengthening DM signal in Central Milky Way

I want to mention two developments related to dark matter. First, Lisa Randall and Matthew Reece of Harvard have finally released a preprint – to appear in Physical Review Letters – linking extinctions and dark matter:
Dark Matter as a Trigger for Periodic Comet Impacts
As the "comments" (an entry in the arXiv form) point out, there are no dinosaurs in the paper so let me offer you a compensation.

Holy crap, we forgot to install a thermonuclear missile shield above Chick-Ku-Klux-Club in the Yucatan Peninsula (65 megayears before Christ).

At least one of the authors has intensely thought about various extinctions etc. at the same moment when she or he was writing the paper ;-), so the "no dinosaurs" comment is much less off-topic than some people might think.

They take one thing for granted, namely a periodicity of 35 million years in the crater record on the Earth's surface. And they try to link it to a model involving the galactic midplane, a hypothetical dark disk in that plane, and tidal effects on the Oort cloud (a far "Ukraine" of the Solar System; just to be sure, if you happen to be brainwashed by the idea that Ukraine has no permanent link to Russia, "Ukraine" does mean "borderland" or "march" [of Rus'] in the Slavic languages, and even Ukrainian scholars agree with that).

I am a non-expert and am confused by the periodicities in similar things. I know that comets and craters are different things from galactic cosmic rays, but I still don't fully understand why they should exhibit such very different behavior when it comes to periodicity. Note that the periodicity of the galactic cosmic rays used by Shaviv and Veizer is about 140 million years.

At any rate, it is a new paper linking terrestrial traces (in this case, numbers of craters) with some celestial cycles (linked to the inner structure of our galaxy). So when it comes to the basic ideas, I do think that Shaviv-Veizer should be cited by Randall-Reece and it's a mistake that it is not.

The second fresh paper on dark matter I want to mention is
The Characterization of the Gamma-Ray Signal from the Central Milky Way: A Compelling Case for Annihilating Dark Matter
by Daylan, Finkbeiner, Hooper, Linden, Portillo, Rodd, and Slatyer. A good popular story was printed in
Case for Dark Matter Signal Strengthens (by Wolchover, Simons Foundation's Quanta Magazine, copy in The Guardian)
and in Wired (by Adam Mann).

One looks somewhere at the galactic center using the Fermi gamma-ray telescope. She sees tons of frequencies and tries to subtract, as accurately as possible, the radiation from all the known sources (stars). Something is left, especially gamma rays with energies $$1$$–$$3\,\mathrm{GeV}$$. The question is whether this excess is due to some exciting new physics (a dark matter particle in this case) or some relatively mundane astrophysics ("millisecond pulsars" is currently the favorite buzzword of those who prefer this conservative explanation).

They drew the map of "where the excess is coming from" in some more detail, with a more careful geographic subtraction, and the result is that it started to look more like dark matter and less like millisecond pulsars etc. That's why Finkbeiner, a long-term skeptic when it comes to the dark matter interpretation of similar signals, joined the large list of authors (although he's still more skeptical than some co-authors).

The features supporting the dark-matter interpretation include the apparent spherical shape of the DM halo needed to explain that (although it could be elongated a priori); and the extension of the source up to 10° from the Galactic center (where no millisecond pulsars seem to be located) which still seems to agree with a distribution expected for dark matter (they assume a generalized NFW halo profile everywhere).

Even if the photons are created from dark matter by some annihilation, it is hard to determine what the dark matter particle is (and by which process it produces the gamma rays and something else). Their favorite explanation is a dark matter WIMP particle in the $$31$$–$$40\,\mathrm{GeV}$$ interval that annihilates to a $$b\bar b$$ quark pair. I suppose that the two hadrons containing the bottom quark (or antiquark) then continue to decay so that the few-$$\mathrm{GeV}$$ photons appear among the final products of the decay.

If the dark-matter explanation is real, there is a chance – one could even call it a prediction – that the same excess should be seen in the dwarf galaxies orbiting our Milky Way. Jennifer Siegal-Gaskins of Fermi (and Caltech) leaks the opinion that the excess could indeed be there, too. Dan Hooper says that a confirmation of this rumor by a big excess would make this game over. Well, he seems to be convinced that DM is the only possible explanation already now. ;-) It's not surprising given the purely numerical statistical significance: it's a whopping 40 standard deviations, well enough above 5 sigma! But catches could still be there. The number "40 sigma" results from a comparison of $$\chi^2$$ of fits with and without the dark matter halo (imprinted via their assumed decays).

Tracy Slatyer, who is now on the MIT faculty, talks about her surprise that the new data would indeed sharpen the picture. Her preferred WIMP mass is $$35\,\mathrm{GeV}$$. The particle is sometimes referred to as a "hooperon", not to be confused with a "hyperon", but "tracon" could be good, too. Juan Collar of CoGeNT etc., who has defended some "dark matter direct discovery" claims that no longer look too plausible, now says that such a particle may be detected by similar underground experiments if the sensitivity increases 100-fold.

The rest of Wolchover's article is about the sterile-neutrino-like X-ray excess and the possibility that both excesses could actually be genuine, something that could actually be compatible e.g. with the eXciting dark matter models.

### ZapperZ - Physics and Physicists

What Happens When You Cross A Bicycle With A Tricycle
Is this another case against cross-breeding and genetic modification? :)

Those crazy folks at Cornell produced a hybrid between a bicycle and a tricycle, and ended up with a vehicle that has a very weird steering capability.
Similarly, he wanted to see if the bike/trike dichotomy was really true in practice: A vehicle perfectly balanced between tricycle and bicycle would negate the effect of gravity by both preventing it from exerting force with its rear wheels like a trike, and by allowing the rider to lean the bike at any angle without shifting her center of mass.

Ruina’s “bricycle,” as he calls it, is a bike equipped with two training wheels attached by means of a spring. When the spring is stiff, the bricycle turns like a trike. When the spring is loose, the bricycle turns like a bike. But at a certain point when the spring is just stiff enough, the training wheels and rear wheel offset the force of gravity on each other. At that stiffness, the bike becomes unsteerable and falls over if the rider tries to turn, Ruina reported today at the American Physical Society meeting in Denver.
The bricycle is really the same as the gravity-free pendulum. Assuming friction and so on are negligible, if we start from an upright position, the lean and the sideways displacement of the ground contact point are always in proportion to each other. So changing direction would cause both an ever-growing distance from the original line of travel and an ever-growing lean angle. Riders don't tolerate this. Instead, they maintain balance and thus are stuck going more or less straight.
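A toy way to see the "gravity-free pendulum" claim (my own illustrative model, not Ruina's analysis): treat the lean angle θ as an inverted pendulum with a destabilizing gravitational torque m·g·h·θ and a restoring torque k·θ from the spring-loaded training wheels. The trike, bricycle and bike regimes then correspond to k greater than, equal to, and less than m·g·h.

```python
# Toy linearized lean dynamics: I * theta'' = (m*g*h - k) * theta
# (an illustrative model only; the real bricycle also involves steering geometry).
import numpy as np

m, g, h, I = 80.0, 9.81, 1.0, 80.0   # assumed rider+bike mass, CoM height, lean inertia
mgh = m * g * h

def simulate(k, theta0=0.02, dt=1e-3, t_end=3.0):
    """Integrate the linearized lean equation with a semi-implicit Euler step."""
    theta, omega = theta0, 0.0
    for _ in range(int(t_end / dt)):
        omega += (mgh - k) / I * theta * dt
        theta += omega * dt
    return theta

for k, label in [(0.5 * mgh, "bike-like (k < mgh): lean grows, rider must steer to balance"),
                 (1.0 * mgh, "bricycle  (k = mgh): neutral, lean neither grows nor returns"),
                 (2.0 * mgh, "trike-like (k > mgh): lean oscillates back, no balancing needed")]:
    print(f"k/mgh = {k/mgh:.1f}: final lean = {simulate(k):+.4f} rad  -> {label}")
```

At the critical stiffness the two torques cancel and the lean coordinate feels no net restoring or destabilizing force, which is the sense in which the bricycle behaves like a pendulum without gravity.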

So gravity, superficially the thing that makes it hard to balance a bicycle, is the thing that allows you to steer it.
Here's the video:

Zz.

### Marco Frasca - The Gauge Connection

Evidence of the square root of Brownian motion

A mathematical proof of the existence of a stochastic process involving fractional exponents seemed out of the question after some mathematicians claimed it cannot exist. This objection is strongly tied to the current definition and may have to be revised if nature does not agree with it. Stochastic processes are very easy to simulate on a computer. Very few lines of code can decide whether something works or not. Alfonso Farina, Matteo Sedehi and I have introduced the idea that the square root of a Wiener process yields the Schroedinger equation (see here or download a preprint here). This implies that one has to attach a meaning to the equation

$dX=(dW)^\frac{1}{2}.$

In a paper that appeared today on arXiv (see here) we have finally provided this proof: we were right. The idea is to solve such an equation by numerical methods. These methods are themselves a proof of existence. We used the Euler-Maruyama method, the simplest one, and we compared the results as shown in the following figure:

a) Original Brownian motion. b) Same but squaring the formula for the square root. c) Formula of the square root taken as a stochastic equation. d) Same from the stochastic equation in this post.

There is no way to tell them apart, and the original Brownian motion is completely recovered by taking the square of the square-root process computed in three different ways. Each of these fully supports the conclusions we drew in our published paper. You can find the code to reproduce this figure in our arXiv paper. It is obtained by a Monte Carlo simulation with 10000 independent paths. You can play with it, changing the parameters as you like.
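For readers unfamiliar with the numerical method mentioned above, here is a minimal Euler–Maruyama sketch for an ordinary stochastic differential equation. It illustrates only the integrator itself, not the authors' fractional "square root of a Wiener process" scheme, for which see the code in their arXiv paper; the drift, volatility and path counts below are arbitrary illustrative choices.

```python
# Minimal Euler-Maruyama integrator for dX = mu*X dt + sigma*X dW
# (geometric Brownian motion chosen purely as a familiar test case).
import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 0.05, 0.2
T, n_steps, n_paths = 1.0, 1000, 10000
dt = T / n_steps

X = np.ones(n_paths)                               # X(0) = 1 for every path
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)     # Wiener increments ~ N(0, dt)
    X += mu * X * dt + sigma * X * dW              # Euler-Maruyama update

# Sanity check against the exact moments of geometric Brownian motion.
print("simulated mean:", X.mean(), " exact:", np.exp(mu * T))
print("simulated var :", X.var(), " exact:",
      np.exp(2 * mu * T) * (np.exp(sigma**2 * T) - 1))
```

The same "simulate an ensemble of paths and compare statistics" logic is what the post appeals to when it says that a few lines of code can decide whether a proposed process makes sense.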

This paper has an important consequence: our current mathematical understanding of stochastic processes should be properly extended to account for our results. As a by-product, we have shown how, using Pauli matrices, this idea can be generalized to include spin, introducing a new class of stochastic processes in a Clifford algebra.

In conclusion, we would like to point out that, whatever your mathematical definition may be, a stochastic process is always a well-defined entity on numerical grounds. Tests can be easily performed, as we proved here.

Farina, A., Frasca, M., & Sedehi, M. (2013). Solving Schrödinger equation via Tartaglia/Pascal triangle: a possible link between stochastic processing and quantum mechanics. Signal, Image and Video Processing, 8 (1), 27–37. DOI: 10.1007/s11760-013-0473-y

Frasca, M., & Farina, A. (2014). Numerical proof of existence of fractional Wiener processes. arXiv: 1403.1075v1


### Lubos Motl - string vacua and pheno

Brian Greene's talk on the state of string theory
Stephen and Vincent Della Pietra – who are not Capo di tutti capi because there are two of them; instead, they are fratelli – donated a few million dollars to Stony Brook and launched their lecture series. Recent speakers included (or the coming one will include) Wilczek, Linde, Veltman (and Schwarz).

In October 2011, Brian Greene gave a talk on "The State of String theory" which has finally been posted to YouTube; if you can sacrifice 76 minutes (or a part of them), you are invited to watch it.

There are lots of the usual and some unusual introductory comments related to string theory – its history, major conflicts in physics, small vs long scales, why the theory unifies, pants diagrams, extra dimensions and their physics.

Since 27:00, it's more about the "present" – how to extract phenomenology, nonperturbative formulations including the Matrix Theory Hamiltonian, AdS/CFT, dualities, M-theory, landscape, experimental tests, braneworlds, cosmology, inflation, singularities, impacts on enumerative geometry, quantum geometry, a report card, emergence and holography.

Questions begin at 1:03:40 and Brian's answers are often amusing. They suggest a significant gap between the diplomatic formulations he likes to offer to large audiences and what he really thinks. Given this variability, the hypothesis that he actually thinks the same thing about most of these issues (the non-orthodox "interpretations" of quantum mechanics unfortunately don't belong to this list) as your humble correspondent is entirely viable. I also have some spread in my "degree of diplomacy" depending on the context; it's just visibly smaller than Brian's.

The first man complains that Brian omitted the competitors. Brian says that he didn't see any competitors. People laugh and he says that it is a fair question and switches to the mode of Brian Greene the diplomat. He enumerates the guys who are doing loop quantum gravity and how much they believe that they have an alternative and so on. He wraps this flattering discussion by saying that it's good that people are working on various alternatives – and he's also happy that he personally doesn't work on that LQG pile of crap! ;-)

Brian offers an even more entertaining two-colored answer to a question about the "superluminal OPERA neutrinos" that were hot at that time. He says how smart these people are and how they have surely incorporated all the effects of the GPS synchronization, slowdown of light in the air, turbulence of the air, [he enumerates about 10 other possible sources of errors]. These people are so careful, you know. Brian keeps the consistently diplomatic language and says that he might still need some truly independent measurement to be compelled. The punch line arrives in his last sentence: "In fact, I would bet anything I hold dear that their result is wrong." The audience explodes in laughter because in the context, the sentence is a work of a comedian. Recall that the bogus "superluminal" OPERA result was an artifact of a loosely connected optical cable.

The third question is about the links of string theory and the Higgs boson. Brian says some of the same things I did but he also discusses the (currently indeed emerging) nightmare scenario (Higgs and nothing else at the LHC) and the reactions of the funding agencies to this "surprising and exciting" possibility.

When Greene is asked the overly popular question whether the landscape makes string theory unfalsifiable, Brian says that there are different ways to deal with this question. The first way, he shows, is offered by an "offensive cartoon". Just to be sure, the answer to the question is "PAK, you keep on talking like a bitch so I'm gonna slap you like a bitch". I have seen and given many answers to the question but I still think that this is the most accurate and appropriate one. Brian says that "I don't even want to show that [answer]" but I suspect he also thinks it's the best answer on the market. He says it's "not his perspective at all, it is completely inappropriate", so he offers another, less compelling answer. He points out that it is not obvious that the many vacua prevent you from getting anything out of the theory: the points are discrete and sparse, so after some measurements, you may produce predictions. Second, he says that the statistical properties of the vacua should be studied – clusters, groups, statistical predictions etc. He says that the problem is that this is hard to do. I think that a more conceptual problem is that one can only make predictions if one has some probability measure, and there is no known natural probability distribution on the space of vacua (the egalitarian one is clearly wrong).

What is the difference between the choices of fields and parameters of the Standard Model on one side and properties of the compactifications in string theory on the other? Well, the former ones are continuous numbers etc.; the latter are just some discrete data. So the latter yield no undetermined, adjustable continuous dimensionless parameters. For practical applications – whether we really can say what's right or wrong – the framework of string theory is as predictive or unpredictive as the framework of quantum field theory.

Michael Douglas adds the last comment – a more careful analysis of early cosmology in string theory may actually tell us what the shape of extra dimensions ultimately looks like or prefers to look like which may make predictions possible. Brian Greene says "exactly" and that's the end of the talk.

### Symmetrybreaking - Fermilab/SLAC

Physics by hand

To encourage discussion and engagement, a physics forum has banned PowerPoint slides in favor of low-tech whiteboards.

A physicist is more than the sum of his or her slides.

That's why, about six months ago, organizers of a biweekly forum on Large Hadron Collider physics at Fermilab banned PowerPoint presentations in favor of old-fashioned, chalkboard-style talks.

“Without slides, the participants go further off-script, with more interaction and curiosity,” says Andrew Askew, an assistant professor of physics at Florida State University and a co-organizer of the forum. “We wanted to draw out the importance of the audience.”

## March 05, 2014

### arXiv blog

Can a Serious Game Improve Privacy Awareness on Facebook?

Understanding the nature of privacy on Facebook is not always straightforward. Now there’s a game that can help.

### The Great Beyond - Nature blog

Acid-bath stem-cell team releases tip sheet

A group of Japanese researchers, whose revolutionary and strikingly simple method for producing stem cells has drawn questions from other biologists, has published more details of their protocol.

The authors, who developed an ‘acid-bath’ technique that others have so far been unable to reproduce, released technical tips with a press statement today and published them on Nature Protocol Exchange. The document is entitled ‘Essential technical tips for STAP cell conversion culture from somatic cells’.

In it, Haruko Obokata, Hitoshi Niwa and Yoshiki Sasai, all of the RIKEN Centre for Developmental Biology in Kobe, say that despite its “seeming simplicity”, the method requires special care. But it is “absolutely reproducible”, Niwa told Nature News.

The controversy began at the end of January when Obokata and colleagues released two papers in Nature detailing how stress — in the form of low pH or physical pressure — could trigger the reprogramming of a mouse’s cells into an embryonic state, a process they called stimulus-triggered acquisition of pluripotency (STAP).

Cells reprogrammed into this state are ideal for studying the development of disease or the effectiveness of drugs, and could also be transplanted to regenerate failing organs. Making another type of pluripotent stem cell, called induced pluripotent stem (iPS) cells, requires a complex recipe of chemical or genetic factors. Obokata’s simple technique made headlines around the world.

But after the papers were published, they came under attack for a number of reasons, including the presence of duplicated images, an apparently plagiarized passage and the abnormal presentation of certain data. This led some commenters to question the validity of the results.

Something that would resolve the controversy would be the replication of the results by another group, but so far there have only been reports of failed attempts.

Despite a media frenzy, especially in Japan, with headlines even suggesting the results are fraudulent, the authors have stood by the work. Today Niwa told Nature News that members of the team besides Obokata have replicated the bulk of the work and that others outside the laboratory have succeeded in the first crucial step, inducing Oct3/4 expression after the acid treatment.

But the authors admit that the procedure is more complicated than originally advertised, leading to the publication of the tips.

The 10-page document states: “Despite its seeming simplicity, this procedure requires special care in cell handling and culture conditions, as well as in the choice of starting cell population.” The authors also point to the importance of bringing the cells gradually to the brink of death — which kills some 80% of them after 2–3 days — to reach the “optimal level of sub-lethal stress”.

The tips break down the process into three sections: collection of tissue and treatment with low-pH needed to produce STAP cells; preparing the culture needed to convert STAP cells to STAP stem cells, which behave like iPS cells or embryonic stem cells; and preparing the culture needed to turn STAP cells into “FI cells”, which can form placenta.

The document includes 28 “important” tips, which note the necessity of starting with primary cells (as opposed to cultured cells); that mice less than a week old, especially male mice, gave better results; the recommendation of using non-adhesive plates, which allow cell mobility and cluster formation; the importance of getting the cell density right in culture; the recommendation of using mice of a specific genetic lineage and many other detailed hints for using the proper culture conditions.

Martin Pera, a stem-cell researcher at the University of Melbourne in Australia, says: “The details provided in Nature Protocol Exchange will undoubtedly be helpful to those trying to repeat these findings.” But Pera, who has not tried to make STAP cells, adds: “The additional information does not seem to me to reveal any key procedural detail without which it would be impossible to duplicate the work. It appears instead to reinforce and emphasise some aspects of the technique that were disclosed originally.”

Those who are trying to replicate the method are intrigued by the publication of the tips. Jacob Hanna, at the Weizmann Institute of Science in Rehovot, Israel, has made 10 batches of cells in an as-yet-unsuccessful effort to make STAP cells. He looks forward to trying some of the tips on culture conditions. "Some protocols can indeed be tricky and finicky and I commend the authors on making the effort to reach out to the scientific community," he says. But he questions how the more complicated protocol would apply to another method of producing STAP cells advertised in the original article — putting pressure on the cell membranes. "I find that hard to imagine as a very complicated manipulation," he says.

Qi Zhou, of the Institute of Zoology in Beijing, also appreciates the authors sharing all the details, as “some were overlooked” in his efforts to make STAP cells. The restrictions on the origin of the cell type to specific stage and gender “raise very interesting questions which may help to explain the underlying mechanism of STAP”, he says.

Niwa says that the original team is working on a “full protocol” that will make it easier to make STAP cells, but that won’t be available for at least a month. “We are not sure when it will happen because we are now trying to improve some point to enhance the reproducibility,” he says.

### The n-Category Cafe

Guest post by Nick Gurski

I have been thinking about various sorts of operads with my PhD student Alex Corner, and have become interested in the following very concrete question: what are examples of operads in the category of finite groups under the cartesian product? I don’t know any really interesting examples, but maybe you do! After the break I will explain why I got interested in this question, and tell you about some examples that I do know.

Alex and I started off thinking about various sorts of things you might do with operads in $\mathbf{Cat}$, and were eventually forced into what we currently call an action operad. This is an operad $G$ whose job it is to act on the objects of other operads. The key examples to keep in mind are the terminal operad (each set is just a singleton), the symmetric operad (the $n$th set is the $n$th symmetric group), and the braid operad (the $n$th set is the $n$th braid group). The technical definition involves an operad $G$, a group structure on each set $G(n)$, a map of operads $\pi: G \to \Sigma$ to the symmetric operad which is levelwise a group homomorphism, and a final condition (when it makes sense) relating operadic composition, $\mu$, with group multiplication:

$\mu(g; f_1, \ldots, f_n) \cdot \mu(g'; f_1', \ldots, f_n') = \mu(g g'; f_{\pi(g')(1)} f_1', \ldots, f_{\pi(g')(n)} f_n').$

These ideas have cropped up before, for example in Nathalie Wahl’s thesis or this preprint of Wenbin Zhang. Once you have this definition, you can define operads which are equivariant with respect to $G$: you have an operad $P$, a group action of $G(n)$ on $P(n)$ for each $n$, and some equivariance conditions that generalize the equivariance conditions for a symmetric operad.

This isn’t the only thing you can do with an action operad; you can also think about the 2-monad on $\mathbf{Cat}$ whose algebras are strict monoidal categories where $G(n)$ acts naturally on $n$-fold tensor products. If you do this with the symmetric operad, you get symmetric strict monoidal categories (or permutative categories, if you are a topologist); if you do this with the (ribbon) braid operad, you get (ribbon) braided strict monoidal categories; and if you do this with the action operad of all terminal groups, you get back plain old strict monoidal categories. The only other “naturally-occurring” example of an action operad that I know of is the operad of $n$-fruit cactus groups, $J_n$. These groups come up in the representation theory of quantum groups, particularly the theory of crystals, and the monoidal structure you get out here is something Drinfeld called a coboundary category. I can give you a generators-and-relations definition of these groups, at which point I would have completely exhausted my understanding of this operad. The best reference that I know of is the paper Crystals and coboundary categories by Henriques and Kamnitzer.

What does this have to do with my question about operads of finite groups? Well, as it turns out, the structure map for an action operad $\pi: G \to \Sigma$ only has two options: it can be surjective, or it can be the zero map (i.e., everything maps to the identity permutation). Furthermore, that condition I wrote down relating group multiplication and operadic composition says that giving an action operad with $\pi$ the zero map is equivalent to giving an operad in which the operadic composition maps all preserve group multiplication. Alex and I already showed that the operadic composition of all identity elements is the identity element in the target, in other words an action operad with $\pi$ the zero map is just an operad in the category of groups using the cartesian product.

You can take kernels of maps between action operads, so in particular given any action operad $G$ you can take the kernel of $\pi: G \to \Sigma$; this gives you an operad in groups. For the examples above, you get finite groups when $G$ is the terminal operad or $G = \Sigma$ (as the terminal operad is obviously the kernel in the case of $G = \Sigma$), but for the rest of the examples you get an operad in the category of groups, but most of those groups are infinite. The groups involved are the so-called pure versions: pure braids, pure ribbon braids, and pure $n$-fruit cacti. One can then think of an action operad with surjective map $\pi$ as being an extension of the symmetric operad by an operad in the category of groups (the “pure” version), and that action operad is finite if and only if the operad in the category of groups is one containing only finite groups. We can translate this back into thinking about monoidal categories by then noting if we have some notion of strict monoidal category in which $n$th tensor powers come equipped with a natural action of a finite group $G(n)$ for all $n$, then we must be able to dig up an operad in the category of finite groups.

Now let’s talk concrete examples: what operads do I know in the category of finite groups? Well, there is obviously the terminal operad, but I can go very slightly further in that I can tell you how to construct some new ones. Here are two methods you can use to construct operads of finite groups.

• Let $A$ be a finite abelian group. Then there is an operad $\underline{A}$ where $\underline{A}(n) = A^{n}$ (the power here is a cartesian power). The operadic composition map $A^{n} \times A^{k_1} \times \cdots \times A^{k_n} \to A^{\sum k_i}$ takes the vector $(a_1, \ldots, a_n)$ in the first coordinate and duplicates the $i$th coordinate $k_i$ times, then adds the result to the vector you get by just concatenating the $n$ vectors in the $A^{k_i}$. Here $A$ must be abelian as it appears as $\underline{A}(1)$, and $G(1)$ must be abelian for any action operad by an Eckmann–Hilton argument. (A small computational sketch of this composition appears after the list.)
• Now let $G$ be any finite group, but in fact you will see that we don’t use anything interesting about the group structure here. There is an operad $G^{c2}$ with $G^{c2}(n) = G^{\binom{n}{2}}$ given the pointwise group structure. You should think of this as the set of functions from $\{ (i,j) : 1 \leq i < j \leq n \}$ to $G$. Operad composition is a map $G^{\binom{n}{2}} \times G^{\binom{k_1}{2}} \times \cdots \times G^{\binom{k_n}{2}} \to G^{\binom{\sum k_i}{2}}.$ If we are given $1 \leq a < b \leq \sum k_i$, we must give back an element of $G$. If there is some $r$ such that $\sum_{i=1}^{r-1} k_i \leq a < b < \sum_{i=1}^{r} k_i$, then we use the function coming from $G^{c2}(r)$ evaluated on $1 \leq a + 1 - \sum_{i=1}^{r-1} k_i < b + 1 - \sum_{i=1}^{r-1} k_i \leq k_r$. If not, then there exist $r < s$ such that $a$ lies in the $r$th “interval” as we had before and $b$ lies in the $s$th interval, so then you use the function coming from $G^{c2}(n)$ evaluated on $r < s$.
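
To make the first construction concrete, here is a minimal sketch (my own illustration, not from the post), taking $A = \mathbb{Z}/m$ and representing an element of $\underline{A}(n)$ as a length-$n$ tuple of residues:

```python
# A small illustration (not from the post): the operadic composition in the
# operad underline(A) for A = Z/m. Duplicate the i-th coordinate of g exactly
# k_i times, then add (mod m) the concatenation of the f_i.
def compose(g, fs, m):
    """g: tuple in A^n; fs: list of n tuples with fs[i] in A^{k_i}.
    Returns an element of A^{k_1 + ... + k_n}."""
    duplicated = [gi for gi, fi in zip(g, fs) for _ in fi]
    concatenated = [x for fi in fs for x in fi]
    return tuple((d + c) % m for d, c in zip(duplicated, concatenated))

# Example in Z/5: compose((1, 2), [(3,), (0, 4)], 5) == (4, 2, 1)
```

Checking associativity and unitality of this composition on small random inputs is a one-liner from here, which is the sense in which such constructions are easy to experiment with.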

I am reasonably content with the first of these constructions; I understand how to do the second but don’t really know where it comes from; I have some ideas about how one could try to insert the finite group of their choice as $G(0)$ but haven’t checked the details; and I know basically nothing else. Furthermore, I don’t know how these interact with each other, or how you can form extensions of $\Sigma$ with them outside of some obvious constructions.

Those are just the two straightforward approaches that I know of to construct operads in the category of finite groups. You can also try to construct these operads from operads of topological spaces or simplicial sets, but once again I don’t know of an example that produces finite things (apart from ones giving the groups above). Do you know any others?

### John Baez - Azimuth

Markov Models of Social Change (Part 2)

guest post by Vanessa Schweizer

This is my first post to Azimuth. It’s a companion to the one by Alastair Jamieson-Lane. I’m an assistant professor at the University of Waterloo in Canada with the Centre for Knowledge Integration, or CKI. Through our teaching and research, the CKI focuses on integrating what appear, at first blush, to be drastically different fields in order to make the world a better place. The very topics I would like to cover today, mathematics and policy design, are an example of our flavour of knowledge integration. However, before getting into that, perhaps some background on how I got here would be helpful.

### The conundrum of complex systems

For about eight years, I have focused on various problems related to long-term forecasting of social and technological change (long-term meaning in excess of 10 years). I became interested in these problems because they are particularly relevant to how we understand and respond to global environmental changes such as climate change.

In case you don’t know much about global warming or what the fuss is about, part of what makes the problem particularly difficult is that the feedback from the physical climate system to human political and economic systems is exceedingly slow. It is so slow, that under traditional economic and political analyses, an optimal policy strategy may appear to be to wait before making any major decisions – that is, wait for scientific knowledge and technologies to improve, or at least wait until the next election [1]. Let somebody else make the tough (and potentially politically unpopular) decisions!

The problem with waiting is that the greenhouse gases that scientists are most concerned about stay in the atmosphere for decades or centuries. They are also churned out by the gigatonne each year. Thus the warming trends that we have experienced for the past 30 years, for instance, are the cumulative result of emissions that happened not only recently but also long ago—in the case of carbon dioxide, as far back as the turn of the 20th century. The world in the 1910s was quainter than it is now, and as more economies around the globe industrialize and modernize, it is natural to wonder: how will we manage to power it all? Will we still rely so heavily on fossil fuels, which are the primary source of our carbon dioxide emissions?

Such questions are part of what makes climate change a controversial topic. Present-day policy decisions about energy use will influence the climatic conditions of the future, so what kind of future (both near-term and long-term) do we want?

### Futures studies and trying to learn from the past

Many approaches can be taken to answer the question of what kind of future we want. An approach familiar to the political world is for a leader to espouse his or her particular hopes and concerns for the future, then work to convince others that those ideas are more relevant than someone else’s. Alternatively, economists do better by developing and investigating different simulations of economic developments over time; however, the predictive power of even these tools drops off precipitously beyond the 10-year time horizon.

The limitations of these approaches should not be too surprising, since any stockbroker will say that when making financial investments, past performance is not necessarily indicative of future results. We can expect the same problem with rhetorical appeals, or economic models, that are based on past performances or empirical (which also implies historical) relationships.

### A different take on foresight

A different approach avoids the frustration of proving history to be a fickle tutor for the future. By setting aside the supposition that we must be able to explain why the future might play out a particular way (that is, to know the ‘history’ of a possible future outcome), alternative futures 20, 50, or 100 years hence can be conceptualized as different sets of conditions that may substantially diverge from what we see today and have seen before. This perspective is employed in cross-impact balance analysis, an algorithm that searches for conditions that can be demonstrated to be self-consistent [3].

Findings from cross-impact balance analyses have been informative for scientific assessments produced by the Intergovernmental Panel on Climate Change, or IPCC. To present a coherent picture of the climate change problem, the IPCC has coordinated scenario studies across economic and policy analysts as well as climate scientists since the 1990s. Prior to the development of the cross-impact balance method, these researchers had to identify appropriate ranges for rates of population growth, economic growth, energy efficiency improvements, and so on through their best judgment.

A retrospective using cross-impact balances on the first Special Report on Emissions Scenarios found that the researchers did a good job in many respects. However, they underrepresented the large number of alternative futures that would result in high greenhouse gas emissions in the absence of climate policy [4].

As part of the latest update to these coordinated scenarios, climate change researchers decided it would be useful to organize alternative futures according to socio-economic conditions that pose greater or fewer challenges to mitigation and adaptation. Mitigation refers to policy actions that decrease greenhouse gas emissions, while adaptation refers to reducing harms due to climate change or taking advantage of its benefits. Some climate change researchers argued that it would be sufficient to consider alternative futures where challenges to mitigation and adaptation co-varied, e.g. three families of futures where mitigation and adaptation challenges would be low, medium, or high.

Instead, cross-impact balances revealed that mixed-outcome futures—such as socio-economic conditions simultaneously producing fewer challenges to mitigation but greater challenges to adaptation—could not be completely ignored. This counter-intuitive finding, among others, brought the importance of quality of governance to the fore [5].

Although it is generally recognized that quality of governance—e.g. control of corruption and the rule of law—affects quality of life [6], many in the climate change research community have focused on technological improvements, such as drought-resistant crops, or economic incentives, such as carbon prices, for mitigation and adaptation. The cross-impact balance results underscored that should global patterns of quality of governance across nations take a turn for the worse, poor governance could stymie these efforts. This is because the influence of quality of governance is pervasive; where corruption is permitted at the highest levels of power, it may be permitted at other levels as well—including levels that are responsible for building schools, teaching literacy, maintaining roads, enforcing public order, and so forth.

The cross-impact balance study revealed this in the abstract, as summarized in the example matrices below. Alastair included a matrix like these in his post, where he explained that numerical judgments in such a matrix can be used to calculate the net impact of simultaneous influences on system factors. My purpose in presenting these matrices is a bit different, as the matrix structure can also explain why particular outcomes behave as system attractors.

In this example, a solid light gray square means that the row factor directly influences the column factor some amount, while white space means that there is no direct influence:

Dark gray squares along the diagonal have no meaning, since everything is perfectly correlated to itself. The pink squares highlight the rows for the factors “quality of governance” and “economy.” The importance of these rows is more apparent here; the matrix above is a truncated version of this more detailed one:


The pink rows are highlighted because of a striking property of these factors. They are the two most influential factors of the system, as you can see from how many solid squares appear in their rows. The direct influence of quality of governance is second only to the economy. (Careful observers will note that the economy directly influences quality of governance, while quality of governance directly influences the economy). Other scholars have meticulously documented similar findings through observations [7].

As a method for climate policy analysis, cross-impact balances fill an important gap between genius forecasting (i.e., ideas about the far-off future espoused by one person) and scientific judgments that, in the face of deep uncertainty, are overconfident (i.e. neglecting the ‘fat’ or ‘long’ tails of a distribution).
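
For readers who want to poke at this, here is a hypothetical, minimal sketch of the consistency check behind cross-impact balances (my own Python illustration; the data layout and the simple "pick the highest-scoring state" rule are assumptions in the spirit of [3], not the actual matrices or software used in the studies above):

```python
# Hypothetical sketch of a cross-impact balance (CIB) consistency check.
# A scenario (one state per factor) is treated as self-consistent when every
# factor sits in the state receiving the highest total impact score from the
# states chosen for all the other factors.
from itertools import product

def is_consistent(scenario, impacts, n_states):
    """scenario: tuple of state indices, one per factor.
    impacts[i][si][j][sj]: judged impact of factor i in state si on factor j
    being in state sj (diagonal blocks i == j are ignored)."""
    n = len(scenario)
    for j in range(n):
        scores = [sum(impacts[i][scenario[i]][j][sj] for i in range(n) if i != j)
                  for sj in range(n_states[j])]
        if scores[scenario[j]] < max(scores):
            return False
    return True

def attractors(impacts, n_states):
    """Brute-force enumeration of self-consistent scenarios; only feasible for
    small systems, which is why the search problem mentioned later matters."""
    return [s for s in product(*(range(k) for k in n_states))
            if is_consistent(s, impacts, n_states)]
```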

### Wanted: intrepid explorers of future possibilities

However, alternative visions of the future are only part of the information that’s needed to create the future that is desired. Descriptions of courses of action that are likely to get us there are also helpful. In this regard, the post by Jamieson-Lane describes early work on modifying cross-impact balances for studying transition scenarios rather than searching primarily for system attractors.

This is where you, as the mathematician or physicist, come in! I have been working with cross-impact balances as a policy analyst, and I can see the potential of this method to revolutionize policy discussions—not only for climate change but also for policy design in general. However, as pointed out by entrepreneurship professor Karl T. Ulrich, design problems are NP-complete. Those of us with lesser math skills can be easily intimidated by the scope of such search problems. For this reason, many analysts have resigned themselves to ad hoc explorations of the vast space of future possibilities. However, some analysts like me think it is important to develop methods that do better. I hope that some of you Azimuth readers may be up for collaborating with like-minded individuals on the challenge!

### References

The graph of carbon emissions is from reference [2]; the pictures of the matrices are adapted from reference [5]:

[1] M. Granger Morgan, Milind Kandlikar, James Risbey and Hadi Dowlatabadi, Why conventional tools for policy analysis are often inadequate for problems of global change, Climatic Change 41 (1999), 271–281.

[2] T.F. Stocker et al., Technical Summary, in Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change (2013), T.F. Stocker, D. Qin, G.-K. Plattner, M. Tignor, S.K. Allen, J. Boschung, A. Nauels, Y. Xia, V. Bex, and P.M. Midgley (eds.) Cambridge University Press, New York.

[3] Wolfgang Weimer-Jehle, Cross-impact balances: a system-theoretical approach to cross-impact analysis, Technological Forecasting & Social Change 73 (2006), 334–361.

[4] Vanessa J. Schweizer and Elmar Kriegler, Improving environmental change research with systematic techniques for qualitative scenarios, Environmental Research Letters 7 (2012), 044011.

[5] Vanessa J. Schweizer and Brian C. O’Neill, Systematic construction of global socioeconomic pathways using internally consistent element combinations, Climatic Change 122 (2014), 431–445.

[6] Daniel Kaufmann, Aart Kraay and Massimo Mastruzzi, Worldwide Governance Indicators (2013), The World Bank Group.

[7] Daron Acemoglu and James Robinson, Why Nations Fail: The Origins of Power, Prosperity, and Poverty. Website.

### Lubos Motl - string vacua and pheno

Particle fever: where to see
Particle Fever, David Kaplan's universally praised full-fledged movie about particle physics, is coming to movie theaters in the U.S. today.

Lots of news outlets discuss what the movie is all about.

It took seven years for the movie to be shot, and so on. It is about both the theorists and the experimenters and the emotions they have been going through.

For your convenience, here is a copy of the dates and places where the film will be shown. The film's website also tells you about the film festivals where the movie is participating.

March 5
New York, NY - Film Forum

March 6
Santa Barbara, CA - UCSB

March 7
Los Angeles, CA - Nuart
Toronto, ON – The Bloor
Irvine, CA – University Town Center

March 8
Sioux Falls, SD – Cinema Falls

March 14
Seattle, WA - Landmark
San Francisco, CA - Landmark
Berkeley, CA - Landmark
Bellingham, WA - Pickford Film Center
Scottsdale, AZ - Harkins Camelview
Chicago, IL - Music Box
Naperville, IL – AMC Showplace 16
Nashville, TN - Belcourt

March 16
Sioux Falls, SD – Cinema Falls

March 19
Ithaca, NY - Cornell Cinema

March 21
Cambridge, MA - Kendall Square
Minneapolis, MN - Landmark
Washington D.C. - E Street
Baltimore, MD - Charles
San Diego, CA - Landmark
Denver, CO – Landmark

March 28
Santa Fe, NM - CCA
Columbus, OH - Gateway
Kansas City, KS - Tivoli
Atlanta, GA – Midtown Art
Houston, TX – Sundance Cinemas

March 31
Ann Arbor, MI – Michigan Theater

April 3
Oklahoma City, OK – Oklahoma City Museum of Art

April 4
Boise, ID - Flicks
Charlotte, NC – Manor
Charlottesville, VA – Downtown Mall

April 11
Albany, NY – Spectrum 8

April 15
Portland, OR – Oregon Museum of Science

April 18
Austin, TX – Arbor
Knoxville, TN – Downtown West
Eugene, OR – Bijou Art

April 25
Lincoln, NE – Mary Riepma Ross Film Center

### Clifford V. Johnson - Asymptotia

Cloudy with a chance of Physics
I don't know. That's a bit of a desperate title. But in exchange, a rather nice cloud formation, don't you think? This was from the sky over Los Angeles yesterday evening (a shot of the sky in the other direction is to the right), and my first thought was "what's the physics behind these beautiful structures?" There's enough regularity here to expect there to be a mechanism, but I do not know what it is. Some combination of atmospheric conditions like wind speed, temperature, perhaps some layering of different bodies of air, and so forth, resulted in this and I'd love to know more. What factors set the roughly regular size of the structures, their pretty uniform distance apart, etc? (These are typical physicist's questions, in case you're [...]

## March 04, 2014

### Quantum Diaries

Particle Beam Cancer Therapy: The Promise and Challenges

Advances in accelerators built for fundamental physics research have inspired improved cancer treatment facilities. But will one of the most promising—a carbon ion treatment facility—be built in the U.S.? Participants at a symposium organized by Brookhaven Lab for the 2014 AAAS meeting explored the science and surrounding issues.

by Karen McNulty Walsh

Accelerator physicists are natural-born problem solvers, finding ever more powerful ways to generate and steer particle beams for research into the mysteries of physics, materials, and matter. And from the very beginning, this field born at the dawn of the atomic age has actively sought ways to apply advanced technologies to tackle more practical problems. At the top of the list—even in those early days—was taking aim at cancer, the second leading cause of death in the U.S. today, affecting one in two men and one in three women.

Using beams of accelerated protons or heavier ions such as carbon, oncologists can deliver cell-killing energy to precisely targeted tumors—and do so without causing extensive damage to surrounding healthy tissue, eliminating the major drawback of conventional radiation therapy using x-rays.

“This is cancer care aimed at curing cancer, not just treating it,” said Ken Peach, a physicist and professor at the Particle Therapy Cancer Research Institute at Oxford University.

Peach was one of six participants in a symposium exploring the latest advances and challenges in this field—and a related press briefing attended by more than 30 science journalists—at the 2014 meeting of the American Association for the Advancement of Science in Chicago on February 16. The session, “Targeting Tumors: Ion Beam Accelerators Take Aim at Cancer,” was organized by the U.S. Department of Energy’s (DOE’s) Brookhaven National Laboratory, an active partner in an effort to build a prototype carbon-ion accelerator for medical research and therapy. Brookhaven Lab is also currently the only place in the U.S. where scientists can conduct fundamental radiobiological studies of how beams of ions heavier than protons, such as carbon ions, affect cells and DNA.

Participants in a symposium and press briefing exploring the latest advances and challenges in particle therapy for cancer at the 2014 AAAS meeting: Eric Colby (U.S. Department of Energy), Jim Deye (National Cancer Institute), Hak Choy (University of Texas Southwestern Medical Center), Kathryn Held (Harvard Medical School and Massachusetts General Hospital), Stephen Peggs (Brookhaven National Laboratory and Stony Brook University), and Ken Peach (Oxford University). (Credit: AAAS)

“We could cure a very high percentage of tumors if we could give sufficiently high doses of radiation, but we can’t because of the damage to healthy tissue,” said radiation biologist Kathryn Held of Harvard Medical School and Massachusetts General Hospital during her presentation. “That’s the advantage of particles. We can tailor the dose to the tumor and limit the amount of damage in the critical surrounding normal tissues.”

Yet despite the promise of this approach and the emergence of encouraging clinical results from carbon treatment facilities in Asia and Europe, there are currently no carbon therapy centers operating in the U.S.

Participants in the Brookhaven-organized session agreed: That situation has to change—especially since the very idea of particle therapy was born in the U.S.

Physicists as pioneers

“When Harvard physicist Robert Wilson, who later became the first director of Fermilab, was asked to explore the potential dangers of proton particle radiation [just after World War II], he flipped the problem on its head and described how proton beams might be extremely useful—as effective killers of cancer cells,” said Stephen Peggs, an accelerator physicist at Brookhaven Lab and adjunct professor at Stony Brook University.

As Peggs explained, the reason is simple: Unlike conventional x-rays, which deposit energy—and cause damage—all along their path as they travel through healthy tissue en route to a tumor (and beyond it), protons and other ions deposit most of their energy where the beam stops. Using magnets, accelerators can steer these charged particles left, right, up, and down and vary the energy of the beam to precisely place the cell-killing energy right where it’s needed: in the tumor.

The first implementation of particle therapy used helium and other ions generated by the Bevatron at Berkeley Lab. Those spin-off studies “established a foundation for all subsequent ion therapy,” Peggs said. And as accelerators for physics research grew in size, pioneering experiments in particle therapy continued, operating “parasitically” until the very first accelerator built for hospital-based proton therapy was completed with the help of DOE scientists at Fermilab in 1990.

But even before that machine left Illinois for Loma Linda University Medical Center in California, physicists were thinking about how it could be made better. The mantra of making machines smaller, faster, cheaper—and capable of accelerating more kinds of ions—has driven the field since then.

Advances in magnet technology, including compact superconducting magnets and beam-delivery systems developed at Brookhaven Lab, hold great promise for new machines. Peggs is working to incorporate these technologies in a prototype ‘ion Rapid Cycling Medical Synchrotron’ (iRCMS) capable of delivering protons and/or carbon ions for radiobiology research and for treating patients.

Brookhaven Lab accelerator physicist Stephen Peggs with magnet technology that could reduce the size of particle accelerators needed to steer heavy ion beams and deliver cell-killing energy to precisely targeted tumors while sparing surrounding healthy tissue.

Small machine, big particle impact

The benefits of using charged particles heavier than protons (e.g., carbon ions) stem not only from their physical properties—they stop and deposit their energy over an even smaller and better targeted tumor volume than protons—but also from a range of biological advantages they have over x-rays.

As Kathryn Held elaborated in her talk, compared with x-ray photons, “carbon ions are much more effective at killing tumor cells. They put a huge hole through DNA compared to the small pinprick caused by x-rays, which causes clustered or complex DNA damage that is less accurately repaired between treatments—less repaired, period—and thus more lethal [to the tumor].” Carbon ions also appear to be more effective than x-rays at killing oxygen-deprived tumor cells, and might be most effective in fewer higher doses, “but we need more basic biological studies to really understand these effects,” Held said.

Different types of radiation treatment cause different kinds of damage to the DNA in a tumor cell. X-ray photons (top arrow) cause fairly simple damage (purple area) that cancer cells can sometimes repair between treatments. Charged particles—particularly ions heavier than protons (bottom arrow)—cause more and more complex forms of damage, resulting in less repair and a more lethal effect on the tumor. (Credit: NASA)

Held conducts research at the NASA Space Radiation Laboratory (NSRL) at Brookhaven Lab, an accelerator-based facility designed to understand the risks of radiation exposure for future astronauts and to help design protections for them. Much of that research is relevant to understanding the mechanisms and basic radiobiological responses that can apply to the treatment of cancer. But additional facilities and funding are needed for research specifically aimed at understanding the radiobiological effects of heavier ions for potential cancer therapies, Held emphasized.

Hak Choy, a radiation oncologist and chair in the Department of Radiation Oncology at the University of Texas Southwestern Medical Center, presented compelling clinical data on the benefits of proton particle therapy, including improved outcomes and reduced side effects when compared with conventional radiation, particularly for treating tumors in sensitive areas such as the brain and spine and in children. “When you can target the tumor and spare critical tissue you get fewer side effects,” he said.

Data from Japan and Europe suggest that carbon ions could be three or four times more biologically potent than protons, Choy said, backing that claim with impressive survival statistics for certain types of cancers where carbon therapy surpassed protons, and was even better than surgery for one type of salivary gland cancer. “And carbon therapy is noninvasive,” he emphasized.

To learn more about this promising technology and the challenges of building a carbon ion treatment/research facility in the U.S., including perspectives from the National Cancer Institute, DOE and a discussion about economics, read the full summary of the AAAS symposium here: http://www.bnl.gov/newsroom/news.php?a=24672.

Karen McNulty Walsh is a science writer in the Media & Communications Office at Brookhaven National Laboratory.

### Symmetrybreaking - Fermilab/SLAC

There’s an app for that

From simulators and reference tools to fun and games, physics-related mobile applications run the gamut. Some of the apps were designed by physicists for use by physicists, while others are intended to inform the general public about physics laws and the field's grandest experiments, or offer an entertaining escape.

### arXiv blog

How Airships Are Set To Revolutionise Science

Airships can patrol the upper atmosphere, monitoring the ground or peering at the stars for a fraction of a cost of satellites, according to a new report. All that’s needed is a prize to kickstart innovation.

The Naval Air Engineering Station in Lakehurst, New Jersey, must be one of the most famous airfields in the world. If you’ve ever watched the extraordinary footage of the German passenger airship Hindenburg catching fire as it attempted to moor, you’ll have seen Lakehurst. That’s where the disaster took place.

## March 03, 2014

### arXiv blog

Mathematical Proof Reveals How To Make The Internet More Earthquake-Proof

Decentralised networks are naturally robust against certain types of attack. Now one mathematician says advanced geometry shows how to make them even more robust.

One of the common myths about the internet is that it was originally designed during the Cold War to survive nuclear attack. Historians of the internet are quick to point out that this was not at all one of the design goals of the early network, although the decentralised nature of the system turns out to make it much more robust than any kind of centralised network.

### ZapperZ - Physics and Physicists

Checking On Antimatter
This is a rather nice, short summary on the study of anti-atoms, and in particular, CERN's effort to study the properties of anti-hydrogen and why it is so important.

With a big enough sample of anti-hydrogen, one can make detailed studies of the energy levels that the positron can occupy in its journey around the antiproton. These energy levels have been measured very precisely for hydrogen, and the expectation is that they should be identical in antihydrogen. But we won’t know until we look.
[...]
The symmetry principle which these experiments are designed to test is whether physics, and therefore the whole universe, would look the same if we simultaneously swapped all matter for antimatter, left for right, and backwards in time for forwards in time. This is called a CPT (Charge/Parity/Time) inversion. The Standard Model of physics, and almost all variants on it, require that indeed the universe would be identical after such an inversion.
Now pay attention, kids. In physics, even when some of our most cherished theories have been used, and known to be valid, we STILL go out and test out many of its predictions. Here, the Standard Model says that antihydrogen should behave the same way as hydrogen. While the Standard Model certainly has been useful, and has been correct in many aspects, we do not simply accept its predictions for the behavior of antihydrogen. We still want to test it! In fact, many physicists are hoping that we see something the Standard Model can't explain, that something "weird" is going on that might give hints of new physics. This is what many of us in this field look gleefully for!

This is how science works. We verify an idea, a theory, etc., but we continue to test its RANGE OF VALIDITY, i.e. how far out does this thing work? It works here, but does it work there? It works when you do this, but does it work when you do that? This is how we expand the boundaries of our knowledge.

Zz.

### astrobites - astro-ph reader's digest

Kepler 2.0

Last May, we learned that the Kepler Space Telescope could no longer go on finding transiting exoplanets as it had since its launch in 2009, due to the failure of a critical reaction wheel used to accurately point the telescope. Although Kepler could no longer carry out its intended mission, many of its powerful capabilities remain intact. The team called for ideas for a second mission for Kepler, and astronomers enthusiastically submitted their plans, which I summarized in this astrobite from last September. Since then, the team has considered these ideas while formulating a new mission for Kepler called “K2”. K2 was discussed at the AAS meeting in January, which was covered in this astrobite. Now, Steve Howell and collaborators have posted to the arXiv a more detailed description of the K2 mission, which I’ll review in this post.

The original Kepler mission observed a single patch of sky, monitoring a pre-selected set of 156,000 stars for the changes in their brightness that indicate the presence of transiting planets. With Kepler’s reduced pointing capability, the light from the stars would drift across the camera over time, smearing out the signal and reducing the sensitivity of the instrument to detect very minute brightness fluctuations. K2 is designed to minimize this problem by pointing only in the ecliptic (the plane defined by Kepler’s orbit around the Sun), so that the photon pressure from the Sun on the spacecraft is balanced.

But pointing in the ecliptic means that Kepler can no longer focus on a single patch of sky, because at some point in its orbit Kepler would be pointed towards the Sun (and sunlight would get into the telescope). So for the K2 mission, Kepler will point to a new field of view every 83 days. These 83-day-long periods of time, and their corresponding fields of view, are called “campaigns”. The way that Kepler will reorient to a new field of view for each campaign is shown in Figure 1, and the locations of these fields are shown in Figure 2.

While the campaign fields have been chosen, the specific targets within each field to be monitored have not. K2 is a “community-driven” observatory, so targets will be selected from proposals submitted by anyone in the scientific community. The team expects to observe between 10,000 and 20,000 targets in each campaign.

Figure 1. A diagram of how Kepler will reorient to a new field of view for each campaign. Also shown is how the spacecraft will be balanced against radiation pressure by staying pointed within the ecliptic plane.

Compared to the original mission, K2’s ability to detect transiting planets is reduced in two ways. First, its photometric precision–the ability to detect minute changes in a star’s brightness–is about 4 times worse (although it would have been even worse if the telescope were not confined to point in the ecliptic). Second, the time baseline over which each target can be monitored is drastically shortened due to the necessity of using multiple campaigns. K2 will not be able to detect new planets in the habitable zone around Sun-like stars because the planet’s orbital period would be about one year, compared to an 83-day campaign. M stars will make great targets for K2’s planet search because their habitable zones are much closer to the star (so habitable planets will have shorter orbital periods), and the stars are smaller, so small planets can still produce a detectable dimming effect when transiting, even with K2’s reduced sensitivity. Despite these limitations, K2’s large field of view, its still impressive photometric precision, and its ability to continuously monitor targets with high cadence make it superior to ground-based programs for detecting transiting planets.
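
As a rough, back-of-the-envelope illustration (my own numbers, not from the K2 paper): the transit depth scales as the square of the planet-to-star radius ratio, and the habitable-zone orbital period shrinks steeply for low-luminosity stars, which is what brings M-dwarf habitable zones inside an 83-day campaign:

```python
# Rough illustration (not from the paper): transit depth and habitable-zone
# period for an Earth-sized planet around a Sun-like star vs a small M dwarf.
import math

R_EARTH_OVER_R_SUN = 0.00916  # Earth's radius in solar radii

def transit_depth_ppm(r_planet_earth, r_star_sun):
    """Fractional dimming (in parts per million) = (R_planet / R_star)^2."""
    return 1e6 * (r_planet_earth * R_EARTH_OVER_R_SUN / r_star_sun) ** 2

def hz_period_days(luminosity_sun, mass_sun):
    """Orbital period at the distance receiving Earth-like flux, via a ~ sqrt(L)
    and Kepler's third law P^2 = a^3 / M (solar units)."""
    a_au = math.sqrt(luminosity_sun)
    return 365.25 * math.sqrt(a_au ** 3 / mass_sun)

print(transit_depth_ppm(1.0, 1.0), hz_period_days(1.0, 1.0))   # ~84 ppm, ~365 days
print(transit_depth_ppm(1.0, 0.3), hz_period_days(0.01, 0.3))  # ~930 ppm, ~21 days
```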

The target stars of the original mission were generally far away, and thus fairly faint, but K2 will likely target much closer stars. The planets that will be found by K2 around these nearby stars will thus be much easier to characterize with detailed follow up observations using other telescopes. K2 will also be able to target stellar clusters, which have not yet been thoroughly studied for transiting planets. Young planetary systems could also be studied by K2, from which we could constrain our planet formation and migration theories.

In addition to looking for more transiting planets, K2 can study anything that varies in brightness. This includes the target stars themselves, as well as AGN, supernovae, and gravitational microlensing signals. Of course, the exact science that K2 will do depends on the targets proposed by the community!

While the K2 program does not have official funding support yet from NASA, campaign 0 has recently begun. You can follow the latest on the K2 mission here and here. And, of course, we’ll keep you updated with important developments on astrobites!

Figure 2. The locations in the sky of the fields of view along the ecliptic for the ten proposed K2 campaigns. These span a range of galactic coordinates. Note that field 9 appears out of place because the telescope will be pointed “forward” during this campaign (it will usually point backward relative to its orbital direction).

### Symmetrybreaking - Fermilab/SLAC

'Particle Fever' opens in the US

Particle Fever, a documentary that follows scientists involved in research at the Large Hadron Collider, opens this week in select theaters across the United States.

Wish you could have witnessed the euphoria and excitement rippling through the CERN Control Center when the Large Hadron Collider first turned on? Or been in the room when the discovery of the Higgs boson was announced?

### Quantum Diaries

CDMS result covers new ground in search for dark matter

The Cryogenic Dark Matter Search has set more stringent limits on light dark matter.

Scientists looking for dark matter face a serious challenge: No one knows what dark matter particles look like. So their search covers a wide range of possible traits—different masses, different probabilities of interacting with regular matter.

Today, scientists on the Cryogenic Dark Matter Search experiment, or CDMS, announced they have shifted the border of this search down to a dark-matter particle mass and rate of interaction that has never been probed.

“We’re pushing CDMS to as low mass as we can,” says Fermilab physicist Dan Bauer, the project manager for CDMS. “We’re proving the particle detector technology here.”

Their result, which does not claim any hints of dark matter particles, contradicts a result announced in January by another dark matter experiment, CoGeNT, which uses particle detectors made of germanium, the same material as used by CDMS.

To search for dark matter, CDMS scientists cool their detectors to very low temperatures in order to detect the very small energies deposited by the collisions of dark matter particles with the germanium. They operate their detectors half of a mile underground in a former iron ore mine in northern Minnesota. The mine provides shielding from cosmic rays that could clutter the detector as it waits for passing dark matter particles.

Today’s result carves out interesting new dark matter territory for masses below 6 billion electronvolts. The dark matter experiment Large Underground Xenon, or LUX, recently ruled out a wide range of masses and interaction rates above that with the announcement of its first result in October 2013.

Scientists have expressed increasing interest of late in the search for low-mass dark matter particles, with CDMS and three other experiments—DAMA, CoGeNT and CRESST—all finding their data compatible with the existence of dark matter particles between 5 billion and 20 billion electronvolts. But such light dark-matter particles are hard to pin down. The lower the mass of the dark-matter particles, the less energy they leave in detectors, and the more likely it is that background noise will drown out any signals.
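
To see why, here is a rough, illustrative estimate (not the CDMS analysis itself; the speeds and masses below are representative assumptions): in elastic scattering the maximum nuclear recoil energy is about 2μ²v²/m_N, where μ is the WIMP-nucleus reduced mass, and for a 6 GeV particle striking a germanium nucleus it comes out to only a few kilo-electronvolts.

```python
# Back-of-the-envelope maximum recoil energy for elastic WIMP scattering
# off a germanium nucleus. All inputs are representative assumptions,
# not parameters of the CDMS analysis.
M_GE_GEV = 72.6 * 0.9315     # average germanium nucleus mass, ~67.6 GeV
C_KM_S = 299_792.458         # speed of light in km/s

def max_recoil_kev(m_wimp_gev, m_nucleus_gev, v_km_s):
    """Maximum recoil energy (keV) for elastic WIMP-nucleus scattering."""
    mu = m_wimp_gev * m_nucleus_gev / (m_wimp_gev + m_nucleus_gev)  # reduced mass
    beta = v_km_s / C_KM_S
    return 2.0 * mu**2 * beta**2 / m_nucleus_gev * 1e6              # GeV -> keV

# A 6 GeV WIMP even near the galactic escape speed deposits only a few keV,
# while a 100 GeV WIMP at a typical halo speed deposits tens of keV.
print(max_recoil_kev(6.0, M_GE_GEV, 600.0))    # ~3.6 keV
print(max_recoil_kev(100.0, M_GE_GEV, 230.0))  # ~28 keV
```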

Even more confounding is the fact that scientists don’t know whether dark matter particles interact in the same way in detectors built with different materials. In addition to germanium, scientists use argon, xenon, silicon and other materials to search for dark matter in more than a dozen experiments around the world.

“It’s important to look in as many materials as possible to try to understand whether dark matter interacts in this more complicated way,” says Adam Anderson, a graduate student at MIT who worked on the latest CDMS analysis as part of his thesis. “Some materials might have very weak interactions. If you only picked one, you might miss it.”

Scientists around the world seem to be taking that advice, building different types of detectors and constantly improving their methods.

“Progress is extremely fast,” Anderson says. “The sensitivity of these experiments is increasing by an order of magnitude every few years.”

Kathryn Jepsen

### Tommaso Dorigo - Scientificblogging

The dyslectic guy with an erection problem...
Did you know about that dyslectic guy with an impotence problem who once came to Fermilab? He said he'd been advised to go there, as he wanted to get a hadron.

### Quantum Diaries

In case you haven't figured it out already from reading the US LHC blog or any of the others at Quantum Diaries, people who do research in particle physics feel passionate about their work. There is so much to be passionate about! There are challenging intellectual issues, tricky technical problems, and cutting-edge instrumentation to work with — all in pursuit of understanding the nature of the universe at its most fundamental level. Your work can lead to global attention and even to Nobel Prizes. It's a lot of effort put in over long days and nights, but there is also a lot of satisfaction to be gained from our accomplishments.

That being said, a fundamental truth about our field is that not everyone doing particle-physics research will be doing that for their entire career. There are fewer permanent jobs in the field than there are people who are qualified to hold them. It is certainly easy to do the math about university jobs in particular — each professor may supervise a large number of PhD students in his or her career, but only one could possibly inherit that job position in the end. Most of our researchers will end up working in other fields, quite likely in the for-profit sector, and as a field we do need to make sure that they are well-prepared for jobs in that part of the world.

I've always believed that we do a good job of this, but my belief was reinforced by a recent column by Tom Friedman in The New York Times. It was based around an interview with the Google staff member who oversees hiring for the company. The essay describes the attributes that Google looks for in new employees, and I couldn't help but think that people who work on large experimental particle-physics projects such as those at the LHC have all of those attributes. Google is of course looking for technical skills, and particle physicists have those skills along with great experience in digesting large amounts of computerized data. But Google is also looking for social and personality traits, and those same traits are important for success in particle physics.

(Side note: I don’t support all of what Friedman writes in his essay; he is somewhat dismissive of the utility of a college education, and as a university professor I think that we are doing better than he suggests. But I will focus on some of his other points here. I also recognize that it is perhaps too easy for me to write about careers outside the field when I personally hold a permanent job in particle physics, but believe me that it just as easily could have wound up differently for me.)

For example, just reading from the Friedman column, one thing Google looks for is what is referred to as "emergent leadership". This is not leadership in the form of holding a position with a particular title, but the ability to see when a group needs you to step forward and lead on something, and also when to step back and let someone else lead. While the big particle-physics collaborations appear to be massive organizations, much of the day-to-day work, such as the development of a physics measurement, is done in smaller groups that function very organically. When they function well, people do step up to take on the most critical tasks, especially when they see that they are particularly well positioned to do them. Everyone figures out how to interact in such a way that the job gets done. Another facet of this is ownership: everyone who is working together on a project feels personally responsible for it and will do what is right for the group, if not the entire experiment — even if it means putting aside your own ideas and efforts when someone else clearly has a better one.

And related to that in turn is what is referred to in the column as “intellectual humility.” We are all very aggressive in making our arguments based on the facts that we have in hand. We look at the data and we draw conclusions, and we develop and promote research techniques that appear to be effective. But when presented with new information that demonstrates that the previous arguments are invalid, we happily drop what we had been pursuing and move on to the next thing. That’s how all of science works, really; all of your theories are only as good as the evidence that supports them, and are worthless in the face of contradictory evidence. Google wants people who take this kind of approach to their work.

I don’t think you have to be Google to be looking for the same qualities in your co-workers. If you are an employer who wants to have staff members who are smart, technically skilled, passionate about what they do, able to incorporate disparate pieces of information and generate new ideas, ready to take charge when they need to, feel responsible for the entire enterprise, and able to say they are wrong when they are wrong — you should be hiring particle physicists.

### Matt Strassler - Of Particular Significance

A 100 TeV Proton-Proton Collider?

During the gap between the first run of the Large Hadron Collider [LHC], which ended in 2012 and included the discovery of the Higgs particle (and the exclusion of quite a few other things), and its second run, which starts a year from now, there’s been a lot of talk about the future direction for particle physics. By far the most prominent option, both in China and in Europe, involves the long-term possibility of a (roughly) 100 TeV proton-proton collider — that is, a particle accelerator like the LHC, but with 5 to 15 times more energy per collision.

Do we need such a machine?

The answer is “Yes, Definitely”. Definitely, if human beings are to continue to explore the inner world of the elementary laws of nature with the same level of commitment with which they explore the outer world of our neighboring planets, the nearby stars and their own planets, and distant galaxies far-flung across the universe. If we can send the Curiosity rover to roam around the surface of the Red Planet and beam back pictures and scientific information — if we can send telescopes like Kepler into space whose sole purpose is to look for signs of planets around distant stars — then surely we can build a machine on Earth whose sole purpose is to help us understand the fundamental principles and elementary objects that underlie the natural world. That’s why we built the LHC, and machines before it; and the justification for a 100 TeV machine remains the same.

Definitely, also, if the exploration of the laws of nature is to continue as a healthy research field. We have a large number of experts who know how to build a big particle accelerator. If we were to postpone building such a machine for a generation, we would suffer some of the same problems suffered by the U.S. space program. All sorts of crucial knowledge of the craft of rocket building was lost when the U.S. failed to follow up on its several trips to the Moon. If we have a hiatus of a generation between the current machine and the next, we will find it much more difficult and expensive to build the next one when we finally decide to do it. So it makes sense to maintain continuity, especially if it can be done at reasonable cost.

One thing that’s interesting to keep in mind is that a roughly 100 TeV machine is hardly a stretch for modern technology; it’s not going to be a machine with a significant risk of failure. The Superconducting SuperCollider (SSC), which was to be the U.S. flagship machine and was due to start running in the year 2000 (in which case it would definitely have discovered the Higgs particle many years ago — sadly, the U.S. congress canceled it, after it was well underway, in 1993), would have been a 40 TeV machine. The technological step from 40 TeV to 80 or 120 is not a big one. Moreover, the SSC would have been an easier machine to run than is the LHC, which has to strain with very high collision rates to make up for the fact that its energy per collision is a third of what the SSC would have been capable of. The main challenge for such an accelerator is that it has to be very large — which requires a very long tunnel (over 50 miles/80 km) and a very large number of powerful magnets.

It’s no wonder the Chinese are interested in potentially building this machine. With an economy growing rapidly enough to catch up with the other great nations of the world in the next decade or two, and with scientific prowess rapidly increasing (see here and here), some in China rightly see a 100 TeV proton-proton collider both as an opportunity to gain all sorts of technical and technological knowledge that they have previously lacked, and to establish themselves among the few nations that can be viewed as scientific superpowers. Yet it will not require them to go far out on a limb with technology that no one has ever attempted at all, and invent whole new methods that don’t currently exist. Moreover, some of the things that would be expensive or politically complex in the U.S. or Europe will be easier in China. They may be able to pay for and construct this machine themselves, with technical advice and personnel from other countries, but without being dependent on other nations’ political and financial challenges.

In fact, there’s another huge potential benefit along the way, even before the 100 TeV machine is built: a “Higgs factory”. One can potentially use this same tunnel to first build an accelerator that smashes electrons and positrons [i.e. anti-electrons] together, at an energy which isn’t that high, but is sufficient to make Higgs particles at a high rate — not as many Higgs particles as the LHC will produce, but in an environment where precise measurements are much easier to make. [Protons are messy, and all measurements in proton-proton collisions are very difficult due to huge collision rates and large backgrounds; electrons and positrons are simple objects and measurements tend to be much more straightforward.  This comes at a cost: it is harder to get collisions at the highest energies physicists would ideally want.]

The value of a Higgs factory is obvious: a no-brainer. The Higgs particle is our main way of gaining insight into the nature of the all-important Higgs field, and moreover the Higgs particle might also, through its possible rare decays, illuminate a currently veiled world of unknown particles and forces. It’s a research effort whose importance no one can deny, and it serves as a technical stepping stone to a 100 TeV collider, complete with the realistic possibility of Nobel Prize-worthy discoveries in the near term. For China, it’s perfect.

Of course, the Chinese aren’t the only ones interested.  My European colleagues, recognizing a good thing when they see it, and with the advantage that they built and ran the LHC, are also considering building such a machine. [Neither the U.S., which is expertly squandering its scientific leadership in many scientific fields (and pushing many of its best scientists toward the Chinese effort), nor Russia, which is busy starting a disastrous invasion of its neighbor, seem able to make any intelligent decisions at the moment, and surely aren't going to be the leaders in such an effort.] For the moment, the scientists involved are all working together.  Over recent years, any particle physicist worth his or her salt (including me) would spend some time at Europe’s CERN laboratory, which hosts the LHC. And now, many young U.S. experts in theoretical particle physics are planning to spend extended time at China’s “Center for the Future of High Energy Physics“. There was a time young Chinese geniuses like T.D. Lee, C.N. Yang and C.S. Wu did Nobel Prize-winning (or -deserving) work in the United States. Soon, perhaps, it will be the other way around.

But what, scientifically, is the justification for this machine?

Why build a 100 TeV collider?

It’s important to distinguish two types of scientific enterprises: exploratory and targeted. Exploratory refers to when you’re doing a search, in a plausible place, for anything unexpected — perhaps for something whose existence you might suspect, but perhaps more broadly. Targeted refers to doing a search or study where you know roughly, or even exactly, what you’re looking for.

Often a targeted enterprise is also exploratory; while looking for one thing, you can always stumble on something else. Many scientific discoveries, such as X-rays, have been made while doing or preparing experiments with a completely different purpose. On the other hand, an exploratory enterprise may not have any targets at all, or at best, only a very vague target. Sometimes we go searching just because we can. When Galileo pointed his first telescopes at the moon and the planets and the stars, he had no idea what he would find; he just knew he had a great opportunity to discover something.

The LHC was built as a clearly targeted machine: its main goal was to find the Higgs particle (or particles) if it (or they) existed, or whatever replaced them if they did not. Well, now we know that one Higgs particle exists, and it resembles the simplest possible type of Higgs particle, which is termed a “Standard Model Higgs”.   But much remains to learn.  Is this Higgs particle really Standard Model-like, not just at the 30% level but at the 3% level and better? Are there other Higgs particles?  Are there other as-yet unknown particles being produced at the LHC? Are there new forces beyond the ones we’re aware of? Other than the detailed study of the new Higgs particle, these questions are mostly exploratory. In short, though the LHC was built as a targeted machine with a near-guarantee of success, its mission has now shifted toward exploration of the unknown, with no guarantee of further discoveries. But it’s also important to understand that a lack of discoveries will be just as important to our understanding of nature as discoveries would be, for reasons I’ll return to in my next post.

Now what about the 100 TeV machine? Will it be a targeted experimental facility, or an exploratory one?

For the moment, the answer is: we don’t know. Currently, there is no clear target; more precisely, there are lots of possible targets, but none that we know could emerge to be a major, central one. But this machine won’t be built and completed for a couple of decades, and things could change dramatically by then. If the LHC discovers something not predicted by the Standard Model (the equations used to describe the known elementary particles and forces), then clarifying this new discovery will become a major target, and possibly the main target, of the 100 TeV machine.

This highlights one of the challenges with large experimental projects. One has to start thinking about them far in advance, long before it’s entirely clear what their precise use will be. When the SSC and the LHC were first proposed, they did have a proposed target — finding the Higgs particle or particles. But if the recently discovered Higgs particle’s mass had been, say, half of what it actually is, it would have been discovered some years before the SSC or LHC were completed… in which case, the target of the SSC and LHC would have significantly shifted. So we have to start considering, proposing, and perhaps even building the 100 TeV machine before it’s completely clear whether it will have a prominent and definite target, or whether it will be mainly exploratory. That ambiguity is something we just have to live with.

In contrast to the 100 TeV machine, which currently has to be viewed as exploratory, the Higgs factory that would precede it in the same tunnel is much more sharply targeted… targeted at detailed study of the Higgs particle. There are some other targeted and exploratory activities that it can be involved in, including more detailed investigation of the Z particle, W particle and top quark, but its main focus is the Higgs.

However, even if no prominent target for the 100 TeV collider shows up before it is built, its justification as an exploratory machine is clear. In quantum field theory, collisions at higher energy and momentum allow you to probe physics at shorter times and distances — for “particles” are really quanta, i.e., ripples in quantum fields, and a higher-energy quantum has a shorter wavelength and a faster frequency. And we’ve learned time and time again that one way (though not the only one) to discover new things about the world is to examine its behavior on shorter times and shorter distances than we’ve previously been capable of. This enterprise has been going on for generations; first microscopes discovered bacteria and other cells; then these were found, with more powerful experiments, to be made of molecules, in turn made from atoms; yet more powerful experiments showed first that the atoms contain electrons and atomic nuclei, then that the nuclei are made from protons and neutrons, and then that these in turn are made from quarks and gluons. All of this has been discovered by probing the world with ever more powerful particle collisions of one form or another. So building a higher energy accelerator is to take another step along a well-trodden path.
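
To put a rough number on that (a standard back-of-the-envelope estimate, not a design figure for any machine), the distance scale a collision can probe is roughly the reduced Compton wavelength of the colliding quanta:

$$\ell \;\sim\; \frac{\hbar c}{E} \;\approx\; \frac{197~\text{MeV}\cdot\text{fm}}{E},$$

so going from roughly 14 TeV per proton-proton collision to roughly 100 TeV shortens the accessible distances from about $$1.4\times10^{-20}$$ m to about $$2\times10^{-21}$$ m (in practice somewhat less, since only a fraction of each proton's energy goes into the elementary collision).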

However, it’s not the only path, nor has it ever been.

Is this the most promising path to explore?

The LHC is still in its adolescence, and we can’t predict its future discoveries. At this point the LHC experiments have collected a few percent of the data they’ll collect over the next decade, and they have done so with proton-proton collisions whose energy is only about 60% of what we expect to see in the next few years. Moreover, even the existing data set, collected in 2011-2012, hasn’t been fully analyzed; this data could still yield discoveries (but only if the experimenters choose to make the relevant measurements.) So we certainly can’t know yet whether the LHC will produce a new target for the 100 TeV machine. If it does, then it will be much clearer what to do next and how to use the 100 TeV machine. If it doesn’t… well, that’s something that deserves a bit more discussion.

Suppose that, after the LHC’s last run, nothing other than the Higgs particle’s been found, with properties that are consistent, to a few percent, with a Standard Model Higgs. While this sounds dull at first glance, it’s actually among the most radical possible outcomes of the LHC. That’s because of the “naturalness puzzle”, which I discussed in some detail in this article. Never before in nature, in any generic context, have we come across a low-mass spin-zero particle (i.e. something like the Higgs particle) without other particles associated with it.  In this sense, the Standard Model is an extraordinarily non-generic theory, at least from our current point of view and understanding.  It will be quite shocking if it completely describes all LHC data.

But maybe it does.  If it does, what does this potentially imply about nature?  And what would be the implications for our future explorations of nature at its most elementary level? I’ll address this issue in my next post.


### Tommaso Dorigo - Scientificblogging

The Plot Of The Week - Dark Matter Candidates In Super-CDMS
Two days ago the Super-CDMS dark-matter search released the results of the analysis of nine months of data taking. The experiment has excellent sensitivity to weakly interacting massive particles scattering off the germanium nuclei in the detector.

The detector is composed of fifteen cylindrical 0.6 kg crystals stacked in groups of three, equipped with ionization and phonon sensors capable of measuring the energy of the signals. From that, the recoil energy can be derived, along with a rough estimate of the WIMP candidate's mass. The towers are kept at temperatures close to absolute zero in the Soudan mine, where backgrounds from cosmic rays and other sources are very small.

### The n-Category Cafe

Should Mathematicians Cooperate with GCHQ?

I’ve just submitted a piece for the new Opinions section of the monthly LMS Newsletter: Should mathematicians cooperate with GCHQ? The LMS is the London Mathematical Society, which is the UK’s national mathematical society. My piece should appear in the April edition of the newsletter, and you can read it below.

Here’s the story. Since November, I’ve been corresponding with people at the LMS, trying to find out what connections there are between it and GCHQ. Getting the answer took nearly three months and a fair bit of pushing. In the process, I made some criticisms of the LMS’s total silence over the GCHQ/NSA scandal:

GCHQ is a major employer of mathematicians in the UK. The NSA is said to be the largest employer of mathematicians in the world. If there had been a major scandal at the heart of the largest publishing houses in the world, unfolding constantly over the last eight months, wouldn’t you expect it to feature prominently in every issue of the Society of Publishers’ newsletter?

To its credit, the LMS responded by inviting me to write an inaugural piece for a new Opinions section of the newsletter. Here it is.

Should mathematicians cooperate with GCHQ?

Tom Leinster

One of the UK’s largest employers of mathematicians has been embroiled in a major international scandal for the last nine months, stands accused of law-breaking on an industrial scale, and is now the object of widespread outrage. How has the mathematical community responded? Largely by ignoring it.

GCHQ and its partners have been systematically monitoring as much of our lives as they possibly can, including our emails, phone calls, text messages, bank transactions, web browsing, Skype calls, and physical location. The goal: “collect it all”. They tap internet trunk cables, bug charities and political leaders, disrupt lawful activist groups, and conduct economic espionage, all under the banner of national security.

Perhaps most pertinently to mathematicians, the NSA (GCHQ’s major partner and partial funder) has deliberately undermined internet encryption, inserting a secret back door into a standard elliptic curve algorithm. This can be exploited by anyone sufficiently skilled and malicious — not only the NSA/GCHQ. (See Thomas Hales’s piece in February’s Notices of the AMS.) We may never know what else mathematicians have been complicit in; GCHQ’s policy is not to comment on intelligence matters, which is to say, anything it does.

Indifference to mass surveillance rests partly on misconceptions such as “it’s only metadata”. This is certainly false; for instance, GCHQ has used webcams to collect images, many sexually intimate, of millions of ordinary citizens. It is also misguided, even according to the NSA’s former legal counsel: “metadata absolutely tells you everything about somebody’s life”.

Some claim to be unbothered by the recording of their daily activities, confident that no one will examine their records. They may be right. But even if you feel that way, do you want the secret services to possess such a powerful tool for chilling dissent, activism, and even journalism? Do you trust an organization operating in secret, and subject to only “light oversight” (a GCHQ lawyer’s words), never to abuse that power?

Mathematicians seldom have to face ethical questions. But now we must decide: cooperate with GCHQ or not? It has been suggested that mathematicians today are in the same position as nuclear physicists in the 1940s. However, the physicists knew they were building a bomb, whereas mathematicians working for GCHQ may have little idea how their work will be used. Colleagues who have helped GCHQ in the past, trusting that they were contributing to legitimate national security, may justifiably feel betrayed.

At a bare minimum, we as a community should talk about it. Sasha Beilinson has proposed that working for the NSA/GCHQ should be made “socially unacceptable”. Not everyone will agree, but it reminds us that we have both individual choice and collective power. Individuals can withdraw their labour from GCHQ. Heads of department can refuse staff leave to work for GCHQ. The LMS can refuse GCHQ’s money.

At a bare minimum, let us acknowledge that the choices are ours to make. We are human beings before we are mathematicians, and if we do not like what GCHQ is doing, we do not have to cooperate.

I had a 500-word limit, so I omitted a lot. Here are the facts on the LMS’s links with GCHQ, as stated to me by the LMS President Terry Lyons:

The Society has an indirect relationship with GCHQ via a funding agreement with the Heilbronn Institute, in which the Institute will give up to £20,000 per year to the Society. This is approximately 0.7% of our total income. This is a recently made agreement and the funding will contribute directly to the LMS-CMI Research Schools, providing valuable intensive training for early career mathematicians. GCHQ is not involved in the choice of topics covered by the Research Schools.

So, GCHQ’s financial support for the LMS is small enough that declining it would not make a major financial impact.

I hope the LMS will make a public statement clarifying its relationship with GCHQ. I see no argument against transparency.

Another significant factor (which Lyons alludes to above and is already a matter of public record) is that GCHQ is a funder of the Heilbronn Institute, which is a collaboration between GCHQ and the University of Bristol. I don’t know that the LMS is involved with Heilbronn beyond what’s mentioned above, but Heilbronn does seem to provide an important channel through which (some!) British mathematicians support the secret services.

Finally, I want to make clear that although I think there are some problems with the LMS as an institution, I don’t blame the people running it, many of whom are taking time out of extremely busy schedules for the most altruistic reasons. As I wrote to one of them:

I’m genuinely in awe of the amount that you […] give to the mathematical community, both in terms of your selflessness and your energy. I don’t know how you do it. Anything critical I have to say is said with that admiration as the backdrop, and I hope I’d never say anything of the form “do more!”, because to ask that would be ridiculous.

Rules for commenting here: I've now written several posts on this and related subjects (1, 2, 3, 4). Every time, I've deleted some off-topic comments — including some I've enjoyed and agreed with heartily. Please keep comments on-topic. In case there's any doubt, the topic is the relationship between mathematicians and the secret services. Comments that stray too far from this will be deleted.

## March 02, 2014

### Andrew Jaffe - Leaves on the Line

Nearly a decade ago, blogging was young, and its place in the academic world wasn’t clear. Back in 2005, I wrote about an anonymous article in the Chronicle of Higher Education, a so-called “advice” column admonishing academic job seekers to avoid blogging, mostly because it let the hiring committee find out things that had nothing whatever to do with their academic job, and reject them on those (inappropriate) grounds.

I thought things had changed. Many academics have blogs, and indeed many institutions encourage it (here at Imperial, there’s a College-wide list of blogs written by people at all levels, and I’ve helped teach a course on blogging for young academics). More generally, outreach has become an important component of academic life (that is, it’s at least necessary to pay it lip service when applying for funding or promotions) and blogging is usually seen as a useful way to reach a wide audience outside of one’s field.

So I was distressed to see the lament — from an academic blogger — “Want an academic job? Hold your tongue”. Things haven’t changed as much as I thought:

… [A senior academic said that] the blog, while it was to be commended for its forthright tone, was so informal and laced with profanity that the professor could not help but hold the blog against the potential faculty member…. It was the consensus that aspiring young scientists should steer clear of such activities.

Depending on the content of the blog in question, this seems somewhere between a disregard for academic freedom and a judgment of the candidate on completely irrelevant grounds. Of course, it is natural to want the personalities of our colleagues to mesh well with our own, and almost impossible to completely ignore supposedly extraneous information. But we are hiring for academic jobs, and what should matter are research and teaching ability.

Of course, I’ve been lucky: I already had a permanent job when I started blogging, and I work in the UK system which doesn’t have a tenure review process. And I admit this blog has steered clear of truly controversial topics (depending on what you think of Bayesian probability, at least).

### John Baez - Azimuth

Network Theory I

Here’s a video of a talk I gave last Tuesday—part of a series. You can see the slides here:

One reason I'm glad I gave this talk is that afterwards Jamie Vicary pointed out some very interesting consequences of the relations among signal-flow diagrams listed in my talk. It turns out they imply equations familiar from the theory of complementarity in categorical quantum mechanics!

This is the kind of mathematical surprise that makes life worthwhile for me. It seemed utterly shocking at first, but I think I've figured out why it happens. Now is not the time to explain… but I'll have to do it soon, both here and in the paper that Jason Eberle and I are writing about control theory.

• Brendan Fong, A compositional approach to control theory.

### Lubos Motl - string vacua and pheno

Gross vs Strassler: Gross is right
I was told about Matt Strassler's 50-minute talk at JoeFest (click for different formats of the video/audio/slides) and his verbal exchange with David Gross that begins around 35:00.

Matt's talk is pretty nice, touching on technical topics like the Myers effect, pomerons, etc., but also reviewing his work with Joe Polchinski and giving Joe some homework exercises all the time. Matt said various things about the effective field theory's and/or string theory's inability to solve the hierarchy problem even with the anthropic bias taken into account. He would distinguish the existence of hierarchies from the lightness of the Higgs in a way that I didn't quite find logical.

They were thought-provoking comments but I just disagree about the basic conclusions. He can't pinpoint any contradiction in these matters because the QFT framework doesn't tell us which QFT is more likely – it goes beyond the domain of questions that an effective QFT may answer. And even the rules to extract such a probabilistic distribution of the vacua from string theory are unknown. If there are no predictions about a particular question – even if it is a "pressing" question like that – there can't be contradictions.

But the main conflict arose due to Matt's vague yet unusual and combative enough comments about the value of the 100-TeV collider.

He would say it could be a bad idea to put all our eggs into the basket of this collider planned for the longer term. The reasons? Similar to the luminiferous aether. Michelson was trying to find the aether wind, which was misguided, Matt says, so there should have been other experiments.

Unfortunately, he didn't say what those were supposed to be.

David Gross' reaction made it very clear that they disagree not only about the 100-TeV collider but also about the right strategy and interpretation of hypotheses and their testing in the late 19th century, i.e. about the Michelson issue. Previously, Matt Strassler would say lots of weird things such as "the prediction of new physics at the LHC is the only prediction one can deduce from string theory".

This is clearly wrong. No one can deduce anything like that. Effective field theories with some extra assumptions about the distribution of parameters could perhaps lead you to guess that particles with masses comparable to the Higgs may exist. But these are not conclusions of QFT itself. And in string theory, such "predictions" are even more impossible because string theory has no adjustable continuous dimensionless parameters. That means that there are no "natural distributions" on the space of parameters that would follow from string theory, at least as long as we interpret string theory as the currently understood body of knowledge.

Even more qualitatively, there is clearly no derivation that would imply that "string theory is progressive" or "string theory is conservative" when it comes to the amount of new low-energy physics. The latter – conservative string theory – is totally compatible with everything we know. After all, your humble correspondent is not the only one who thinks that string theory is a mostly if not very conservative theory. The claim that it inevitably predicts new things – or that it predicts more new things than possible or real alternatives – is just wrong.

Just to be sure, you may remember that your humble correspondent considers an even larger hadron collider to be the single most meaningful way to progress in experimental particle physics. We march towards the unknown, which means that higher-energy experiments are needed. This relationship probably holds up to the Planck scale. The higher the energies we investigate experimentally, the deeper we penetrate into the realm of the unknown. David Gross and others clearly share the same viewpoint.

Is it possible that the Very Large Hadron Collider will find the Standard Model only and nothing else? Absolutely. That will be a disappointment but physicists will still learn something. But if you want to propose alternative experiments, you should know what they are. Some people are looking for the dark matter directly. There are experiments trying to detect axions and other things. Physics seems to have enough money for those – they are not as expensive as the large colliders. If you had another important idea what should be tested, you should say what it is, otherwise the claim that the colliders are overvalued contradicts the evidence you are able to offer. Matt isn't able to justify any true alternative.

But David Gross really disagreed with Matt's suggestion that the Michelson-Morley experiment wasn't naturally the best thing to do at that time. Well, it was very sensible. The understanding at the time of the origin of the electromagnetic fields within classical physics implied an aether wind, so they were logically and justifiably trying to find it. More generally, these interferometer-based experiments were a "proxy" for tests of many or all conceivable phenomena whose strength is proportional to $$v/c$$ or its powers, i.e. effects that become strong when speeds approach the speed of light. They chose an experimentally smart representative of the "tests of relativistic physics", as we could call them decades later.
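
For concreteness, here is the standard textbook estimate of the effect Michelson and Morley were chasing (the numbers are the usual illustrative ones for the 1887 apparatus, not anything quoted in the talk). Rotating the interferometer by 90 degrees should shift the fringe pattern by roughly

$$\Delta n \;\approx\; \frac{2L}{\lambda}\,\frac{v^2}{c^2} \;\approx\; \frac{2\times 11~\text{m}}{5.5\times10^{-7}~\text{m}}\times\left(10^{-4}\right)^2 \;\approx\; 0.4$$

fringes, taking $$v\approx 30$$ km/s for the Earth's orbital speed, i.e. $$v/c\approx 10^{-4}$$. A shift of nearly half a fringe was well within the instrument's resolution; the observed shift was far smaller.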

But in Michelson's time, there was no relativity yet; people just didn't know it. And Michelson's experiments happened to play just a minor role in Einstein's theoretical investigations. But as far as the people who relied on the experimental data, rather than on Einstein's ingenuity, are concerned, it was completely logical to study the aether wind. It was really a critical experiment that would have kickstarted relativity if Einstein had been paying more attention to the experiments, or if another physicist who was well aware of the experiments had had a little more of Einstein's ingenuity.

When relativity was discovered, it became clear that the aether wind doesn't exist but lots of other relativistic effects (corrections to Newton's physics suppressed by powers of $$v/c$$) do exist. Theorists like Einstein were crucial to making the progress in 1905 and beyond, but when it comes to the experiments, Michelson's experiments in the 1880s were really the optimum thing to do. They were even lucky, because they revealed a particular situation where the right (now: relativistic) physics not only deviates from Newton's physics but gives a very simple result (the speed of light is constant, regardless of the speeds of sources or observers).

So just like David Gross and others, I think that Matt is just wrong even in the case of Michelson.

Nati Seiberg would also criticize Matt Strassler. Seiberg says that Matt's doubts are not only in contradiction with the philosophy of high-energy physics; they are in contradiction with reductionism itself, which has worked for centuries. The further you go to shorter distances, the more detailed an understanding of the physics you may acquire. Why should that break down now? Matt says that quantum gravity is an example where shorter distances fail to allow you to study more detailed physics. Seiberg correctly says that this is right but that quantum gravity is very far away. Strassler says that this comment by Seiberg might be right or wrong.

Well, I do think that quantum gravity is "probably" at the usual 4D Planck scale or nearby, roughly at the conventional GUT scale or higher. It is also hypothetically possible that quantum gravity kicks in at nearby energies, near 1 TeV. But this scenario predicts *exactly* what Matt Strassler attributes to string theory in general. This scenario implies significant deviations from the Standard Model at the LHC energies. It is really excluded experimentally. String theory is not excluded but quantum gravity at 1 TeV is excluded. I don't know why Matt is getting these basic things backwards.

Nima Arkani-Hamed addressed a question touched by Matt, "why a light Higgs and not technicolor". This question has an answer. As Nima decided with Savas after some scrutiny, technicolor models with light fermions are inevitably tuned to 1-in-a-million or worse because the light fermion masses require us to introduce some running that easily and generically creates new exponential hierarchies between the electroweak scale and the QCD scale, and related things. So SUSY with some tuning for a scalar is still less fine-tuned than technicolor. And if the electroweak symmetry is broken by a strong force, there are no baryons – just neutrinos.

Nima also defends the 100-TeV collider. No one is really suggesting to put all eggs into one basket; people are thinking about and building many, many experiments. Going to high energies is still a very important thing for many reasons.

Matt replies to Nati that "this time is different" because for the first time, the Standard Model can be the "whole story" (he overlooked gravity, but the Standard Model is indeed a potentially complete theory of non-gravitational physics), so there is no reason to think that the new discoveries will come soon. Despite my expectations that new physics does exist below the scale of a few TeV, I agree with that. Strassler also says – and it is pretty much equivalent to the previous sentence – that naturalness and reductionism are not related in any direct way. I agree with that, too: there may be big hierarchies and deserts but reductionism still holds.

David Gross says that naturalness isn't a prediction; it is a strategy. I completely agree with him (and I have written down this point many times), so let me please try to present this claim in my words, assuming that David Gross would fully subscribe to them, too. Naturalness boils down to a probability distribution on the space of parameters, which we can use to decide that certain values or patterns are "puzzling" because they are "unnatural" – which means "unlikely" according to this probability distribution. And that's why we focus on them; they are likely to hide something we don't know yet but should know. In the end, the complete theory makes all these effects in Nature natural (in a more general sense), but because they look unnatural according to an incomplete theory, these effects in Nature are likely to hold a key to a new insight that changes the game, an insight by which the new theory significantly differs from the current incomplete theory. Naturalness cannot be tested quantitatively, however.
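
One textbook way to make that probability-distribution talk semi-quantitative (an illustration, not something said in the talk) is the usual estimate of the quantum corrections to the Higgs mass: a heavy scale $$\Lambda$$ that couples to the Higgs with strength $$g$$ feeds into its squared mass through loops,

$$\delta m_h^2 \;\sim\; \frac{g^2}{16\pi^2}\,\Lambda^2,$$

so keeping $$m_h$$ near 125 GeV with $$\Lambda$$ far above the TeV scale requires a cancellation of roughly one part in $$\delta m_h^2/m_h^2$$, which for $$\Lambda$$ near a GUT scale means a cancellation by tens of orders of magnitude. It is that cancellation, judged "unlikely" under a reasonable prior on the parameters, that the naturalness strategy flags as a likely hiding place for new physics.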

Gross said that our inability to calculate something is a problem. I completely agree with that, and I add that this problem is worse in the case of parameters that seem parametrically smaller than the expectations, because this gap suggests that there is some "big piece of meat" we are missing that changes the story substantially, not just some generic obscure calculation that spits out numbers of order one. That's where naturalness directs us, I add.

Matt is often drowning in a sea of vagueness – this is something we know from discussions with him on the blogosphere, too. He tries to say something extremely unusual while claiming, at the same time, that he isn't saying anything nontrivial at all. You just can't have it both ways, Matt. In this case, he is saying that we're not spending our time wisely – that it's being focused too narrowly. Except that he never says in which direction one might, or should, broaden the interest or the work.

Someone says he finds it frustrating that the reach for gluinos will only be doubled from 1 TeV to 2 TeV in 2015, not too big a difference. A reason to like a higher-energy machine.

Steve Giddings also points out a bug in Matt's logic concerning reductionism. Even if reductionism (meaning the need to study higher energies) were ending at X TeV, we would clearly need to go slightly above this level to find out that the reductionism fails. Finally, Matt proposes a loophole. Maybe there are extremely light and extremely weakly coupled new effects somewhere, so going to higher energies doesn't help us. Great. So what should we measure instead of the larger collider data?

David Gross says that dark matter is an example of that and Matt says that this makes his (Matt's case) stronger because according to many dark-matter models, one can't discover the new physics by going to higher energies. Well, right, it's plausible. But the difference is that there are "very many" such possible directions. Going to high energy is to increase the value of a quantity that is universal for all of physics – energy (or the inverse distance). Going to study very weakly coupled things means to go in the direction of lowering every conceivable coupling constant anywhere and there are just too many. We may try. We should try those cases that are justified by some arguments. But it is simply not true that any single march towards higher sensitivity in some particular coupling constant of some particular interaction seems to be as important as our ability to go to higher energies. There is only one energy and it's the king; there are way too many coupling constants and each of them seems less fundamental and less universal than energy. So I don't really agree with Matt on this change of the bias, either – unless he tells us what is the particular coupling constant or experiment where it makes sense to go to "much better than considered" sensitivities.

Maybe Matt would propose to build a 1 GeV collider with the luminosity increased 1 billion times? Perhaps it could make sense for some potential possibilities. But he should at least propose such a thing explicitly instead of saying that others are narrow-minded just because they are doing everything that people have conceived so far.

At the very end, Joe Polchinski calmed all the bitterness and said that Matt was one of those young people who come to Joe's office and pretty much solve a problem in a confining gauge theory, getting it 100% right on the field-theory side and 80% right on the string-theory side, so Joe added the remaining 20%. Joe improved the flattering joke by saying that this paper was never submitted for publication because it didn't meet Matt's standards. LOL. Matt says that's not really true. Joe also says that he didn't really deserve his own PhD, but that with Matt's help, 15 years later, he had finally solved his thesis problem. ;-)

### Sean Carroll - Preposterous Universe

Decennial

Almost forgot again — the leap-year thing always gets me. But I’ve now officially been blogging for ten years. Over 2,000 posts, generating over 57,000 comments. I don’t have accurate stats because I’ve moved around a bit, but on the order of ten million visits. Thanks for coming!

Nostalgia buffs are free to check out the archives (by category or month) via buttons on the sidebar, or see the greatest hits page. Here are some of my personal favorites from each of the past ten years:

## March 01, 2014

### Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

Einstein’s unfinished symphony in the media

Our recent discovery of an unpublished model of the cosmos by Albert Einstein (see last post or here for a preprint of our paper) is receiving a lot of media attention, which is very humbling. First off the mark was Davide Castelvecchi with a very nice article in Nature. Davide's article was quickly reproduced in various outlets, from Scientific American here to the Huffington Post. Trawling over the internet, I see newspaper and magazine articles describing our discovery in a dozen languages. It's nice to see historical material receiving this sort of attention; I guess everyone loves an Einstein story.

I'm also intrigued that it was the traditional media that picked up the story – with the exception of Peter Woit, no-one in the blogosphere seemed to notice our preprint or even a blogpost I wrote describing our paper. Perhaps we bloggers need the imprimatur of respected print journals more than we care to admit!

I notice that one slightly misleading point in the electronic version of the Nature article is getting repeated everywhere. It's probably not quite correct to frame Einstein's attempt at a steady-state model of the cosmos in terms of a resistance to 'big bang' theories; there is no reference to the problem of origins in Einstein's manuscript. Indeed, one of the most interesting aspects of the manuscript is that it appears to have been written in early 1931, at a time when the first tentative astronomical evidence for an expanding universe was emerging but the issue of an explosive beginning for the cosmos had yet to come into focus (e.g. the great debate between Eddington and Lemaitre later in 1931). It's interesting that the initial mention in Nature of resistance to 'big bang' theories is repeated in almost all other outlets; one can't help wondering how many science journalists read our abstract. An honorable exception here is John Farrell at Forbes Magazine. John certainly noticed the discrepancy and no wonder – John has written an excellent book on Lemaitre.

All in all, it’s been a lot of fun so far. I’m getting quite a few emails from distinguished colleagues pointing out that Einstein’s model is trivial because it didn’t work, which is of course true. However, our view is that what Einstein is trying to do is very interesting from a philosophical point of view  – and what is even more interesting is that he apparently abandoned the project when he realised that a consistent steady-state model would require an amendment to the field equations. In short, it seems the Great Master conducted an internal debate between steady-state and evolving models of the cosmos decades before the rest of the community…

Update

There is a very nice video describing our discovery here.

### Clifford V. Johnson - Asymptotia

LA Phil Rock Star!
When calling to mind the Los Angeles Philharmonic, everyone's (and all the posters') focus is on Gustavo Dudamel, (or, the Dude, as I call him), all unruly hair and visible enthusiasm and so forth, and that's great. He's an excellent conductor. However, one of the unsung (as far as I know*) visibly spectacular performers of the LA Philharmonic is the excellent principal viola player whose name I do not know [update: see below*] who puts on the most remarkable physical performance every time I go (and presumably those other times too). Actually, the violist who sits next to her is also remarkable, since she manages without being distracted by her neighbour to maintain a very upright and solid, firmly planted, legs wide stance, in part providing a canvas upon which the viola player I first mentioned can splash bright splashes of movement all over the place! She rocks, sways, jerks, and contorts (sometimes even during quiet slow bits)- doing the craziest things with her legs, head, and bow arm, and so much of the time looks like she is about to spectacularly fall off her chair and wipe out at least half the viola section! This is why her colleague right next to her is also remarkable, as she acts as this wonderful un-distractable "straight man" to the physical pyrotechnics helping make them all the more remarkable by contrast. Last night I tried to capture some of the energy of the hyper-energetic viola player in a quick sketch (during [...] Click to continue reading this post

### Clifford V. Johnson - Asymptotia

Take Part in the Festival!

### Jester - Resonaances

Weekend Plot: wimp race
This weekend's plot encompasses almost the entire field of direct detection of WIMP dark matter:

It shows the existing and projected limits on the scattering cross section of dark matter on nucleons. LUX -- a 370 kg xenon detector -- currently holds the leading position for dark matter masses above 6 GeV and promises to improve the limits by another factor of a few next year. The Xenon collaboration on the other side of the Atlantic is already preparing a nuclear response in the form of a 3 ton detector, to which LUX will retaliate with a 5 ton Led Zeppelin, or maybe LUX-Zeplin. Meanwhile, the SuperCDMS experiment will secure a monopoly in the low-mass region. But the arms race cannot go on forever, as direct detection experiments will inevitably hit the neutrino wall. That is to say, they will reach sufficient sensitivity to observe nuclear recoils due to elastic scattering of solar and atmospheric neutrinos. That will constitute an irreducible background to dark matter searches (unless directional detection techniques are developed). And so it'll all come to a bitter end: sometime in the next decade WIMP detection experiments will be downgraded to neutrino observatories.

The plot is borrowed from this talk, which itself borrowed it from somewhere.

### Tommaso Dorigo - Scientificblogging

Death Of The Dijet Anomaly
Do you remember the CDF Dijet bump at 145 GeV? In 2010, CDF published a paper showing that the same data sample of W + jet events in which they had previously isolated the "single-lepton" WW+WZ signal also presented an intriguing excess of events in the dijet mass distribution, in a region where the background – dominated by QCD radiation produced in association with a W – fell smoothly. That signal generated some controversy within the collaboration, and a lot of interest outside of it. It could be interpreted as a signal of a new technicolor resonance!

## February 28, 2014

### ZapperZ - Physics and Physicists

"Dropleton" Makes News
I've given up on trying to figure out why certain things from science make the news, while others don't. My feeble guess would be that a good, catchy name or phrase can often captivate a news reporter or agency more than actual importance can.

Not that I'm implying the "dropleton" is not important. After all, it made the cover of this week's Nature! Still, what makes the Los Angeles Times take note of it? I think it is a combination of the name and the sleek image on Nature's cover. Still, I don't think people who read the LA Times article on this thing would know what it is and why it is important enough that it made the cover. Besides, I don't think they would care.

It isn't often that a "new quasiparticle" makes the news. I probably won't see another one again in my lifetime, I would think.

Zz.

### astrobites - astro-ph reader's digest

How Green Can a Planet in a Resonant Orbit Be?

Title: Photosynthetic Potential of Planets in 3:2 Spin Orbit Resonances
Authors: S.P. Brown, A.J. Mead, D.H. Forgan, J.A. Raven, C.S. Cockell
First Author’s Institution: UK Centre for Astrobiology, School of Physics and Astronomy, University of Edinburgh
Paper Status: Accepted for publication in the International Journal of Astrobiology

When looking for exoplanetary homes for life, planets around M-type stars show a lot of potential. Although these dwarf stars are smaller and less luminous than our sun, their lifespans are much longer. A dimmer star means that the potential habitable zone (HZ) is much closer in to the star. Whereas Earth orbits 1 AU (roughly 93 million miles) from the sun, the HZ around an M-dwarf is anywhere from one tenth to even a few hundredths of that distance. Planets this close to their stars have shorter orbital periods, making them easier to find by our current planet-hunting methods.
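
The scaling behind that statement is simple (a rough sketch assuming the habitable zone is set only by the stellar flux, ignoring atmospheres and albedo): the distance at which a planet receives Earth-like flux goes as the square root of the stellar luminosity,

$$d_{\rm HZ} \;\approx\; 1~{\rm AU}\times\sqrt{L_*/L_\odot},$$

so a dwarf with $$L_* \sim 10^{-2}\,L_\odot$$ has its habitable zone near 0.1 AU, and one with $$L_* \sim 10^{-3}\,L_\odot$$ near 0.03 AU, consistent with the range quoted above.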

Planets so close to their stars are more likely to fall into orbital resonances with their star. In its most extreme case, this means tidal locking, where the planet makes one rotation per orbit, as the moon does around the Earth, resulting in permanent day and night hemispheres on the planet. This leads to extreme heat and cold, although there has been research indicating that heat transfer through the atmosphere and a greenhouse-friendly atmosphere could make tidally locked planets less hostile to the possibility of life.

A less extreme example is a 3:2 resonance, where the planet makes three rotations for every two orbits. (Click through for a helpful .gif.) Mercury is in a 3:2 spin-orbit resonance with the sun. Each Mercury year consists of one and a half Mercury days. Mercury, of course, lacks an atmosphere and has a surface temperature ranging from roughly 100 to 700 K, so it's not exactly hospitable. A 3:2 planet around an M star, though, would not be so intensely hot.

Temperature isn’t the only criterion for habitability. Starlight brings more than heat – at least on Earth, the sun’s energy also provides the basis for the food chain, via photosynthesis. Although photosynthesis isn’t necessarily a requirement for life on a planet, it is essential to the biosphere of Earth. This paper investigates the possibility for photosynthetic life to exist on a planet in a 3:2 orbital resonance, where long days are paired with long nights that could starve photosynthetic life out of existence.

First, the authors calculate the flux received over the surface of the planet. On Earth, flux varies with latitude but not with longitude – the equator gets the same amount of energy from the sun in Quito as in Jakarta. But if a 3:2 planet’s orbit is eccentric – Mercury’s is 0.206, and high eccentricity may be what keeps a close-orbit planet from becoming tidally locked, settling it into a 3:2 resonance instead – things start to get weird. If the eccentricity is above 0.191, the planet’s orbital angular velocity near perihelion overtakes its spin, and the star shows apparent retrograde motion across the sky. At some longitudes this even means that the sun will rise a bit from where it has set before reversing to set once again (see figure 1).
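That 0.191 threshold is just the eccentricity at which the orbital angular velocity at perihelion catches up with the 3:2 spin rate. A quick numerical check (a sketch of my own, not the authors' code, using only the standard Keplerian expression for the perihelion angular velocity) reproduces it:

```python
# Sketch: find the eccentricity above which a 3:2 planet sees its star move
# retrograde. Retrograde motion sets in when the orbital angular velocity at
# perihelion, omega_peri = n * sqrt(1 + e) / (1 - e)**1.5 (n = mean motion),
# exceeds the spin rate omega_spin = 1.5 * n.
def perihelion_rate_over_n(e):
    return (1 + e) ** 0.5 / (1 - e) ** 1.5

lo, hi = 0.0, 0.9
for _ in range(60):                 # simple bisection for the critical eccentricity
    mid = 0.5 * (lo + hi)
    if perihelion_rate_over_n(mid) < 1.5:
        lo = mid
    else:
        hi = mid

print(f"retrograde solar motion above e ~ {0.5 * (lo + hi):.3f}")   # ~0.191
```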

Figure 1: The position of the star in the sky over 2 orbits, at longitudes along the equator. On the left, with eccentricity of e=0, the star moves straight across the sky, as on Earth. On the right, for e=0.3, the star exhibits retrograde motion. For longitude of 90 degrees (yellow), the dips above and below the line where the angle of the star is 90 or -90 degrees indicate the sun rising from where it has set, or setting back from where it has just risen.

Figure 2: Integrated energy received over 2 orbits as a function of longitude (x-axis) and latitude (y-axis). Red means more flux, blue and black mean less. From left to right, top to bottom, at eccentricities of e=0, e=0.2, e=0.4, e=0.5, e=0.7, e=0.8.

This retrograde motion means that different longitudes will receive different amounts of stellar energy over the course of the two-orbit cycle. Figure 2 shows the stellar energy received over two orbits as a function of longitude and latitude. In the upper left-hand image, with zero eccentricity, all longitudes are the same, with the most flux in the two bright bands. But once eccentricity is up to .2 (upper right), some longitudes receive much more energy than others, as seen in the bright and dark areas.
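To see where those bright and dark longitudes come from, here is a rough numerical sketch (my own toy version, with arbitrary units and resolution, not the authors' code): solve Kepler's equation over two orbits, track the sub-stellar longitude for a 3:2 spin, and add up the flux each equatorial longitude receives.

```python
import numpy as np

def solve_kepler(M, e, n_iter=50):
    """Solve Kepler's equation E - e*sin(E) = M by Newton iteration (vectorized)."""
    E = M.copy()
    for _ in range(n_iter):
        E -= (E - e * np.sin(E) - M) / (1 - e * np.cos(E))
    return E

def equatorial_flux_vs_longitude(e, n_lon=180, n_steps=20000):
    """Stellar flux integrated over two orbits as a function of equatorial longitude,
    for a planet in a 3:2 spin-orbit resonance (semi-major axis and luminosity set to 1)."""
    t = np.linspace(0.0, 2 * 2 * np.pi, n_steps)   # two orbits, with mean motion n = 1
    E = solve_kepler(t, e)                         # mean anomaly M = n*t = t
    nu = 2 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2),
                        np.sqrt(1 - e) * np.cos(E / 2))   # true anomaly
    r = 1 - e * np.cos(E)                          # star-planet distance
    lam_ss = nu - 1.5 * t                          # sub-stellar longitude for a 3:2 spin

    lons = np.linspace(-np.pi, np.pi, n_lon, endpoint=False)
    cos_zenith = np.cos(lons[:, None] - lam_ss[None, :])
    flux = np.clip(cos_zenith, 0.0, None) / r[None, :] ** 2  # zero on the night side
    return lons, flux.sum(axis=1) * (t[1] - t[0])  # crude time integration

lons, F = equatorial_flux_vs_longitude(e=0.2)
print("brightest/darkest longitude flux ratio:", F.max() / F.min())
```

At e=0 every longitude comes out the same, and increasing the eccentricity should reproduce the qualitative bright-and-dark pattern of figure 2.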

The longitudinal variations in the light cycle make habitability – on local and planetary levels – a complicated question. The average stellar flux would be enough for photosynthetic life, but organisms can’t bask in a two-orbit average. They would be subject to dramatic local variations, long nights, and constrained habitats, as well as planet-wide consequences of the 3:2 orbit, including the risk of planetary freeze-out.

The authors use the research already conducted into potential conditions on fully tidally locked planets (in a 1:1 resonance) to extrapolate to the similar but less extreme 3:2 case. A tidally locked planet runs the risk of a runaway freeze-out of its dark-side atmosphere, although atmospheric heat transport to the dark side could stave this off. Since the darkness on a 3:2 planet is not permanent but merely long, the authors argue that while freeze-out is a possibility, it is an unlikely one.

Even if long nights don’t cause a planet’s atmosphere to freeze, prolonged darkness is still a major challenge for organisms that get their energy from light. In the cross-disciplinary spirit of astrobiology, the authors collect extensive research from biology that provides examples of photosynthetic life surviving extended darknesses through two methods: dormancy and mixotrophy. Many phytoplankton, dinoflagellates, and diatoms have been shown to survive without light for months, years, and even decades. And some green algae are mixotrophic, subsisting on photosynthesis when light is abundant, and switching to digestion by way of phagotrophy when it is not. These examples show the wide-ranging flexibility of photosynthetic life. The precedent for the adaptations needed to flourish in the weird light conditions of a 3:2 planet is certainly there.

What isn’t certain, of course, is everything else. The authors make an especially interesting point about one of their necessary simplifications, this one in terms of orbital dynamics. The paper’s calculations use fixed Keplerian orbital parameters, but from Mercury (our own local 3:2 planet) we know that the perihelion actually precesses, shifting gradually over time. As the perihelion precesses, the zones of high and low flux (figure 2) drift over the planet’s surface. A colony of photosynthesizers living in a bright zone could find itself losing the extra daylight it depends on. The authors posit, though, that since precession takes place on the same general timescales as evolution, it could drive adaptation and speciation rather than extinction.

While a planet in a 3:2 orbital resonance could be home to photosynthetic life, its biosphere would show very different patterns than ours. Rather than latitudinal bands of climate, a 3:2 planet would have biomes constrained to certain longitudes. As we get closer and closer to the ability to scan exoplanets’ spectra and atmospheres for signs of life, the familiar biosignature of photosynthesis – oxygen waste – remains a promising clue.

### Symmetrybreaking - Fermilab/SLAC

CDMS result covers new ground in search for dark matter

The Cryogenic Dark Matter Search has set more stringent limits on light dark matter.

Scientists looking for dark matter face a serious challenge: No one knows what dark matter particles look like. So their search covers a wide range of possible traits—different masses, different probabilities of interacting with regular matter.

Today, scientists on the Cryogenic Dark Matter Search experiment, or CDMS, announced they have shifted the border of this search down to a dark-matter particle mass and rate of interaction that has never been probed.

### Matt Strassler - Of Particular Significance

Brane Waves

The first day of the conference celebrating theoretical physicist Joe Polchinski (see also yesterday’s post) emphasized the broad impact of his research career.  Thursday’s talks, some on quantum gravity and others on quantum field theory, were given by

• Juan Maldacena, on his latest thinking on the relation between gravity, geometry and the entropy of quantum entanglement;
• Igor Klebanov, on some fascinating work in which new relations have been found between some simple quantum field theories and a very poorly understood and exotic theory, known as Vasiliev theory (a theory that has more fields than a field theory but fewer than a string theory);
• Raphael Bousso, on his recent attempts to prove the so-called “covariant entropy bound”, another relation between entropy and geometry, that Bousso conjectured over a decade ago;
• Henrietta Elvang, on the resolution of a puzzle involving the relation between a supersymmetric field theory and a gravitational description of that same theory;
• Nima Arkani-Hamed, about his work on the amplituhedron, a set of geometric objects that allow for the computation of particle scattering in various quantum field theories (and who related how one of Polchinski’s papers on quantum field theory was crucial in convincing him to stay in the field of high-energy physics);
• Yours truly, in which I quickly reviewed my papers with Polchinski relating string theory and quantum field theory, emphasizing what an amazing experience it is to work with him; then I spoke briefly about my most recent Large Hadron Collider [LHC] research (#1,#2), and concluded with some provocative remarks about what it would mean if the LHC, having found the last missing particle of the Standard Model (i.e. the Higgs particle), finds nothing more.

The lectures have been recorded, so you will soon be able to find them at the KITP site and listen to any that interest you.

There were also two panel discussions. One was about the tremendous impact of Polchinski’s 1995 work on D-branes on quantum field theory (including particle physics, nuclear physics and condensed matter physics), on quantum gravity (especially through black hole physics), on several branches of mathematics, and on string theory. It’s worth noting that every talk listed above was directly or indirectly affected by D-branes, a trend which will continue in most of Friday’s talks.  There was also a rather hilarious panel involving his former graduate students, who spoke about what it was like to have Polchinski as an advisor. (Sorry, but the very funny stories told at the evening banquet were not recorded. [And don't ask me about them, because I'm not telling.])

Let me relate one thing that Eric Gimon, one of Polchinski’s former students, had to say during the student panel. Gimon, a former collaborator of mine, left academia some time ago and now works in the private sector. When it was his turn to speak, he asked, rhetorically, “So, how does calculating partition functions in K3 orientifolds” (which is part of what Gimon did as a graduate student) “prepare you for the real world?” How indeed, you may wonder. His answer: “A sense of pertinence.” In other words, an ability to recognize which aspects of a puzzle or problem are nothing but distracting details, and which ones really matter and deserve your attention. It struck me as an elegant expression of what it means to be a physicist.
