# Particle Physics Planet

## March 08, 2014

### Jester - Resonaances

Weekend Plot: all of dark matter
To put my recent posts into a perspective, here's a graph summarizing all of dark matter particles discovered so far via direct or indirect detection:

The graph shows the number of years the signal has survived vs. the inferred dark matter mass. The particle names follow the usual Particle Data Group conventions. The label's size is related to the statistical significance of the signal, and the color to the Bayesian likelihood that the signal originates from dark matter. The masses of the discovered particles span 11 orders of magnitude, although the largest concentration is near 100 GeV (this is known as the WIMP miracle). If I missed any particle for which compelling evidence exists, let me know, and I will add it to the graph.

Here are the original references for the Bulbulon, Boehmot, Colaron, Daemon, Cresston, Hooperon, Wenigon, Pamelon, and the mother of Bert and Ernie.

## March 07, 2014

### Emily Lakdawalla - The Planetary Society Blog

Why Cosmos should matter, especially to Hollywood
For a town dependent on Stars, there are far too few people here who look up at the sky. But come this Sunday, March 9, the epic series of science, space and humanity will return: Cosmos: A Spacetime Odyssey. Why does it matter for Hollywood, specifically? I'll tell you why it will. And then why it should.

### Jester - Resonaances

Signal of WIMP dark matter
You may have heard about the excess of gamma-ray emission from the center of the Milky Way measured by the Fermi telescope. This excess can be interpreted as a signal of a 30-40 GeV dark matter particle - the so-called hooperon - annihilating into a pair of b-quarks. The inferred annihilation cross section is of order 1 picobarn, perfectly fitting the thermal dark matter paradigm. The story is not exactly new; the anomaly and its dark matter interpretation were first claimed 4 years ago. Since then there has been a steady trickle of papers by different groups arguing that the signal is robust and proposing dark matter or astrophysical explanations. Last week the story hit several news outlets, see for example here for a nice write-up. What has changed that the anomaly was upgraded from a tantalizing hint to compelling evidence of WIMP dark matter?

First, here is a bit more detailed description of the signal. The Fermi satellite measures gamma rays from the whole sky with good angular and energy resolution. Many boring astrophysical processes produce gamma rays, for example cosmic rays scattering on the interstellar medium, or violent events happening around black holes and pulsars. However, known point sources, galactic and extragalactic diffuse emission, and the emission from the Fermi Bubbles do not seem to be enough to explain what's going on in the center of our galaxy. A better fit is obtained if one adds a new component with a spatial distribution sharply peaked around the galactic center and an energy spectrum with a broad peak near 2 GeV, see the plot. How much better a fit? This paper quotes a 40 sigma preference for this new component in the inner galaxy region. That's a hell of a significance, even after translating the astrophysical sigmas to the ones used in conventional statistics ;)
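
To put that statistics jargon into numbers: for a nested fit like adding one extra emission component, a quoted n-sigma preference is usually the square root of twice the improvement in log-likelihood (the standard Wilks-type approximation with one extra parameter). A minimal R sketch of the conversion, with a made-up log-likelihood improvement purely for illustration (not a number from the paper):

delta_lnL <- 800                    # hypothetical improvement in log-likelihood
n_sigma   <- sqrt(2 * delta_lnL)    # ~40 "sigma" under the usual Wilks approximation
pnorm(n_sigma, lower.tail = FALSE)  # corresponding one-sided p-value (underflows to 0 here)
n_sigma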

Now, WIMP dark matter can easily reproduce the new component. Cold dark matter is expected to be sharply peaked near the galactic center, with a 1/r or similar profile. Furthermore, when dark matter annihilates into charged particles, the latter can radiate part of their energy, producing photons via final state radiation, Compton scattering, and bremsstrahlung. This leads to emission of gamma rays with an energy spectrum depending on the dark matter mass and the identity of the particles it annihilates into. Annihilation into leptons (electrons, muons, taus) would produce a sharper peak than what is observed. As the plot shows, annihilation into quarks, whether bottom or lighter ones, fits the signal much better. All in all, the excess can be explained by a 15-40 GeV dark matter particle annihilating into quarks with a cross section in the 0.1-1 pb range.
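
For readers wondering why "of order 1 picobarn" is said to fit the thermal paradigm so well: multiplying a ~1 pb cross section by a relative velocity of order c lands on the canonical thermal-relic annihilation rate of about 3e-26 cm^3/s. A back-of-the-envelope check in R (my own illustration, not from the post):

pb_to_cm2 <- 1e-36        # 1 picobarn = 1e-36 cm^2
c_cm_s    <- 3e10         # speed of light in cm/s
sigma_pb  <- 1            # assumed annihilation cross section, in pb
sigma_v   <- sigma_pb * pb_to_cm2 * c_cm_s
sigma_v                   # ~3e-26 cm^3/s, the canonical thermal relic value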

This was known before, more or less. As far as I understand, the recent paper by Daylan et al. adds the following. They repeat the analysis using a subset of the Fermi data where the photon direction is more reliably reconstructed. This allows them to better study the morphology of the signal. They show that the excess is steeply falling (approximately as 1/r^1.4) all the way to about 2 kiloparsecs from the galactic center. Moreover, they demonstrate that the excess is to a good degree spherically symmetric. This can be regarded as an argument against conventional astrophysical explanations. For example, a population of several thousand millisecond pulsars could produce an energy spectrum similar to the excess, but would not be expected to be distributed this way.

Ah, and what does the Fermi collaboration have to say about it? As far as I know, there is no official statement about the excess. In this talk one finds the quote "[In the inner galaxy], diffuse emission and point sources account for most of the emission observed in the region".  So we seem to have two slightly discrepant stories here: 40 sigma vs. nothing to see. If the truth were in the middle that would  be great ;)

In any case, continuous emission from the galactic center will never be regarded as convincing evidence of dark matter. To really get excited we would need to find a matching signal in a less messy environment. One possibility is dwarf galaxies - small galaxies consisting mostly of dark matter that orbit the Milky Way. The Fermi collaboration recently reported limits on the dark matter annihilation cross section based on observations of 25 dwarf galaxies, see the plot. Intriguingly, there is a small excess (global p-value 0.08) that may be consistent with the dark matter interpretation of the signal from the galactic centre... More data should clarify the situation, but for that we probably need to wait a few more years.
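
As a rough guide to how mild that dwarf-galaxy excess is, a global one-sided p-value of 0.08 corresponds to only about 1.4 standard deviations; a quick R check (my aside, not part of the original post):

p_global <- 0.08
qnorm(1 - p_global)   # ~1.4 sigma equivalent, one-sided Gaussian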

### Sean Carroll - Preposterous Universe

Guest Post: Katherine Freese on Dark Matter Developments

The hunt for dark matter has been heating up once again, driven (as usual) by tantalizing experimental hints. This time the hints are coming mainly from outer space rather than underground laboratories, which makes them harder to check independently, but there’s a chance something real is going on. We need more data to be sure, as scientists have been saying since the time Eratosthenes measured the circumference of the Earth.

As I mentioned briefly last week, Katherine Freese of the University of Michigan has a new book coming out, The Cosmic Cocktail, that deals precisely with the mysteries of dark matter. Katie was also recently at the UCLA Dark Matter Meeting, and has agreed to share some of her impressions with us. (She also insisted on using the photo on the right, as a way of reminding us that this is supposed to be fun.)

Dark Matter Everywhere (at the biannual UCLA Dark Matter Meeting)

The UCLA Dark Matter Meeting is my favorite meeting, period. It takes place every other year, usually at the Marriott Marina del Rey right near Venice Beach, but this year on UCLA campus. Last week almost two hundred people congregated, both theorists and experimentalists, to discuss our latest attempts to solve the dark matter problem. Most of the mass in galaxies, including our Milky Way, is not comprised of ordinary atomic material, but instead of as yet unidentified dark matter. The goal of dark matter hunters is to resolve this puzzle. Experimentalist Dave Cline of the UCLA Physics Department runs the dark matter meeting, with talks often running from dawn till midnight. Every session goes way over, but somehow the disorganization leads everybody to have lots of discussion, interaction between theorists and experimentalists, and even more cocktails. It is, quite simply, the best meeting. I am usually on the organizing committee, and cannot resist sending in lots of names of people who will give great talks and add to the fun.

Last week at the meeting we were treated to multiple hints of potential dark matter signals. To me the most interesting were the talks by Dan Hooper and Tim Linden on the observations of excess high-energy photons — gamma-rays — coming from the Central Milky Way, possibly produced by annihilating WIMP dark matter particles. (See this arxiv paper.) Weakly Interacting Massive Particles (WIMPs) are to my mind the best dark matter candidates. Since they are their own antiparticles, they annihilate among themselves whenever they encounter one another. The Center of the Milky Way has a large concentration of dark matter, so that a lot of this annihilation could be going on. The end products of the annihilation would include exactly the gamma-rays found by Hooper and his collaborators. They searched the data from the FERMI satellite, the premier gamma-ray mission (funded by NASA and DoE as well as various European agencies), for hints of excess gamma-rays. They found a clear excess extending to about 10 angular degrees from the Galactic Center. This excess could be caused by WIMPs weighing about 30 GeV, or 30 proton masses. Their paper called these results “a compelling case for annihilating dark matter.” After the talk, Dave Cline decided to put out a press release from the meeting, and asked the opinion of us organizers. Most significantly, Elliott Bloom, a leader of the FERMI satellite that obtained the data, had no objection, though the FERMI team itself has as yet issued no statement.

Many putative dark matter signals have come and gone, and we will have to see if this one holds up. Two years ago the 130 GeV line was all the rage — gamma-rays of 130 GeV energy that were tentatively observed in the FERMI data towards the Galactic Center. (Slides from Andrea Albert’s talk.) This line, originally proposed by Stockholm’s Lars Bergstrom, would have been the expectation if two WIMPs annihilated directly to photons. People puzzled over some anomalies of the data, but with improved statistics there isn’t much evidence left for the line. The question is, will the 30 GeV WIMP suffer the same fate? As further data come in from the FERMI satellite we will find out.

What about direct detection of WIMPs? Laboratory experiments deep underground, in abandoned mines or underneath mountains, have been searching for direct signals of astrophysical WIMPs striking nuclei in the detectors. At the meeting the SuperCDMS experiment hammered on light WIMP dark matter with negative results. The possibility of light dark matter, which was so popular recently, remains puzzling. 10 GeV dark matter seemed to be detected in many underground laboratory experiments: DAMA, CoGeNT, CRESST, and in April 2013 even CDMS in their silicon detectors. Yet other experiments, XENON and LUX, saw no events, in drastic tension with the positive signals. (I told Rick Gaitskell, a leader of the LUX experiment, that I was very unhappy with him for these results, but as he pointed out, we can't argue with nature.) Last week at the conference, SuperCDMS, the most recent incarnation of the CDMS experiment, looked to much lower energies and again saw nothing. (Slides from Lauren Hsu's talk.) The question remains: are we comparing apples and oranges? These detectors are made of a wide variety of types of nuclei and we don't know how to relate the results. Wick Haxton's talk surprised me with a discussion of nuclear physics uncertainties I hadn't been aware of, which in principle could reconcile all the disagreements between experiments, even DAMA and LUX. Most people think that the experimental claims of 10 GeV dark matter are wrong, but I am taking a wait-and-see attitude.
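
One reason the "apples and oranges" comparison is so delicate: the textbook (and itself assumption-laden) way to put different targets on a common footing is to assume spin-independent scattering that adds coherently over the nucleus, sigma_nucleus ~ sigma_nucleon * A^2 * (mu_nucleus/mu_nucleon)^2, and then fold in each experiment's threshold and the unknown local WIMP velocity distribution. A toy R sketch of just the coherent-enhancement piece, under those standard assumptions (the nuclear-physics uncertainties Haxton discussed are exactly what this naive scaling ignores):

m_chi <- 10                                   # assumed WIMP mass in GeV
m_p   <- 0.938                                # nucleon mass in GeV
A     <- c(Na = 23, Si = 28, Ge = 73, Xe = 131)
m_N   <- A * m_p                              # crude nuclear masses in GeV
mu_N  <- m_chi * m_N / (m_chi + m_N)          # WIMP-nucleus reduced masses
mu_n  <- m_chi * m_p / (m_chi + m_p)          # WIMP-nucleon reduced mass
round(A^2 * (mu_N / mu_n)^2)                  # sigma_nucleus / sigma_nucleon per target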

We also heard about hints of detection of a completely different dark matter candidate: sterile neutrinos. (Slides from George Fuller's talk.) In addition to the three known neutrinos of the Standard Model of Particle Physics, there could be another one that doesn't interact with the Standard Model. Yet its decay could lead to X-ray lines. Two separate groups found indications of lines in data from the Chandra and XMM-Newton space satellites that would be consistent with a 7 keV neutrino (7 millionths of a proton mass). Could it be that there is more than one type of dark matter particle? Sure, why not?

On the last evening of the meeting, a number of us went to the Baja Cantina, our favorite spot for margaritas. Rick Gaitskell was smart: he talked us into the $60.00 pitchers, high enough quality that the 6AM alarm clocks the next day (that got many of us out of bed and headed to flights leaving from LAX) didn't kill us completely. We have such a fun community of dark matter enthusiasts. May we find the stuff soon!

### Emily Lakdawalla - The Planetary Society Blog

[Updated] To Europa!...Slowly. First Impressions of NASA's New Budget Request
Europa may get a mission...eventually. We give our first take on the 2015 NASA Budget request. How does Planetary Exploration fare? Which projects were cancelled? Will NASA capture an asteroid? And most importantly, what can you do about it?

### ZapperZ - Physics and Physicists

Physics Talk With No Powerpoint Slides?
Oh, say it isn't so! In an effort to get a better interaction between speaker and audience, organizers at a biweekly forum on the LHC at Fermilab banned the use of any Powerpoint presentation by the speaker.

"Without slides, the participants go further off-script, with more interaction and curiosity," says Andrew Askew, an assistant professor of physics at Florida State University and a co-organizer of the forum. "We wanted to draw out the importance of the audience." In one recent meeting, physics professor John Paul Chou of Rutgers University (pictured above) presented to a full room holding a single page of handwritten notes and a marker. The talk became more dialogue than monologue as members of the audience, freed from their usual need to follow a series of information-stuffed slides flying by at top speed, managed to interrupt with questions and comments.

It is definitely a development and a change that I find interesting and support... to some extent. You see, something like this will be amazingly fun and useful IF the speaker is engaging and actually pays attention to the audience. I'm sure you've been in seminars (or even a class) where the speaker simply rambled on and on looking at the screen, without even looking behind him/her to see if the audience was even there! So how well something like this goes depends very much on the speaker. Still, not having the powerpoint slides will force these speakers to be more creative and inevitably, will create a less formal atmosphere during such a presentation. And from the report, having more of a dialog than a monolog is exactly what the organizers were trying to accomplish.

It is interesting to note that while these physicists are going back to the "primitive" form of communication, others in the education field are trying various technologies and techniques to get away from the primitive form of teaching. It is now almost common that college lecturers use Powerpoint in their lectures, and other forms of teaching techniques and technologies are being used in the classrooms. Yet, at the top, we go back to chalkboard/whiteboard to communicate.

Zz.

### Symmetrybreaking - Fermilab/SLAC

Start spreading the SNEWS
A worldwide network keeps astronomers and physicists ready for the next nearby supernova.

When it comes to studying supernovae, if you don't SNEWS, you lose. SNEWS, the Supernova Early Warning System, is a worldwide network designed to do just what the name implies: let astronomers and physicists know when a nearby supernova appears.
This can be a tricky business, since supernovae appear in our galaxy roughly once every 30 years, and the window for studying them can vary—anywhere from a few weeks down to a few hours.

### Emily Lakdawalla - The Planetary Society Blog

That time I took a selfie with Neil Tyson and the President of the United States
Last week, my fellow Board Member Neil deGrasse Tyson and I were invited to be presenters at the first edition of the White House Film Festival. Neil asked the President if we could take a selfie with him. In those few moments, the President, Neil, and I spoke about science and space exploration.

### Matt Strassler - Of Particular Significance

What if the Large Hadron Collider Finds Nothing Else?
In my last post, I expressed the view that a particle accelerator with proton-proton collisions of (roughly) 100 TeV of energy, significantly more powerful than the currently operational Large Hadron Collider [LHC] that helped scientists discover the Higgs particle, is an obvious and important next step in our process of learning about the elementary workings of nature. And I described how we don't yet know whether it will be an exploratory machine or a machine with a clear scientific target; it will depend on what the LHC does or does not discover over the coming few years.

What will it mean, for the 100 TeV collider project and more generally, if the LHC, having made possible the discovery of the Higgs particle, provides us with no more clues? Specifically, over the next few years, hundreds of tests of the Standard Model (the equations that govern the known particles and forces) will be carried out in measurements made by the ATLAS, CMS and LHCb experiments at the LHC. Suppose that, as it has so far, the Standard Model passes every test that the experiments carry out? In particular, suppose the Higgs particle discovered in 2012 appears, after a few more years of intensive study, to be, as far as the LHC can reveal, a Standard Model Higgs — the simplest possible type of Higgs particle?

Before we go any further, let's keep in mind that we already know that the Standard Model isn't all there is to nature. The Standard Model does not provide a consistent theory of gravity, nor does it explain neutrino masses, dark matter or "dark energy" (also known as the cosmological constant). Moreover, many of its features are just things we have to accept without explanation, such as the strengths of the forces, the existence of "three generations" (i.e., that there are two heavier cousins of the electron, two for the up quark and two for the down quark), the values of the masses of the various particles, etc. However, even though the Standard Model has its limitations, it is possible that everything that can actually be measured at the LHC — which cannot measure neutrino masses or directly observe dark matter or dark energy — will be well-described by the Standard Model. What if this is the case?

Michelson and Morley, and What They Discovered

In science, giving strong evidence that something isn't there can be as important as discovering something that is there — and it's often harder to do, because you have to thoroughly exclude all possibilities. [It's very hard to show that your lost keys are nowhere in the house --- you have to convince yourself that you looked everywhere.] A famous example is the case of Albert Michelson, in his two experiments (one in 1881, a second with Edward Morley in 1887) trying to detect the "ether wind".
Light had been shown to be a wave in the 1800s; and like all waves known at the time, it was assumed to be a wave in something material, just as sound waves are waves in air, and ocean waves are waves in water. This material was termed the "luminiferous ether". As we can detect our motion through air or through water in various ways, it seemed that it should be possible to detect our motion through the ether, specifically by looking for the possibility that light traveling in different directions travels at slightly different speeds. This is what Michelson and Morley were trying to do: detect the movement of the Earth through the luminiferous ether.

Both of Michelson's measurements failed to detect any ether wind, and did so expertly and convincingly. And for the convincing method that he invented — an experimental device called an interferometer, which had many other uses too — Michelson won the Nobel Prize in 1907. Meanwhile the failure to detect the ether drove both FitzGerald and Lorentz to consider radical new ideas about how matter might be deformed as it moves through the ether. Although these ideas weren't right, they were important steps that Einstein was able to re-purpose, even more radically, in his 1905 equations of special relativity.

In Michelson's case, the failure to discover the ether was itself a discovery, recognized only in retrospect: a discovery that the ether did not exist. (Or, if you'd like to say that it does exist, which some people do, then what was discovered is that the ether is utterly unlike any normal material substance in which waves are observed; no matter how fast or in what direction you are moving relative to me, both of us are at rest relative to the ether.) So one must not be too quick to assume that a lack of discovery is actually a step backwards; it may actually be a huge step forward.

Epicycles or a Revolution?

There were various attempts to make sense of Michelson and Morley's experiment. Some interpretations involved tweaks of the notion of the ether. Tweaks of this type, in which some original idea (here, the ether) is retained, but adjusted somehow to explain the data, are often referred to as "epicycles" by scientists. (This is analogous to the way an epicycle was used by Ptolemy to explain the complex motions of the planets in the sky, in order to retain an earth-centered universe; the sun-centered solar system requires no such epicycles.) A tweak of this sort could have been the right direction to explain Michelson and Morley's data, but as it turned out, it was not. Instead, the non-detection of the ether wind required something more dramatic — for it turned out that waves of light, though at first glance very similar to other types of waves, were in fact extraordinarily different. There simply was no ether wind for Michelson and Morley to detect.

If the LHC discovers nothing beyond the Standard Model, we will face what I see as a similar mystery. As I explained here, the Standard Model, with no other particles added to it, is a consistent but extraordinarily "unnatural" (i.e. extremely non-generic) example of a quantum field theory. This is a big deal.
Just as nineteenth-century physicists deeply understood both the theory of waves and many specific examples of waves in nature and had excellent reasons to expect a detectable ether, twenty-first century physicists understand quantum field theory and naturalness both from the theoretical point of view and from many examples in nature, and have very good reasons to expect particle physics to be described by a natural theory. (Our examples come both from condensed matter physics [e.g. metals, magnets, fluids, etc.] and from particle physics [e.g. the physics of hadrons].) Extremely unnatural systems — that is, physical systems described by quantum field theories that are highly non-generic — simply have not previously turned up in nature… which is just as we would expect from our theoretical understanding.

[Experts: As I emphasized in my Santa Barbara talk last week, appealing to anthropic arguments about the hierarchy between gravity and the other forces does not allow you to escape from the naturalness problem.]

So what might it mean if an unnatural quantum field theory describes all of the measurements at the LHC? It may mean that our understanding of particle physics requires an epicyclic change — a tweak. The implications of a tweak would potentially be minor. A tweak might only require us to keep doing what we're doing, exploring in the same direction but a little further, working a little harder — i.e. to keep colliding protons together, but go up in collision energy a bit more, from the LHC to the 100 TeV collider. For instance, perhaps the Standard Model is supplemented by additional particles that, rather than having masses that put them within reach of the LHC, as would inevitably be the case in a natural extension of the Standard Model (here's an example), are just a little bit heavier than expected. In this case the world would be somewhat unnatural, but not too much, perhaps through some relatively minor accident of nature; and a 100 TeV collider would have enough energy per collision to discover and reveal the nature of these particles.

Or perhaps a tweak is entirely the wrong idea, and instead our understanding is fundamentally amiss. Perhaps another Einstein will be needed to radically reshape the way we think about what we know. A dramatic rethink is both more exciting and more disturbing. It was an intellectual challenge for 19th century physicists to imagine, from the result of the Michelson-Morley experiment, that key clues to its explanation would be found in seeking violations of Newton's equations for how energy and momentum depend on velocity. (The first experiments on this issue were carried out in 1901, but definitive experiments took another 15 years.) It was an even greater challenge to envision that the already-known unexplained shift in the orbit of Mercury would also be related to the Michelson-Morley (non)-discovery, as Einstein, in trying to adjust Newton's gravity to make it consistent with the theory of special relativity, showed in 1913.

My point is that the experiments that were needed to properly interpret Michelson-Morley's result

• did not involve trying to detect motion through the ether,
• did not involve building even more powerful and accurate interferometers,
• and were not immediately obvious to the practitioners in 1888.

This should give us pause. We might, if we continue as we are, be heading in the wrong direction.
Difficult as it is to do, we have to take seriously the possibility that if (and remember this is still a very big "if") the LHC finds only what is predicted by the Standard Model, the reason may involve a significant reorganization of our knowledge, perhaps even as great as relativity's re-making of our concepts of space and time. Were that the case, it is possible that higher-energy colliders would tell us nothing, and give us no clues at all. An exploratory 100 TeV collider is not guaranteed to reveal secrets of nature, any more than a better version of Michelson-Morley's interferometer would have been guaranteed to do so. It may be that a completely different direction of exploration, including directions that currently would seem silly or pointless, will be necessary.

This is not to say that a 100 TeV collider isn't needed! It might be that all we need is a tweak of our current understanding, and then such a machine is exactly what we need, and will be the only way to resolve the current mysteries. Or it might be that the 100 TeV machine is just what we need to learn something revolutionary. But we also need to be looking for other lines of investigation, perhaps ones that today would sound unrelated to particle physics, or even unrelated to any known fundamental question about nature.

Let me provide one example from recent history — one which did not lead to a discovery, but still illustrates that this is not all about 19th century history.

An Example

One of the great contributions to science of Nima Arkani-Hamed, Savas Dimopoulos and Gia Dvali was to observe (in a 1998 paper I'll refer to as ADD, after the authors' initials) that no one had ever excluded the possibility that we, and all the particles from which we're made, can move around freely in three spatial dimensions, but are stuck (as it were) as though to the corner edge of a thin rod — a rod as much as one millimeter wide, into which only gravitational fields (but not, for example, electric fields or magnetic fields) may penetrate. Moreover, they emphasized that the presence of these extra dimensions might explain why gravity is so much weaker than the other known forces.

Fig. 1: ADD's paper pointed out that no experiment as of 1998 could yet rule out the possibility that our familiar three-dimensional world is a corner of a five-dimensional world, where the two extra dimensions are finite but perhaps as large as a millimeter.

Given the incredible number of experiments over the past two centuries that have probed distances vastly smaller than a millimeter, the claim that there could exist millimeter-sized unknown dimensions was amazing, and came as a tremendous shock — certainly to me. At first, I simply didn't believe that the ADD paper could be right. But it was.

One of the most important immediate effects of the ADD paper was to generate a strong motivation for a new class of experiments that could be done, rather inexpensively, on the top of a table. If the world were as they imagined it might be, then Newton's (and Einstein's) law for gravity, which states that the force between two stationary objects depends on the distance r between them as 1/r², would increase faster than this at distances shorter than the width of the rod in Figure 1. This is illustrated in Figure 2.

Fig. 2: If the world were as sketched in Figure 1, then Newton/Einstein's law of gravity would be violated at distances shorter than the width of the rod in Figure 1.
The blue line shows Newton/Einstein's prediction; the red line shows what a universe like that in Figure 1 would predict instead. Experiments done in the last few years agree with the blue curve down to a small fraction of a millimeter.

These experiments are not easy — gravity is very, very weak compared to electrical forces, and lots of electrical effects can show up at very short distances and have to be cleverly avoided. But some of the best experimentalists in the world figured out how to do it (see here and here). After the experiments were done, Newton/Einstein's law was verified down to a few hundredths of a millimeter. If we live on the corner of a rod, as in Figure 1, it's much, much smaller than a millimeter in width.

But it could have been true. And if it had, it might not have been discovered by a huge particle accelerator. It might have been discovered in these small inexpensive experiments that could have been performed years earlier. The experiments weren't carried out earlier mainly because no one had pointed out quite how important they could be.

Ok Fine; What Other Experiments Should We Do?

So what are the non-obvious experiments we should be doing now or in the near future? Well, if I had a really good suggestion for a new class of experiments, I would tell you — or rather, I would write about it in a scientific paper. (Actually, I do know of an important class of measurements, and I have written a scientific paper about them; but these are measurements to be done at the LHC, and don't involve an entirely new experiment.) Although I'm thinking about these things, I do not yet have any good ideas. Until I do, or someone else does, this is all just talk — and talk does not impress physicists.

Indeed, you might object that my remarks in this post have been almost without content, and possibly without merit. I agree with that objection. Still, I have some reasons for making these points. In part, I want to highlight, for a wide audience, the possible historic importance of what might now be happening in particle physics. And I especially want to draw the attention of young people. There have been experts in my field who have written that non-discoveries at the LHC constitute a "nightmare scenario" for particle physics… that there might be nothing for particle physicists to do for a long time. But I want to point out that on the contrary, not only may it not be a nightmare, it might actually represent an extraordinary opportunity. Not discovering the ether opened people's minds, and eventually opened the door for Einstein to walk through. And if the LHC shows us that particle physics is not described by a natural quantum field theory, it may, similarly, open the door for a young person to show us that our understanding of quantum field theory and naturalness, while as intelligent and sensible and precise as the 19th century understanding of waves, does not apply unaltered to particle physics, and must be significantly revised.

Of course the LHC is still a young machine, and it may still permit additional major discoveries, rendering everything I've said here moot. But young people entering the field, or soon to enter it, should not assume that the experts necessarily understand where the field's future lies.
Like FitzGerald and Lorentz, even the most brilliant and creative among us might be suffering from our own hard-won and well-established assumptions, and we might soon need the vision of a brilliant young genius — perhaps a theorist with a clever set of equations, or perhaps an experimentalist with a clever new question and a clever measurement to answer it — to set us straight, and put us onto the right path.

Filed under: Higgs, History of Science, LHC Background Info, Other Collider News, Particle Physics, Quantum Field Theory, The Scientific Process Tagged: Einstein, energy, ExtraDimensions, gravity, Higgs, LHC, particle physics, relativity

### Peter Coles - In the Dark

From Darkness to Green
On Wednesday this week I spent a very enjoyable few hours in London attending the Inaugural Lecture of Professor Alan Heavens at ~~South Kensington Technical College~~ Imperial College, London. It was a very good lecture indeed, not only for its scientific content but also for the plentiful touches of droll humour in which Alan specialises. It was also followed by a drinks reception and buffet. The talk was entitled Cosmology in the Dark so naturally I had to mention it on this blog!

At the end of the lecture, the vote of thanks was delivered in typically effervescent style by the ebullient Prof. Malcolm Longair who actually supervised Alan's undergraduate project at the Cavendish laboratory way back in 1980, if I recall the date correctly. In his speech, Malcolm referred to the following quote from History of the Theories of the Aether and Electricity (Whittaker, 1951) which he was kind enough to send me when I asked by email:

The century which elapsed between the death of Newton and the scientific activity of Green was the darkest in the history of (Cambridge) University. It is true that (Henry) Cavendish and (Thomas) Young were educated at Cambridge; but they, after taking their undergraduate courses, removed to London. In the entire period the only natural philosopher of distinction was (John) Michell; and for some reason which at this distance of time it is difficult to understand fully, Michell's researches seem to have attracted little or no attention among his collegiate contemporaries and successors, who silently acquiesced when his discoveries were attributed to others, and allowed his name to perish entirely from the Cambridge tradition.

I wasn't aware of this analysis previously, but it re-iterates something I have posted about before. It stresses the enormous historical importance of British mathematician and physicist George Green, who lived from 1793 until 1841, and who left a substantial legacy for modern theoretical physicists, in Green's theorems and Green's functions; he is also credited as being the first person to use the word "potential" in electrostatics.

Green was the son of a Nottingham miller who, amazingly, taught himself mathematics and did most of his best work, especially his remarkable Essay on the Application of mathematical Analysis to the theories of Electricity and Magnetism (1828), before starting his studies as an undergraduate at the University of Cambridge, which he did at the age of 30, after his father died; he leased out the mill he consequently inherited to pay for his studies. Extremely unusually for English mathematicians of his time, Green taught himself from books that were published in France.
This gave him a huge advantage over his national contemporaries in that he learned the form of differential calculus that originated with Leibniz, which was far more elegant than that devised by Isaac Newton (which was called the method of fluxions). Whittaker remarks upon this:

Green undoubtedly received his own early inspiration from . . . (the great French analysts), chiefly from Poisson; but in clearness of physical insight and conciseness of exposition he far excelled his masters; and the slight volume of his collected papers has to this day a charm which is wanting in their voluminous writings.

Great scientist though he was, Newton's influence on the development of physics in Britain was not entirely positive, as the above quote makes clear. Newton was held in such awe, especially in Cambridge, that his inferior mathematical approach was deemed to be the "right" way to do calculus and generations of scholars were forced to use it. This held back British science until the use of fluxions was phased out. Green himself was forced to learn fluxions when he went as an undergraduate to Cambridge despite having already learned the better method.

Unfortunately, Green's great pre-Cambridge work on mathematical physics didn't reach wide circulation in the United Kingdom until after his death. William Thomson, later Lord Kelvin, found a copy of Green's Essay in 1845 and promoted it widely as a work of fundamental importance. This contributed to the eventual emergence of British theoretical physics from the shadow cast by Isaac Newton, which reached one of its heights just a few years later with the publication of a fully unified theory of electricity and magnetism by James Clerk Maxwell.

But as to the possible reason for the lack of recognition for John Michell, who was clearly an important figure in his own right (he was the person who first developed the concept of a black hole, for example), you'll have to read Malcolm Longair's forthcoming book on the History of the Cavendish Laboratory!

### Clifford V. Johnson - Asymptotia

Showcase and Awards Today!
Just a reminder: The USC Science Film Competition Showcase and Awards are tonight (March 7th) at 6:00pm. I've been tallying up all the judges' input and have the results in special envelopes to give out tonight. Very exciting. Come along (event information here), and enjoy celebrating all the students' hard work. There will be twelve films on display!

-cvj

### Emily Lakdawalla - The Planetary Society Blog

PlanetVac at the IEEE Aerospace Conference
PlanetVac project leader Kris Zacny of Honeybee Robotics reports on presenting results of the Planetary Society project PlanetVac that created a prototype planetary dirt sampling system and tested it under Martian pressures.

### Christian P. Robert - xi'an's og

Advances in scalable Bayesian computation [day #4]

Final day of our workshop Advances in Scalable Bayesian Computation already, since tomorrow morning is an open research time ½ day! Another "perfect day in paradise", with the Banff Centre campus covered by a fine snow blanket, still falling…, and making work in an office of BIRS a dream-like moment. Still looking for a daily theme, parallelisation could be the right candidate, even though other talks this week went into parallelisation issues, incl. Steve's talk yesterday. Indeed, Anthony Lee gave a talk this morning on interactive sequential Monte Carlo, where he motivated the setting by a formal parallel structure.
Then, Darren Wilkinson surveyed the parallelisation issues in Monte Carlo, MCMC, SMC and ABC settings, before arguing in favour of a functional language called Scala. (Neat entries to those topics can be found on Darren's blog.) And in the afternoon session, Sylvia Frühwirth-Schnatter exposed her approach to the (embarrassingly) parallel problem, in the spirit of Steve's, David Dunson's and Scott's (a paper posted on the day I arrived in Chamonix and hence I missed!). There was plenty to learn from that talk (do not miss the Yin-Yang moment at 25 min!), but it also helped me to break a difficulty I had with the consensus Bayes representation for two weeks (more on that later!). And, while Marc Suchard mostly talked about flu and trees in a very pleasant and broad talk, he also had a slide on parallelisation to fit the theme!

While unrelated to parallelism, Nicolas Chopin's talk was on sequential quasi-Monte Carlo algorithms: while I had heard previous versions of this talk in Chamonix and BigMC, I found it full of exciting stuff. And it clearly got the room truly puzzled by this possibility, in a positive way! Similarly, Alex Lenkoski spoke about extreme rain events in Norway with no trace of parallelism, but the general idea behind the examples was to question the notion of the calibrated Bayesian (with possible connections with the cut models).

This has been a wonderful week and I am sure the participants got as much as I did from the talks and the informal exchanges. Thanks to BIRS for the sponsorship and the superb organisation of the week (and to the Banff Centre for providing such a paradisiacal environment). I feel very privileged to have benefited from this support, and I dearly hope to be back in Banff within a few years.

Filed under: Books, Mountains, pictures, R, Statistics, University life Tagged: Art Owen, Banff International Research Station for Mathematical Innovation, Bayesian nonparametrics, Canada, Canadian Rockies, extremes, flu, influenza, low discrepancy sequences, MCMC algorithms, Norway, parallel processing, parallelisation, philogenic trees, quasi-random sequences, vaccine, viruses, Yin-Yang algorithm

### astrobites - astro-ph reader's digest

First look at NASA's FY2015 Budget Request
The Obama Administration just released its budget request for fiscal year (FY) 2015. While not much has changed from previous years regarding NASA, a few "minor" tweaks will elate some communities and devastate others. In particular, the White House wants to slash the budget for the Stratospheric Observatory for Infrared Astronomy (SOFIA, the 2.5 m telescope that flies around in a 747). The resulting savings would fund an extension of the Cassini mission to Saturn and its satellites, among other things. Before diving into this year's wonky details, you might want to read a general overview of the federal budget process or two meditations about NASA's strategic direction.

The Larger Context

When Republicans took over the majority in the House of Representatives in 2010, they started rancorous debates over the appropriations process. The country began lurching from fiscal crisis to crisis—or cliff or whatever metaphor you like. Federal agencies couldn't plan more than a few months ahead at a time. Our credit rating got downgraded. You know the story. Republicans and Democrats, represented by Representative Paul Ryan and Senator Patty Murray, respectively, announced a welcome compromise in 2013, which President Obama quickly signed into law.
The Bipartisan Budget Act of 2013 eliminated some of the onerous, indiscriminate cuts mandated by sequestration and set overall spending levels of $1.012 trillion for FY 2014 and $1.014 trillion for FY 2015. Notably, this act only imposes an overall cap. The various appropriations committees in Congress can decide what programs to cut or fund to get to that number.

The Stratospheric Observatory for Infrared Astronomy (SOFIA) sits on the ramp in Palmdale, CA as mission staff celebrate its 100th flight. SOFIA may be the latest casualty of our age of austerity.

In times of flat budgets, doing anything new requires ending operating missions, some of which may still be producing good science. This new budget request adheres to the requirements of the Bipartisan Budget Act. Because the Obama Administration believes that more government spending could significantly improve the national welfare, there is a supplemental request totaling $56 billion, called the Opportunity, Growth, and Security (OGS) Initiative. (I think they considered calling it the Annoying Republicans by Supporting Progressive Priorities Initiative but then decided to try staying non-confrontational.) This initiative would send an additional $886 million to NASA beyond the $17.5 billion in the primary request, but House Republicans have essentially declared this addition dead on arrival (i.e., the GOP will not agree to any appropriations beyond the 2013 caps). I'd bet that Congress accedes to the core of the President's request for NASA, because it's basically a continuation of what Congress has recently passed. But Congress is forever fickle.

The End of SOFIA?

SOFIA is an infrared telescope that flies in the stratosphere, above most of the water vapor that bedevils ground-based observers. (One Astrobites author took an informative tour in 2013.) Right now, NASA supplies 80% of its funding—the remaining 20% comes from the German space agency (DLR). The history of SOFIA is rife with cost overruns, technical delays, and even attempted cancellations. Now, astronomy missions are no strangers to these problems. But SOFIA's are unusually severe. Its cost per hour of observations (>$300,000) currently rivals NASA's most expensive missions, including the Hubble Space Telescope. The final instrument on SOFIA was just fully implemented, but the plane will shortly be grounded for half a year of maintenance. Sadly, all these factors make SOFIA low-hanging fruit for cutting in our age of austerity. All scientists wish that we could fund every innovative mission that produces quality science. But without perpetually increasing budgets, some things must be prioritized.

The FY 2015 budget request “proposes placing SOFIA into storage due to its high operating cost and budget constraints.” This ignominious fate could be avoided if Germany or some other partner were able to step in and supply the lost funding. The rationale is that “savings from SOFIA can have a larger impact supporting other science missions.” NASA hasn’t yet specified what missions will benefit from SOFIA’s potential downfall, but the Explorer program of targeted, small-scale astronomy missions and the extended Cassini mission are popular guesses.

Planetary scientists were worried that NASA would send Cassini plunging into Saturn’s atmosphere years ahead of schedule for want of sufficient funding. (Any watching aliens would likely shake their heads, or head-equivalents, in a mixture of bemusement and disgust.) Cassini is extremely popular, so it seemed likely that NASA would find money to keep it going. The question, however, was where. By proposing an effective end to American investment in SOFIA, it looks like NASA has answered.

### Emily Lakdawalla - The Planetary Society Blog

Intro Astronomy Class 5: Venus (continued) and Mars
Continue exploring Venus and begin looking at Mars in this video of class 5 of Bruce Betts' Introduction to Planetary Science and Astronomy class.

## March 06, 2014

### Christian P. Robert - xi'an's og

Le Monde puzzle [#855]

A Le Monde mathematical puzzle that reminds me of an earlier one:

Given ten tokens with different unknown weights, and a scale that can rank three tokens at a time, starting with ranking three tokens, what is the minimum number of actions necessary to rank the ten of them if (a) one token at a time is added, (b) one or two tokens are added? If no restriction is imposed on the token introduction, is there a more efficient strategy?
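
For orientation, a rough information-theoretic lower bound (my aside, not part of the puzzle or of the solutions discussed below): each weighing of three tokens has at most 3! = 6 distinguishable outcomes, so separating the 10! possible orderings requires at least log(10!)/log(6) ≈ 8.4, i.e. at least 9 weighings in the worst case, whatever the insertion strategy. In R:

ceiling(lfactorial(10) / log(6))   # = 9 weighings at least, in the worst case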

It indeed relates to earlier posts on sorting and ternary sorting. Checking further on StackOverflow I found this reply to a question about ternary sorting:

Average number of comparisons:

in ternary search = ((1/3)*1+(2/3)*2)*ln(n)/ln(3)~1.517*ln(n)
in binary search  =                 1*ln(n)/ln(2)~1.443*ln(n)


Worst number of comparisons:

in ternary search = 2*ln(n)/ln(3)~1.820*ln(n)
in binary search  = 1*ln(n)/ln(2)~1.443*ln(n)

albeit with no reference. So this somewhat answers part (c) of the question. (If only asymptotically, since this does not work for n=10! Even for n=100, it is way too large.) Looking for a solution to part (a), I looked at the performance of a dyadic sorting algorithm, partitioning recursively the already sorted part of the sample into three parts to locate each time the proper position of the new token. This led me to the following code

rang=function(x,ranj,taken){

mag=max(ranj)-min(ranj)+1
i1=ranj[1]-1+trunc(mag/3)
i2=ranj[1]-1+trunc(2*mag/3)
locrk=rank(c(taken[c(i1,i2)],x))

if (locrk[3]==1) return(ifelse(i1>3,
1+rang(x,ranj=1:(i1-1),taken),
1+(i1>1)))
if (locrk[3]==2) return(ifelse(i2-i1>4,
1+rang(x,ranj=(i1+1):(i2-1),taken),
1+(i1>1)))
if (locrk[3]==3) return(ifelse(mag-i2>2,
1+rang(x,ranj=(i2+1):mag,taken),
1+(i1>1)))
}

ternone=function(toke){

toke[1:3]=sort(toke[1:3])
counter=1
for (ul in (4:length(toke))){

counter=counter+rang(toke[ul],1:(ul-1),toke)
toke[1:ul]=sort(toke[1:ul])
}

return(counter)
}

ternone(sample(1:10))


which produces a minimum of eight (of course!) and a maximum of 20 uses of the scale (based on 10⁶ random permutations of (1,…,10)). Unsurprisingly, the solution proposed in Le Monde the week after does better as it obtains 16 as the worst case figure. Repeating the experiment with n=100 values, the maximum was 303 (with a mean of 270 and a minimum of 240 ternary comparisons).
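
For readers who want to reproduce the figures quoted above, a smaller-scale rerun of the same experiment might look like this (assuming the rang and ternone functions above have been sourced; the 10⁶ permutations of the original run are cut down here to keep it quick):

set.seed(1)
res <- replicate(1e4, ternone(sample(1:10)))   # weighings needed for 1e4 random orders
c(min = min(res), mean = mean(res), max = max(res))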

Moving to an insertion two tokens at a time, I tested a scheme where two new tokens were tested against the current median, then the median of one half, and so on until they split on both sides of this median. Here is the corresponding R code

ring=function(x,y,ranj,taken){

mag=max(ranj)-min(ranj)+1
i1=ranj[1]-1+trunc(mag/2)
locrk=rank(c(taken[i1],x,y))

if (locrk[1]==3) return(ifelse(i1>2,
1+ring(x,y,ranj=1:(i1-1),taken),
1+(i1==2)))
if (locrk[1]==2) return(ifelse(mag>4,
1+rang(min(x,y),ranj=min(ranj):(i1-1),taken)
+rang(max(x,y),(i1+1):max(ranj),taken),1))
if (locrk[1]==1) return(ifelse(mag-i1>2,
1+ring(x,y,ranj=(i1+1):mag,taken),
1+(mag-i1>1)))
}

terntwo=function(toke){

toke[1:3]=sort(toke[1:3])
counter=1+rang(toke[4],1:3,toke)
toke[1:4]=sort(toke[1:4])

for (ul in -1+2*(3:(length(toke)/2))){

counter=counter+ring(toke[ul],toke[ul+1],1:(ul-1),toke)
toke[1:ul]=sort(toke[1:ul])
ul=ul+1
}

return(counter)
}


leading to a value of 13.

This Feb. 19 issue of the Le Monde Science&Médecine leaflet also contains a two-page report on growing psychological work-related issues like stress, burn-out and depression in research labs, mostly connected with the increase in administrative duties and grant writing, duties most of us have not been trained for. There is also a short column by Étienne Ghys telling about the short-lived claim by Moukhtarbaï Otelbaev to have solved the Navier-Stokes problem, until Terry Tao wrote a paper stating why the proof could not work.

Filed under: Books, Kids, Statistics Tagged: bubblesort, Le Monde, mathematical puzzle, Moukhtarbaï Otelbaev, Navier-Stokes problem, quicksort, sorting

### Quantum Diaries

My Week as a Real Scientist

For a week at the end of January, I was a real scientist. Actually, I’m always a real scientist, but only for that week was I tweeting from the @realscientists Twitter account, which has a new scientist each week typing about his or her life and work. I tweeted a lot. I tweeted about the conference I was at. I tweeted about the philosophy of science and religion. I tweeted about how my wife, @CuratorPolly, wasn’t a big fan of me being called the “curator” of the account for the week. I tweeted about airplanes and very possibly bagels. But most of all I tweeted the answers to questions about particle physics and the LHC.

Realscientists wrote posts for the start and end of my week, and all my tweets for the week are at this Storify page. My regular twitter account, by the way, is @sethzenz.

I was surprised by how many questions people had when they were told that a real physicist at a relatively high-profile Twitter account was open for questions. A lot of the questions had answers that can already be found, often right here on Quantum Diaries! It got me thinking a bit about different ways to communicate to the public about physics. People really seem to value personal interaction, rather than just looking things up, and they interact a lot with an account that they know is tweeting in “real time.” (I almost never do a tweet per minute with my regular account, because I assume it will annoy people, but it’s what people expect stylistically from the @realscientists account.) So maybe we should do special tweet sessions from one of the CERN-related accounts, like @CMSexperiment, where we get four physicists around one computer for an hour and answer questions. (A lot of museums did a similar thing with #AskACurator day last September.) We’ve also discussed the possibility of doing an AMA on Reddit. And the Hangout with CERN series will be starting again soon!

But while you’re waiting for all that, let me tell you a secret: there are lots of physicists on Twitter. (Lists here and here and here, four-part Symmetry Magazine series here and here and here and here.) And I can’t speak for everyone, but an awful lot of us would answer questions if you had any. Anytime. No special events. Just because we like talking about our work. So leave us comments. Tweet at us. Your odds of getting an answer are pretty good.

In other news, Real Scientists is a finalist for the Shorty Award for social media’s best science. We’ll have to wait and see how they — we? — do in a head-to-head matchup with giants like NASA and Neil deGrasse Tyson. But I think it’s clear that people value hearing directly from researchers, and social media seems to give us more and more ways to communicate every year.

### Sean Carroll - Preposterous Universe

Effective Field Theory and Large-Scale Structure

Been falling behind on my favorite thing to do on the blog: post summaries of my own research papers. Back in October I submitted a paper with two Caltech colleagues, postdoc Stefan Leichenauer and grad student Jason Pollack, on the intriguing intersection of effective field theory (EFT) and cosmological large-scale structure (LSS). Now’s a good time to bring it up, as there’s a great popular-level discussion of the idea by Natalie Wolchover in Quanta.

So what is the connection between EFT and LSS? An effective field theory, as loyal readers know, is a way to describe what happens at low energies (or, equivalently, long wavelengths) without having a complete picture of what’s going on at higher energies. In particle physics, we can calculate processes in the Standard Model perfectly well without having a complete picture of grand unification or quantum gravity. It’s not that higher energies are unimportant, it’s just that all of their effects on low-energy physics can be summed up in their contributions to just a handful of measurable parameters.

In cosmology, we consider the evolution of LSS from tiny perturbations at early times to the splendor of galaxies and clusters that we see today. It’s really a story of particles — photons, atoms, dark matter particles — more than a field theory (although of course there’s an even deeper description in which everything is a field theory, but that’s far removed from cosmology). So the right tool is the Boltzmann equation — not the entropy formula that appears on his tombstone, but the equation that tells us how a distribution of particles evolves in phase space. However, the number of particles in the universe is very large indeed, so it’s the most obvious thing in the world to make an approximation by “smoothing” the particle distribution into an effective fluid. That fluid has a density and a velocity, but also has parameters like an effective speed of sound and viscosity. As Leonardo Senatore, one of the pioneers of this approach, says in Quanta, the viscosity of the universe is approximately equal to that of chocolate syrup.
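
To make the smoothing step slightly more concrete, here is a schematic version (my notation, not taken from the paper) of what "Boltzmann equation to effective fluid" means: one starts from the collisionless evolution of the phase-space density f(x, p, t) and takes its first two momentum moments, which close into continuity and Euler-like equations once the unresolved small-scale physics is bundled into an effective stress carrying a sound-speed and viscosity piece:

% Collisionless Boltzmann (Vlasov) equation for the phase-space density f:
\frac{\partial f}{\partial t} + \frac{\mathbf{p}}{m a^2}\cdot\nabla_{\mathbf{x}} f - m\,\nabla_{\mathbf{x}}\Phi\cdot\nabla_{\mathbf{p}} f = 0 .
% Its first two momentum moments give, schematically, a fluid description
% for the density contrast \delta and bulk velocity \mathbf{v}:
\dot{\delta} + \nabla\cdot\big[(1+\delta)\,\mathbf{v}\big] = 0 , \qquad
\dot{\mathbf{v}} + \mathcal{H}\,\mathbf{v} + (\mathbf{v}\cdot\nabla)\mathbf{v} = -\nabla\Phi - \tfrac{1}{\rho}\,\nabla\cdot\tau_{\rm eff} ,
% with an effective stress whose leading pieces look like a sound speed and a viscosity:
\tau_{\rm eff}^{ij} \sim \rho\left[c_s^2\,\delta\,\delta^{ij} - \tfrac{\nu}{\mathcal{H}}\,\partial^{(i} v^{j)} + \dots\right].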

So the goal of the EFT of LSS program (which is still in its infancy, although there is an important prehistory) is to derive the correct theory of the effective cosmological fluid. That is, to determine how all of the complicated churning dynamics at the scales of galaxies and clusters feeds back onto what happens at larger distances where things are relatively smooth and well-behaved. It turns out that this is more than a fun thing for theorists to spend their time with; getting the EFT right lets us describe what happens even at some length scales that are formally “nonlinear,” and therefore would conventionally be thought of as inaccessible to anything but numerical simulations. I really think it’s the way forward for comparing theoretical predictions to the wave of precision data we are blessed with in cosmology.

Here is the abstract for the paper I wrote with Stefan and Jason:

A Consistent Effective Theory of Long-Wavelength Cosmological Perturbations
Sean M. Carroll, Stefan Leichenauer, Jason Pollack

Effective field theory provides a perturbative framework to study the evolution of cosmological large-scale structure. We investigate the underpinnings of this approach, and suggest new ways to compute correlation functions of cosmological observables. We find that, in contrast with quantum field theory, the appropriate effective theory of classical cosmological perturbations involves interactions that are nonlocal in time. We describe an alternative to the usual approach of smoothing the perturbations, based on a path-integral formulation of the renormalization group equations. This technique allows for improved handling of short-distance modes that are perturbatively generated by long-distance interactions.

As useful as the EFT of LSS approach is, our own contribution is mostly on the formalism side of things. (You will search in vain for any nice plots comparing predictions to data in our paper — but do check out the references.) We try to be especially careful in establishing the foundations of the approach, and along the way we show that it’s not really a “field” theory in the conventional sense, as there are interactions that are nonlocal in time (a result also found by Carrasco, Foreman, Green, and Senatore). This is a formal worry, but doesn’t necessarily mean that the theory is badly behaved; one just has to work a bit to understand the time-dependence of the effective coupling constants.

Here is a video from a physics colloquium I gave at NYU on our paper. A colloquium is intermediate in level between a public talk and a technical seminar, so there are some heavy equations at the end but the beginning is pretty motivational. Enjoy!

### Lubos Motl - string vacua and pheno

Two fresh dark matter stories
Randall, Reece link DM and dinosaurs; strengthening DM signal in Central Milky Way

I want to mention two developments related to dark matter. First, Lisa Randall and Matthew Reece of Harvard have finally released a preprint – to appear in Physical Review Letters – linking extinctions and dark matter:
Dark Matter as a Trigger for Periodic Comet Impacts
As the "comments" (an entry in the arXiv form) point out, there are no dinosaurs in the paper so let me offer you a compensation.

Holy crap, we forgot to install a thermonuclear missile shield above Chick-Ku-Klux-Club in the Yucatan Peninsula (65 megayears before Christ).

At least one of the authors has intensely thought about various extinctions etc. at the same moment when she or he was writing the paper ;-), so the "no dinosaurs" comment is much less off-topic than some people might think.

They take one thing for granted, namely a periodicity of 35 million years in the crater record on the Earth's surface. And they try to link it to a model involving the galactic midplane, a hypothetical dark disk in that plane, and tidal effects on the Oort cloud (a far "Ukraine" of the Solar System; just to be sure, if you happen to be brainwashed by the idea that Ukraine has no permanent link to Russia, "Ukraine" does mean "borderland" or "march" [of Rus'] in the Slavic languages, and even Ukrainian scholars agree with that).

I am a non-expert and am confused by the periodicities in similar things. I know that comets and craters are different things from galactic cosmic rays, but I still don't fully understand why they should exhibit such different behavior when it comes to the periodicity. Note that the periodicity of the galactic cosmic rays used by Shaviv and Veizer is about 140 million years.

At any rate, it is a new paper linking terrestrial traces (in this case, numbers of craters) with some celestial cycles (linked to the inner structure of our galaxy). So when it comes to the basic ideas, I do think that Shaviv-Veizer should be cited by Randall-Reece and it's a mistake that it is not.

The second fresh paper on dark matter I want to mention is
The Characterization of the Gamma-Ray Signal from the Central Milky Way: A Compelling Case for Annihilating Dark Matter
by Daylan, Finkbeiner, Hooper, Linden, Portillo, Rodd, and Slatyer. A good popular story was printed in
Case for Dark Matter Signal Strengthens (by Wolchover, Simons Foundation's Quanta Magazine, copy in The Guardian)
and in Wired (by Adam Mann).

One looks somewhere at the galactic center using the Fermi gamma-ray telescope. She sees tons of frequencies and tries to subtract, as accurately as possible, the radiation from all the known sources (stars). Something is left, especially gamma rays with energies $$1$$-$$3\GeV$$. The question is whether this excess is due to some exciting new physics (a dark matter particle in this case) or some relatively mundane astrophysics ("millisecond pulsars" is currently the #1 favorite buzzword of those who prefer this conservative explanation).

They drew the map of "where the excess is coming from" in some more detail, with a more careful geographic subtraction, and the result is that it started to look more like dark matter and less like millisecond pulsars etc. That's why Finkbeiner, a long-term skeptic when it comes to the dark matter interpretation of similar signals, joined the large list of authors (although he's still more skeptical than some co-authors).

The features supporting the dark-matter interpretation include the apparent spherical shape of the DM halo needed to explain that (although it could be elongated a priori); and the extension of the source up to 10° from the Galactic center (where no millisecond pulsars seem to be located) which still seems to agree with a distribution expected for dark matter (they assume a generalized NFW halo profile everywhere).

Even if the photons are created from dark matter by some annihilation, it is hard to determine what the dark matter particle is (and by which process it produces the gamma rays and something else). Their favorite explanation is a dark matter WIMP particle in the $$31$$-$$40\GeV$$ interval that annihilates into the $$b\bar b$$ quark pair. I suppose that the two hadrons containing the bottom quark (or antiquark) then continue to decay so that the "few $$\GeV$$ photons" appear among the final products of the decay.

If the dark-matter explanation is real, there is a chance – one could even call it a prediction – that the same excess should be seen in the dwarf galaxies orbiting our Milky Way. Jennifer Siegal-Gaskins of Fermi (and Caltech) leaks the opinion that the excess could indeed be there, too. Dan Hooper says that a confirmation of this rumor by a big excess would make this game over. Well, he seems to be convinced that DM is the only possible explanation already now. ;-) It's not surprising given the purely numerical statistical significance: it's a whopping 40 standard deviations, well enough above 5 sigma! But catches could still be there. The number "40 sigma" results from a comparison of $$\chi^2$$ of fits with and without the dark matter halo (imprinted via their assumed decays).
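For readers wondering how a $$\chi^2$$ improvement translates into "sigmas", here is a minimal sketch based on Wilks' theorem for nested models. The $$\chi^2$$ values below are made up, chosen only so that the improvement is about 1600 and therefore reproduces the quoted 40 sigma; none of this accounts for systematic uncertainties in the diffuse-emission model.

```python
import numpy as np

# Made-up chi^2 values, chosen only so that the improvement is about 1600.
chi2_without_dm = 3250.0   # hypothetical fit without the dark-matter template
chi2_with_dm = 1650.0      # hypothetical fit with the dark-matter template
delta_chi2 = chi2_without_dm - chi2_with_dm

# Wilks' theorem: for nested models differing by one free parameter, the
# improvement delta_chi2 follows a chi^2 distribution with 1 degree of freedom
# under the null hypothesis, so the Gaussian-equivalent significance is
# simply sqrt(delta_chi2).
print(f"Delta chi^2 = {delta_chi2:.0f}  ->  about {np.sqrt(delta_chi2):.0f} sigma")
```

Purely statistical sigmas of this size mostly tell you that the interpretation is limited by the systematic uncertainties of the diffuse-emission model rather than by Poisson noise.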

Tracy Slatyer, who is now on the MIT faculty, talks about her surprise that the new data would indeed sharpen the picture. Her preferred WIMP mass is $$35\GeV$$. The particle is sometimes referred to as a "hooperon", not to be confused with a "hyperon", but "tracon" could be good, too. Juan Collar of CoGeNT etc., who would defend some "dark matter direct discovery" claims that no longer look too plausible, now says that such a particle may be detected by similar underground experiments if the sensitivity increases 100-fold.

The rest of Wolchover's article is about the sterile-neutrino-like X-ray excess and the possibility that both excesses could actually be genuine, something that could be compatible e.g. with the eXciting dark matter models.

### Peter Coles - In the Dark

Astronomy Look-alikes, No. 91

I wonder if anyone else has noticed the remarkable similarity between Don Everly, surviving member of vocal duet The Everly Brothers, and distinguished astronomer Martin Ward? I’m told the resemblance even extends to the wearing of cowboy boots. I wonder if, by any chance, they might be related?

### ZapperZ - Physics and Physicists

What Happens When You Cross A Bicycle With A Tricycle
Is this another case against cross-breeding and genetic modification? :)

Those crazy folks at Cornell produced a hybrid between a bicycle and a tricycle, and ended up with a vehicle that has a very weird steering capability.
Andy Ruina wanted to see if the bike/trike dichotomy was really true in practice: A vehicle perfectly balanced between tricycle and bicycle would negate the effect of gravity by both preventing it from exerting force with its rear wheels like a trike, and by allowing the rider to lean the bike at any angle without shifting her center of mass.

Ruina’s “bricycle,” as he calls it, is a bike equipped with two training wheels attached by means of a spring. When the spring is stiff, the bricycle turns like a trike. When the spring is loose, the bricycle turns like a bike. But at a certain point when the spring is just stiff enough, the training wheels and rear wheel offset the force of gravity on each other. At that stiffness, the bike becomes unsteerable and falls over if the rider tries to turn, Ruina reported today at the American Physical Society meeting in Denver.
The bricycle is really the same as the gravity-free pendulum. Assuming friction and so on are negligible, if we start from an upright position, the lean and the sideways displacement of the ground contact point are always in proportion to each other. So changing direction would cause both an ever-growing distance from the original line of travel and an ever-growing lean angle. The riders don't tolerate this. Instead, they maintain balance and are thus stuck going roughly straight.

So gravity, superficially the thing that makes it hard to balance a bicycle, is the thing that allows you to steer it.
Here's the video:

Zz.

### Marco Frasca - The Gauge Connection

Evidence of the square root of Brownian motion

A mathematical proof of existence of a stochastic process involving fractional exponents seemed out of the question after some mathematicians claimed it cannot exist. That claim is strongly tied to the current definition and may need revision if nature does not agree with it. Stochastic processes are very easy to simulate on a computer: very few lines of code can decide whether something works or not. Alfonso Farina, Matteo Sedehi and I have introduced the idea that the square root of a Wiener process yields the Schroedinger equation (see here or download a preprint here). This implies that one has to attach a meaning to the equation

$dX=(dW)^\frac{1}{2}.$

In a paper that appeared today on arXiv (see here) we have finally provided this proof: we were right. The idea is to solve such an equation by numerical methods. These methods are themselves a proof of existence. We used the Euler-Maruyama method, the simplest one, and we compared the results as shown in the following figure

a) Original Brownian motion. b) Same but squaring the formula for the square root. c) Formula of the square root taken as a stochastic equation. d) Same from the stochastic equation in this post.

There is no way to distinguish them from one another, and the original Brownian motion is completely recovered by taking the square of the square root process computed in three different ways. Each of these computations fully supports the conclusions we drew in our published paper. You can find the code to reproduce this figure in our arXiv paper. It is obtained by a Monte Carlo simulation with 10000 independent paths. You can play with it, changing the parameters as you like.
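For readers who want to try the general numerical approach, here is a minimal sketch of the Euler-Maruyama method applied to an ordinary SDE with a comparable Monte Carlo setup. It illustrates the standard method named above only; it does not implement the square-root process $dX=(dW)^{1/2}$, whose precise discretization is specified in the arXiv paper.

```python
import numpy as np

# Generic Euler-Maruyama integrator for an ordinary SDE  dX = a(X) dt + b(X) dW.
# This is only a sketch of the standard method; it does not implement the
# fractional process dX = (dW)^(1/2) from the paper.
def euler_maruyama(a, b, x0, t_max, n_steps, n_paths, seed=0):
    rng = np.random.default_rng(seed)
    dt = t_max / n_steps
    x = np.full(n_paths, x0, dtype=float)
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)  # Wiener increments
        x = x + a(x) * dt + b(x) * dW
    return x

# Example: 10000 independent paths of an Ornstein-Uhlenbeck process dX = -X dt + dW.
x_final = euler_maruyama(a=lambda x: -x, b=lambda x: np.ones_like(x),
                         x0=1.0, t_max=5.0, n_steps=1000, n_paths=10_000)
print(x_final.mean(), x_final.var())  # should be close to 0 and 1/2
```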

This paper has an important consequence: our current mathematical understanding of stochastic processes should be properly extended to account for our results. As a by-product, we have shown how, using Pauli matrices, this idea can be generalized to include spin, introducing a new class of stochastic processes in a Clifford algebra.

In conclusion, we would like to recall that, whatever your mathematical definition may be, a stochastic process is always a well-defined entity on numerical grounds. Tests can easily be performed, as we have shown here.

Farina, A., Frasca, M., & Sedehi, M. (2013). Solving Schrödinger equation via Tartaglia/Pascal triangle: a possible link between stochastic processing and quantum mechanics. Signal, Image and Video Processing, 8 (1), 27-37. DOI: 10.1007/s11760-013-0473-y

Marco Frasca & Alfonso Farina (2014). Numerical proof of existence of fractional Wiener processes. arXiv: 1403.1075v1


### Christian P. Robert - xi'an's og

Advances in scalable Bayesian computation [day #3]

We have now gone over the midpoint of our workshop Advances in Scalable Bayesian Computation with three talks in the morning and an open research or open air afternoon. (Maybe surprisingly, I chose to stay indoors and work on a new research topic rather than trying cross-country skiing!) If I must give a theme for the day, it would be (jokingly) corporate Big Data, as the three speakers spoke of problems and solutions connected with Google, Facebook and similar companies. First, Russ Salakhutdinov presented some hierarchical structures on multimedia data, like connecting images and text, with obvious applications at Google. The first part described Boltzmann machines with impressive posterior simulations of characters and images. (Check the video at 45:00.) Then Steve Scott gave us a Google-motivated entry to embarrassingly parallel algorithms, along the lines of papers recently discussed on the ‘Og. (Too bad we forgot to start the video at the very beginning!) One of the novel things in the talk (for me) was the inclusion of BART in this framework, with the interesting feature that using the whole prior on each machine was way better than using a fraction of the prior, as predicted by the theory! And Joaquin Quinonero Candela provided examples of machine learning techniques used by Facebook to suggest friends and ads in a most efficient way (the techniques themselves remaining hidden!).

Even though the rest of the day was free, the two hours of exercising between the pool in the early morning and the climbing wall in the late afternoon left me with no energy to experiment with curling with a large subsample of the conference attendees, much to my sorrow!


### Lubos Motl - string vacua and pheno

Brian Greene's talk on the state of string theory
Stephen and Vincent Della Pietra – who are not Capo di tutti capi because there are two of them; instead, they are fratelli – donated a few million dollars to Stony Brook and launched their lecture series. Recent speakers included (or the coming one will include) Wilczek, Linde, Veltman (and Schwarz).

In October 2011, Brian Greene gave a talk on "The State of String Theory", which was finally posted to YouTube, and if you can sacrifice 76 minutes (or part of them), you are invited to watch it.

There are lots of the usual and some unusual introductory comments related to string theory – its history, major conflicts in physics, small vs long scales, why the theory unifies, pants diagrams, extra dimensions and their physics.

Since 27:00, it's more about the "present" – how to extract phenomenology, nonperturbative formulations including the Matrix Theory Hamiltonian, AdS/CFT, dualities, M-theory, landscape, experimental tests, braneworlds, cosmology, inflation, singularities, impacts on enumerative geometry, quantum geometry, a report card, emergence and holography.

Questions begin at 1:03:40 and Brian's answers are often amusing. They suggest a significant gap between the diplomatic formulations he often likes to offer to large audiences and what he really thinks. With this variability, the hypothesis that he actually thinks the same thing about most of these issues (and the non-orthodox "interpretations" of quantum mechanics unfortunately don't belong to this list) as your humble correspondent is totally viable. I also have some spread of the "degree of diplomacy" depending on the context; it's just visibly smaller than Brian's.

The first man complains that Brian omitted the competitors. Brian says that he didn't see any competitors. People laugh and he says that it is a fair question and switches to the mode of Brian Greene the diplomat. He enumerates the guys who are doing loop quantum gravity and how much they believe that they have an alternative and so on. He wraps this flattering discussion by saying that it's good that people are working on various alternatives – and he's also happy that he personally doesn't work on that LQG pile of crap! ;-)

Brian offers an even more entertaining two-colored answer to a question about the "superluminal OPERA neutrinos" that were hot at that time. He says how smart these people are and how they have surely incorporated all the effects of the GPS synchronization, slowdown of light in the air, turbulence of the air, [he enumerates about 10 other possible sources of errors]. These people are so careful, you know. Brian keeps the consistently diplomatic language and says that he might still need some truly independent measurement to be compelled. The punch line arrives in his last sentence: "In fact, I would bet anything I hold dear that their result is wrong." The audience explodes in laughter because in the context, the sentence is a work of a comedian. Recall that the bogus "superluminal" OPERA result was an artifact of a loosely connected optical cable.

The third question is about the links of string theory and the Higgs boson. Brian says some of the same things I did but he also discusses the (currently indeed emerging) nightmare scenario (Higgs and nothing else at the LHC) and the reactions of the funding agencies to this "surprising and exciting" possibility.

When Greene is asked the overly popular question whether the landscape makes string theory unfalsifiable, Brian says that there are different ways to deal with this question. The first way, he shows, is offered by an "offensive cartoon". Just to be sure, the answer to the question is "PAK, you keep on talking like a bitch so I'm gonna slap you like a bitch". I have seen and given many answers to the question but I still think that this is the most accurate and appropriate one. Brian says that "I don't even want to show that [answer]" but I suspect he also thinks it's the best answer on the market. He says it's "not his perspective at all, it is completely inappropriate", so he offers another, less compelling answer. He points out it is not obvious that you can get anything out of the many vacua. The points are discrete and sparse, so after some measurements, you may produce predictions. Second, he says that the statistical properties of the vacua should be studied – clusters, groups, statistical predictions etc. He says that the problem is that it is hard to make it. I think that a more conceptual problem is that one can only make predictions if he has some probability measure and there's no known natural probabilistic distribution on the space of vacua (the egalitarian one is clearly wrong).

What is the difference between the choices of fields and parameters of the Standard Model on one side and properties of the compactifications in string theory on the other? Well, the former ones are continuous numbers etc.; the latter are just some discrete data. So the latter yield no undetermined, adjustable continuous dimensionless parameters. For practical applications – whether we really can say what's right or wrong – the framework of string theory is as predictive or unpredictive as the framework of quantum field theory.

Michael Douglas adds the last comment – a more careful analysis of early cosmology in string theory may actually tell us what the shape of extra dimensions ultimately looks like or prefers to look like which may make predictions possible. Brian Greene says "exactly" and that's the end of the talk.

### Symmetrybreaking - Fermilab/SLAC

Physics by hand

To encourage discussion and engagement, a physics forum has banned PowerPoint slides in favor of low-tech whiteboards.

A physicist is more than the sum of his or her slides.

That's why, about six months ago, organizers of a biweekly forum on Large Hadron Collider physics at Fermilab banned PowerPoint presentations in favor of old-fashioned, chalkboard-style talks.

“Without slides, the participants go further off-script, with more interaction and curiosity,” says Andrew Askew, an assistant professor of physics at Florida State University and a co-organizer of the forum. “We wanted to draw out the importance of the audience.”

## March 05, 2014

### arXiv blog

Can a Serious Game Improve Privacy Awareness on Facebook?

Understanding the nature of privacy on Facebook is not always straightforward. Now there’s a game that can help.

### The Great Beyond - Nature blog

Acid-bath stem-cell team releases tip sheet

A group of Japanese researchers, whose revolutionary method for producing stem cells simply has drawn questions from other biologists, has published more details of their protocol.

The authors, who developed an ‘acid-bath’ technique that others have so far been unable to reproduce, released technical tips with a press statement today and published them on Nature Protocol Exchange. The document is entitled ‘Essential technical tips for STAP cell conversion culture from somatic cells’.

In it, Haruko Obokata, Hitoshi Niwa and Yoshiki Sasai, all of the RIKEN Centre for Developmental Biology in Kobe, say that despite its “seeming simplicity”, the method requires special care. But it is “absolutely reproducible”, Niwa told Nature News.

The controversy began at the end of January when Obokata and colleagues released two papers in Nature detailing how stress — in the form of low pH or physical pressure — could trigger the reprogramming of a mouse’s cells into an embryonic state, a process they called stimulus-triggered acquisition of pluripotency (STAP).

Cells reprogrammed into this state are ideal for studying the development of disease or the effectiveness of drugs, and could also be transplanted to regenerate failing organs. Making another type of pluripotent stem cell, called induced pluripotent stem (iPS) cells, requires a complex recipe of chemical or genetic factors. Obokata’s simple technique made headlines around the world.

But after the papers were published, they came under attack for a number of reasons, including the presence of duplicated images, an apparently plagiarized passage and the abnormal presentation of certain data. This led some commenters to question the validity of the results.

Something that would resolve the controversy would be the replication of the results by another group, but so far there have only been reports of failed attempts.

Despite a media frenzy, especially in Japan, with headlines even suggesting the results are fraudulent, the authors have stood by the work. Today Niwa told Nature News that members of the team besides Obokata have replicated the bulk of the work and that others outside the laboratory have succeeded in the first crucial step, inducing Oct3/4 expression after the acid treatment.

But the authors admit that the procedure is more complicated than originally advertised, leading to the publication of the tips.

The 10-page document states: “Despite its seeming simplicity, this procedure requires special care in cell handling and culture conditions, as well as in the choice of starting cell population.” The authors also point to the importance of bringing the cells gradually to the brink of death — which kills some 80% of them after 2–3 days — to reach the “optimal level of sub-lethal stress”.

The tips break down the process into three sections: collection of tissue and treatment with low-pH needed to produce STAP cells; preparing the culture needed to convert STAP cells to STAP stem cells, which behave like iPS cells or embryonic stem cells; and preparing the culture needed to turn STAP cells into “FI cells”, which can form placenta.

The document includes 28 “important” tips, which note the necessity of starting with primary cells (as opposed to cultured cells); that mice less than a week old, especially male mice, gave better results; the recommendation of using non-adhesive plates, which allow cell mobility and cluster formation; the importance of getting the cell density right in culture; the recommendation of using mice of a specific genetic lineage and many other detailed hints for using the proper culture conditions.

Martin Pera, a stem-cell researcher at the University of Melbourne in Australia, says: “The details provided in Nature Protocol Exchange will undoubtedly be helpful to those trying to repeat these findings.” But Pera, who has not tried to make STAP cells, adds: “The additional information does not seem to me to reveal any key procedural detail without which it would be impossible to duplicate the work. It appears instead to reinforce and emphasise some aspects of the technique that were disclosed originally.”

Those who are trying to replicate the method are intrigued by the publication of the tips. Jacob Hanna, at the Weizmann Institute of Science in Rehovot, Israel, has made 10 batches of cells in an as-yet-unsuccessful effort to make STAP cells. He looks forward to trying some of the tips on culture conditions. “Some protocols can indeed be tricky and finicky and I commend the authors on making the effort to reach out to the scientific community,” he says. But he questions how the more complicated protocol would apply to another method of producing STAP cells advertised in the original article — putting pressure on the cell membranes. “I find that hard to imagine as a very complicated manipulation,” he says.

Qi Zhou, of the Institute of Zoology in Beijing, also appreciates the authors sharing all the details, as “some were overlooked” in his efforts to make STAP cells. The restrictions on the origin of the cell type to specific stage and gender “raise very interesting questions which may help to explain the underlying mechanism of STAP”, he says.

Niwa says that the original team is working on a “full protocol” that will make it easier to make STAP cells, but that won’t be available for at least a month. “We are not sure when it will happen because we are now trying to improve some point to enhance the reproducibility,” he says.

### The n-Category Cafe

Guest post by Nick Gurski

I have been thinking about various sorts of operads with my PhD student Alex Corner, and have become interested in the following very concrete question: what are examples of operads in the category of finite groups under the cartesian product? I don’t know any really interesting examples, but maybe you do! After the break I will explain why I got interested in this question, and tell you about some examples that I do know.

Alex and I started off thinking about various sorts of things you might do with operads in $\mathbf{Cat}$, and were eventually forced into what we currently call an action operad. This is an operad $G$ whose job it is to act on the objects of other operads. The key examples to keep in mind are the terminal operad (each set is just a singleton), the symmetric operad (the $n$th set is the $n$th symmetric group), and the braid operad (the $n$th set is the $n$th braid group). The technical definition involves an operad $G$, a group structure on each set $G(n)$, a map of operads $\pi: G \rightarrow \Sigma$ to the symmetric operad which is levelwise a group homomorphism, and a final condition (when it makes sense) relating operadic composition, $\mu$, with group multiplication:

$\mu(g; f_1, \ldots, f_n) \cdot \mu(g'; f_1', \ldots, f_n') = \mu\big(g g'; f_{\pi(g')(1)} f_1', \ldots, f_{\pi(g')(n)} f_n'\big).$

These ideas have cropped up before, for example in Nathalie Wahl’s thesis or this preprint of Wenbin Zhang. Once you have this definition, you can define operads which are equivariant with respect to $G$: you have an operad $P$, a group action of $G(n)$ on $P(n)$ for each $n$, and some equivariance conditions that generalize the equivariance conditions for a symmetric operad.

This isn’t the only thing you can do with an action operad, you can also think about the 2-monad on $\mathbf{Cat}$ whose algebras are strict monoidal categories where $G(n)$ acts naturally on $n$-fold tensor products. If you do this with the symmetric operad, you get symmetric strict monoidal categories (or permutative categories, if you are a topologist); if you do this with the (ribbon) braid operad, you get (ribbon) braided strict monoidal categories; and if you do this with the action operad of all terminal groups, you get back plain old strict monoidal categories. The only other “naturally-occurring” example of an action operad that I know of is the operad of $n$-fruit cactus groups, $J_n$. These groups come up in the representation theory of quantum groups, particularly the theory of crystals, and the monoidal structure you get out here is something Drinfeld called a coboundary category. I can give you a generators-and-relations definition of these groups, at which point I would have completely exhausted my understanding of this operad. The best reference that I know of is the paper Crystals and coboundary categories by Henriques and Kamnitzer.

What does this have to do with my question about operads of finite groups? Well, as it turns out, the structure map for an action operad $\pi: G \rightarrow \Sigma$ only has two options: it can be surjective, or it can be the zero map (i.e., everything maps to the identity permutation). Furthermore, that condition I wrote down relating group multiplication and operadic composition says that giving an action operad with $\pi$ the zero map is equivalent to giving an operad in which the operadic composition maps all preserve group multiplication. Alex and I already showed that the operadic composition of all identity elements is the identity element in the target, in other words an action operad with $\pi$ the zero map is just an operad in the category of groups using the cartesian product.

You can take kernels of maps between action operads, so in particular given any action operad $G$ you can take the kernel of $\pi: G \rightarrow \Sigma$; this gives you an operad in groups. For the examples above, you get finite groups when $G$ is the terminal operad or $G = \Sigma$ (as the terminal operad is obviously the kernel in the case of $G = \Sigma$), but for the rest of the examples you get an operad in the category of groups, but most of those groups are infinite. The groups involved are the so-called pure versions: pure braids, pure ribbon braids, and pure $n$-fruit cacti. One can then think of an action operad with surjective map $\pi$ as being an extension of the symmetric operad by an operad in the category of groups (the “pure” version), and that action operad is finite if and only if the operad in the category of groups is one containing only finite groups. We can translate this back into thinking about monoidal categories by then noting if we have some notion of strict monoidal category in which $n$th tensor powers come equipped with a natural action of a finite group $G(n)$ for all $n$, then we must be able to dig up an operad in the category of finite groups.

Now let’s talk concrete examples: what operads do I know in the category of finite groups? Well, there is obviously the terminal operad, but I can go very slightly further in that I can tell you how to construct some new ones. Here are two methods you can use to construct operads of finite groups.

• Let $A$ be a finite abelian group. Then there is an operad $\underline{A}$ where $\underline{A}(n) = A^n$ (the power here is a cartesian power). The operadic composition map $A^n \times A^{k_1} \times \cdots \times A^{k_n} \rightarrow A^{\Sigma k_i}$ takes the vector $(a_1, \ldots, a_n)$ in the first coordinate and duplicates the $i$th coordinate $k_i$ times, then adds the result to the vector you get by just concatenating the $n$ vectors in the $A^{k_i}$. Here $A$ must be abelian as it appears as $\underline{A}(1)$, and $G(1)$ must be abelian for any action operad by an Eckmann-Hilton argument. (A small computational sketch of this composition follows the list.)
• Now let $G$ be any finite group, but in fact you will see that we don’t use anything interesting about the group structure here. There is an operad $G^{c2}$ with $G^{c2}(n) = G^{\binom{n}{2}}$ given the pointwise group structure. You should think of this as the set of functions from $\{ (i,j) : 1 \leq i \lt j \leq n \}$ to $G$. Operad composition is a map $G^{\binom{n}{2}} \times G^{\binom{k_1}{2}} \times \cdots \times G^{\binom{k_n}{2}} \rightarrow G^{\binom{\Sigma k_i}{2}}$. If we are given $1 \leq a \lt b \leq \sum k_i$, we must give back an element of $G$. If there is some $r$ such that $\sum_{i=1}^{r-1} k_i \leq a \lt b \lt \sum_{i=1}^{r} k_i$, then we use the function coming from $G^{c2}(r)$ evaluated on $1 \leq a + 1 - \sum_{i=1}^{r-1} k_i \lt b + 1 - \sum_{i=1}^{r-1} k_i \leq k_r$. If not, then there exist $r \lt s$ such that $a$ lies in the $r$th “interval” as we had before and $b$ lies in the $s$th interval, so then you use the function coming from $G^{c2}(n)$ evaluated on $r \lt s$.
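Since the first construction is completely explicit, here is a minimal computational sketch of the composition map of $\underline{A}$ for the cyclic group $A = \mathbb{Z}/5$; the function name and the choice of $A$ are mine, purely for illustration.

```python
# Toy illustration of the operad built from a finite abelian group A, here
# A = Z/5 with addition: A_underline(n) = A^n, and composition duplicates the
# i-th coordinate of the outer vector k_i times, then adds (in A) the
# concatenation of the inner vectors.
m = 5  # A = Z/m

def compose(g, fs):
    """g in A^n, fs = [f_1, ..., f_n] with f_i in A^{k_i}; returns an element of A^{k_1 + ... + k_n}."""
    assert len(g) == len(fs)
    duplicated = [a for a, f in zip(g, fs) for _ in f]   # repeat g_i exactly k_i times
    concatenated = [x for f in fs for x in f]            # concatenate the f_i
    return [(a + x) % m for a, x in zip(duplicated, concatenated)]

# Example with n = 2, k_1 = 3, k_2 = 1:
print(compose([2, 4], [[1, 0, 3], [2]]))  # -> [3, 2, 0, 1]
```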

I am reasonably content with the first of these constructions, I understand how to do the second but don’t really know where it comes from, and I have some ideas about how one could try to insert the finite group of their choice as $G(0)$ but haven’t checked the details, and know basically nothing else. Furthermore, I don’t know how these interact with each other, or how you can form extensions of $\Sigma$ with them outside of some obvious constructions.

Those are just the two straightforward approaches that I know of to construct operads in the category of finite groups. You can also try to construct these operads from operads of topological spaces or simplicial sets, but once again I don’t know of an example that produces finite things (apart from ones giving the groups above). Do you know any others?

### Peter Coles - In the Dark

Now This is What You Call a Gig!

No time for a proper post today, but I couldn’t resist reblogging this advertisement for what must have been an amazing concert with an amazing lineup; so amazing that Pharaoh Sanders and Albert Ayler, who were also there, didn’t even make it onto the poster!

Originally posted on thejazzword:

It was 1966…Pharaoh Sanders and Albert Ayler were also there playing with Coltrane’s group

View original

### John Baez - Azimuth

Markov Models of Social Change (Part 2)

guest post by Vanessa Schweizer

This is my first post to Azimuth. It’s a companion to the one by Alastair Jamieson-Lane. I’m an assistant professor at the University of Waterloo in Canada with the Centre for Knowledge Integration, or CKI. Through our teaching and research, the CKI focuses on integrating what appears, at first blush, to be drastically different fields in order to make the world a better place. The very topics I would like to cover today, which are mathematics and policy design, are an example of our flavour of knowledge integration. However, before getting into that, perhaps some background on how I got here would be helpful.

### The conundrum of complex systems

For about eight years, I have focused on various problems related to long-term forecasting of social and technological change (long-term meaning in excess of 10 years). I became interested in these problems because they are particularly relevant to how we understand and respond to global environmental changes such as climate change.

In case you don’t know much about global warming or what the fuss is about, part of what makes the problem particularly difficult is that the feedback from the physical climate system to human political and economic systems is exceedingly slow. It is so slow, that under traditional economic and political analyses, an optimal policy strategy may appear to be to wait before making any major decisions – that is, wait for scientific knowledge and technologies to improve, or at least wait until the next election [1]. Let somebody else make the tough (and potentially politically unpopular) decisions!

The problem with waiting is that the greenhouse gases that scientists are most concerned about stay in the atmosphere for decades or centuries. They are also churned out by the gigatonne each year. Thus the warming trends that we have experienced for the past 30 years, for instance, are the cumulative result of emissions that happened not only recently but also long ago—in the case of carbon dioxide, as far back as the turn of the 20th century. The world in the 1910s was quainter than it is now, and as more economies around the globe industrialize and modernize, it is natural to wonder: how will we manage to power it all? Will we still rely so heavily on fossil fuels, which are the primary source of our carbon dioxide emissions?

Such questions are part of what makes climate change a controversial topic. Present-day policy decisions about energy use will influence the climatic conditions of the future, so what kind of future (both near-term and long-term) do we want?

### Futures studies and trying to learn from the past

Many approaches can be taken to answer the question of what kind of future we want. An approach familiar to the political world is for a leader to espouse his or her particular hopes and concerns for the future, then work to convince others that those ideas are more relevant than someone else’s. Alternatively, economists do better by developing and investigating different simulations of economic developments over time; however, the predictive power of even these tools drops off precipitously beyond the 10-year time horizon.

The limitations of these approaches should not be too surprising, since any stockbroker will say that when making financial investments, past performance is not necessarily indicative of future results. We can expect the same problem with rhetorical appeals, or economic models, that are based on past performances or empirical (which also implies historical) relationships.

### A different take on foresight

A different approach avoids the frustration of proving history to be a fickle tutor for the future. By setting aside the supposition that we must be able to explain why the future might play out a particular way (that is, to know the ‘history’ of a possible future outcome), alternative futures 20, 50, or 100 years hence can be conceptualized as different sets of conditions that may substantially diverge from what we see today and have seen before. This perspective is employed in cross-impact balance analysis, an algorithm that searches for conditions that can be demonstrated to be self-consistent [3].
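To make the idea concrete, here is a minimal sketch of the self-consistency check at the heart of cross-impact balance analysis, in the spirit of Weimer-Jehle's formulation [3]: a scenario (one state per factor) counts as self-consistent if every chosen state receives at least as much support from the other chosen states as any alternative state of the same factor would. The factors, states, and judgment scores below are invented for illustration; a real analysis elicits them from experts and uses many more factors.

```python
from itertools import product

# Minimal sketch of a cross-impact balance (CIB) consistency check.
factors = {
    "governance": ["weak", "strong"],
    "economy":    ["stagnant", "growing"],
    "emissions":  ["high", "low"],
}

# impact[(source factor, source state)][(target factor, target state)] =
# strength with which the source state promotes the target state (invented).
impact = {
    ("governance", "strong"): {("economy", "growing"): 2, ("emissions", "low"): 1},
    ("governance", "weak"):   {("economy", "stagnant"): 2, ("emissions", "high"): 1},
    ("economy", "growing"):   {("governance", "strong"): 1, ("emissions", "high"): 1},
    ("economy", "stagnant"):  {("governance", "weak"): 1},
}

def support(scenario, factor, state):
    """Total impact received by (factor, state) from the other factors' chosen states."""
    return sum(impact.get((f, s), {}).get((factor, state), 0)
               for f, s in scenario.items() if f != factor)

def is_consistent(scenario):
    """Every chosen state must be supported at least as strongly as any alternative."""
    return all(support(scenario, f, scenario[f]) >=
               max(support(scenario, f, s) for s in states)
               for f, states in factors.items())

names = list(factors)
for combo in product(*(factors[f] for f in names)):   # brute force is fine at this size
    scenario = dict(zip(names, combo))
    if is_consistent(scenario):
        print("self-consistent scenario:", scenario)
```

The self-consistent scenarios play the role of the system attractors discussed further below; the interesting methodological questions are about searching this space efficiently once brute force is no longer an option.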

Findings from cross-impact balance analyses have been informative for scientific assessments produced by the Intergovernmental Panel on Climate Change, or IPCC. To present a coherent picture of the climate change problem, the IPCC has coordinated scenario studies across economic and policy analysts as well as climate scientists since the 1990s. Prior to the development of the cross-impact balance method, these researchers had to identify appropriate ranges for rates of population growth, economic growth, energy efficiency improvements, etc. through their best judgment.

A retrospective using cross-impact balances on the first Special Report on Emissions Scenarios found that the researchers did a good job in many respects. However, they underrepresented the large number of alternative futures that would result in high greenhouse gas emissions in the absence of climate policy [4].

As part of the latest update to these coordinated scenarios, climate change researchers decided it would be useful to organize alternative futures according to socio-economic conditions that pose greater or fewer challenges to mitigation and adaptation. Mitigation refers to policy actions that decrease greenhouse gas emissions, while adaptation refers to reducing harms due to climate change or to taking advantage of benefits. Some climate change researchers argued that it would be sufficient to consider alternative futures where challenges to mitigation and adaptation co-varied, e.g. three families of futures where mitigation and adaptation challenges would be low, medium, or high.

Instead, cross-impact balances revealed that mixed-outcome futures—such as socio-economic conditions simultaneously producing fewer challenges to mitigation but greater challenges to adaptation—could not be completely ignored. This counter-intuitive finding, among others, brought the importance of quality of governance to the fore [5].

Although it is generally recognized that quality of governance—e.g. control of corruption and the rule of law—affects quality of life [6], many in the climate change research community have focused on technological improvements, such as drought-resistant crops, or economic incentives, such as carbon prices, for mitigation and adaptation. The cross-impact balance results underscored that should global patterns of quality of governance across nations take a turn for the worse, poor governance could stymie these efforts. This is because the influence of quality of governance is pervasive; where corruption is permitted at the highest levels of power, it may be permitted at other levels as well—including levels that are responsible for building schools, teaching literacy, maintaining roads, enforcing public order, and so forth.

The cross-impact balance study revealed this in the abstract, as summarized in the example matrices below. Alastair included a matrix like these in his post, where he explained that numerical judgments in such a matrix can be used to calculate the net impact of simultaneous influences on system factors. My purpose in presenting these matrices is a bit different, as the matrix structure can also explain why particular outcomes behave as system attractors.

In this example, a solid light gray square means that the row factor directly influences the column factor some amount, while white space means that there is no direct influence:

Dark gray squares along the diagonal have no meaning, since everything is perfectly correlated to itself. The pink squares highlight the rows for the factors “quality of governance” and “economy.” The importance of these rows is more apparent here; the matrix above is a truncated version of this more detailed one:


The pink rows are highlighted because of a striking property of these factors. They are the two most influential factors of the system, as you can see from how many solid squares appear in their rows. The direct influence of quality of governance is second only to the economy. (Careful observers will note that the economy directly influences quality of governance, while quality of governance directly influences the economy). Other scholars have meticulously documented similar findings through observations [7].

As a method for climate policy analysis, cross-impact balances fill an important gap between genius forecasting (i.e., ideas about the far-off future espoused by one person) and scientific judgments that, in the face of deep uncertainty, are overconfident (i.e. neglecting the ‘fat’ or ‘long’ tails of a distribution).

### Wanted: intrepid explorers of future possibilities

However, alternative visions of the future are only part of the information that’s needed to create the future that is desired. Descriptions of courses of action that are likely to get us there are also helpful. In this regard, the post by Jamieson-Lane describes early work on modifying cross-impact balances for studying transition scenarios rather than searching primarily for system attractors.

This is where you, as the mathematician or physicist, come in! I have been working with cross-impact balances as a policy analyst, and I can see the potential of this method to revolutionize policy discussions—not only for climate change but also for policy design in general. However, as pointed out by entrepreneurship professor Karl T. Ulrich, design problems are NP-complete. Those of us with lesser math skills can be easily intimidated by the scope of such search problems. For this reason, many analysts have resigned themselves to ad hoc explorations of the vast space of future possibilities. However, some analysts like me think it is important to develop methods that do better. I hope that some of you Azimuth readers may be up for collaborating with like-minded individuals on the challenge!

### References

The graph of carbon emissions is from reference [2]; the pictures of the matrices are adapted from reference [5]:

[1] M. Granger Morgan, Milind Kandlikar, James Risbey and Hadi Dowlatabadi, Why conventional tools for policy analysis are often inadequate for problems of global change, Climatic Change 41 (1999), 271–281.

[2] T.F. Stocker et al., Technical Summary, in Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change (2013), T.F. Stocker, D. Qin, G.-K. Plattner, M. Tignor, S.K. Allen, J. Boschung, A. Nauels, Y. Xia, V. Bex, and P.M. Midgley (eds.) Cambridge University Press, New York.

[3] Wolfgang Weimer-Jehle, Cross-impact balances: a system-theoretical approach to cross-impact analysis, Technological Forecasting & Social Change 73 (2006), 334–361.

[4] Vanessa J. Schweizer and Elmar Kriegler, Improving environmental change research with systematic techniques for qualitative scenarios, Environmental Research Letters 7 (2012), 044011.

[5] Vanessa J. Schweizer and Brian C. O’Neill, Systematic construction of global socioeconomic pathways using internally consistent element combinations, Climatic Change 122 (2014), 431–445.

[6] Daniel Kaufmann, Aart Kraay and Massimo Mastruzzi, Worldwide Governance Indicators (2013), The World Bank Group.

[7] Daron Acemoglu and James Robinson, Why Nations Fail: The Origins of Power, Prosperity, and Poverty. Website.

### Lubos Motl - string vacua and pheno

Particle fever: where to see
Particle Fever, David Kaplan's universally praised full-fledged movie about particle physics, is coming to movie theaters in the U.S. today.

Lots of news outlets discuss what the movie is all about.

It took 7 years for the movie to be shot, and so on. It is about both theorists and experimenters, and the emotions they have been going through.

For your convenience, here is a copy of the dates and places where the film will be shown. The film's website also tells you about the film festivals where the movie is participating.

March 5
New York, NY - Film Forum

March 6
Santa Barbara, CA - UCSB

March 7
Los Angeles, CA - Nuart
Toronto, ON – The Bloor
Irvine, CA – University Town Center

March 8
Sioux Falls, SD – Cinema Falls

March 14
Seattle, WA - Landmark
San Francisco, CA - Landmark
Berkeley, CA - Landmark
Bellingham, WA - Pickford Film Center
Scottsdale, AZ - Harkins Camelview
Chicago, IL - Music Box
Naperville, IL – AMC Showplace 16
Nashville, TN - Belcourt

March 16
Sioux Falls, SD – Cinema Falls

March 19
Ithaca, NY - Cornell Cinema

March 21
Cambridge, MA - Kendall Square
Minneapolis, MN - Landmark
Washington D.C. - E Street
Baltimore, MD - Charles
San Diego, CA - Landmark
Denver, CO – Landmark

March 28
Santa Fe, NM - CCA
Columbus, OH - Gateway
Kansas City, KS - Tivoli
Atlanta, GA – Midtown Art
Houston, TX – Sundance Cinemas

March 31
Ann Arbor, MI – Michigan Theater

April 3
Oklahoma City, OK – Oklahoma City Museum of Art

April 4
Boise, ID - Flicks
Charlotte, NC – Manor
Charlottesville, VA – Downtown Mall

April 11
Albany, NY – Spectrum 8

April 15
Portland, OR – Oregon Museum of Science

April 18
Austin, TX – Arbor
Knoxville, TN – Downtown West
Eugene, OR – Bijou Art

April 25
Lincoln, NE – Mary Riepma Ross Film Center

### Clifford V. Johnson - Asymptotia

Cloudy with a chance of Physics
I don't know. That's a bit of a desperate title. But in exchange, a rather nice cloud formation, don't you think? This was from the sky over Los Angeles yesterday evening (a shot of the sky in the other direction is to the right), and my first thought was "what's the physics behind these beautiful structures?" There's enough regularity here to expect there to be a mechanism, but I do not know what it is. Some combination of atmospheric conditions like wind speed, temperature, perhaps some layering of different bodies of air, and so forth, resulted in this and I'd love to know more. What factors set the roughly regular size of the structures, their pretty uniform distance apart, etc? (These are typical physicist's questions, in case you're [...]

## March 04, 2014

### Quantum Diaries

Particle Beam Cancer Therapy: The Promise and Challenges

Advances in accelerators built for fundamental physics research have inspired improved cancer treatment facilities. But will one of the most promising—a carbon ion treatment facility—be built in the U.S.? Participants at a symposium organized by Brookhaven Lab for the 2014 AAAS meeting explored the science and surrounding issues.

by Karen McNulty Walsh

Accelerator physicists are natural-born problem solvers, finding ever more powerful ways to generate and steer particle beams for research into the mysteries of physics, materials, and matter. And from the very beginning, this field born at the dawn of the atomic age has actively sought ways to apply advanced technologies to tackle more practical problems. At the top of the list—even in those early days— was taking aim at cancer, the second leading cause of death in the U.S. today, affecting one in two men and one in three women.

Using beams of accelerated protons or heavier ions such as carbon, oncologists can deliver cell-killing energy to precisely targeted tumors—and do so without causing extensive damage to surrounding healthy tissue, eliminating the major drawback of conventional radiation therapy using x-rays.

“This is cancer care aimed at curing cancer, not just treating it,” said Ken Peach, a physicist and professor at the Particle Therapy Cancer Research Institute at Oxford University.

Peach was one of six participants in a symposium exploring the latest advances and challenges in this field—and a related press briefing attended by more than 30 science journalists—at the 2014 meeting of the American Association for the Advancement of Science in Chicago on February 16. The session, “Targeting Tumors: Ion Beam Accelerators Take Aim at Cancer,” was organized by the U.S. Department of Energy’s (DOE’s) Brookhaven National Laboratory, an active partner in an effort to build a prototype carbon-ion accelerator for medical research and therapy. Brookhaven Lab is also currently the only place in the U.S. where scientists can conduct fundamental radiobiological studies of how beams of ions heavier than protons, such as carbon ions, affect cells and DNA.

Participants in a symposium and press briefing exploring the latest advances and challenges in particle therapy for cancer at the 2014 AAAS meeting: Eric Colby (U.S. Department of Energy), Jim Deye (National Cancer Institute), Hak Choy (University of Texas Southwestern Medical Center), Kathryn Held (Harvard Medical School and Massachusetts General Hospital), Stephen Peggs (Brookhaven National Laboratory and Stony Brook University), and Ken Peach (Oxford University). (Credit: AAAS)

“We could cure a very high percentage of tumors if we could give sufficiently high doses of radiation, but we can’t because of the damage to healthy tissue,” said radiation biologist Kathryn Held of Harvard Medical School and Massachusetts General Hospital during her presentation. “That’s the advantage of particles. We can tailor the dose to the tumor and limit the amount of damage in the critical surrounding normal tissues.”

Yet despite the promise of this approach and the emergence of encouraging clinical results from carbon treatment facilities in Asia and Europe, there are currently no carbon therapy centers operating in the U.S.

Participants in the Brookhaven-organized session agreed: That situation has to change—especially since the very idea of particle therapy was born in the U.S.

Physicists as pioneers

“When Harvard physicist Robert Wilson, who later became the first director of Fermilab, was asked to explore the potential dangers of proton particle radiation [just after World War II], he flipped the problem on its head and described how proton beams might be extremely useful—as effective killers of cancer cells,” said Stephen Peggs, an accelerator physicist at Brookhaven Lab and adjunct professor at Stony Brook University.

As Peggs explained, the reason is simple: Unlike conventional x-rays, which deposit energy—and cause damage—all along their path as they travel through healthy tissue en route to a tumor (and beyond it), protons and other ions deposit most of their energy where the beam stops. Using magnets, accelerators can steer these charged particles left, right, up, and down and vary the energy of the beam to precisely place the cell-killing energy right where it’s needed: in the tumor.
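As a purely schematic illustration of that difference (not clinical data), here is a toy depth-dose comparison: a crude exponential fall-off for x-rays versus a cartoon Bragg peak for protons. All shapes and numbers below are invented.

```python
import numpy as np

# Schematic depth-dose cartoon (not clinical data): x-rays attenuate roughly
# exponentially with depth, while a proton beam deposits most of its energy
# in a narrow Bragg peak near the end of its range and almost nothing beyond.
depth = np.linspace(0.0, 20.0, 401)   # depth in tissue [cm]
tumour_depth = 12.0                   # hypothetical tumour depth [cm]

xray_dose = np.exp(-depth / 12.0)     # crude exponential fall-off

proton_range = tumour_depth           # beam energy tuned so the range ends at the tumour
plateau = np.where(depth <= proton_range, 0.3 + 0.2 * (depth / proton_range) ** 2, 0.0)
bragg_peak = np.exp(-(depth - proton_range) ** 2 / (2 * 0.3 ** 2))
proton_dose = plateau + bragg_peak

for d in (6.0, tumour_depth, 16.0):   # upstream tissue, tumour, downstream tissue
    i = np.argmin(np.abs(depth - d))
    print(f"depth {d:4.1f} cm:  x-ray {xray_dose[i]:.2f}   proton {proton_dose[i]:.2f}")
```

In real treatments the narrow peak is spread over the tumour volume by combining beams of several energies, but the qualitative contrast with the x-ray curve is the point of this cartoon.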

The first implementation of particle therapy used helium and other ions generated by the Bevatron at Berkeley Lab. Those spin-off studies “established a foundation for all subsequent ion therapy,” Peggs said. And as accelerators for physics research grew in size, pioneering experiments in particle therapy continued, operating “parasitically” until the very first accelerator built for hospital-based proton therapy was completed with the help of DOE scientists at Fermilab in 1990.

But even before that machine left Illinois for Loma Linda University Medical Center in California, physicists were thinking about how it could be made better. The mantra of making machines smaller, faster, cheaper—and capable of accelerating more kinds of ions—has driven the field since then.

Advances in magnet technology, including compact superconducting magnets and beam-delivery systems developed at Brookhaven Lab, hold great promise for new machines. Peggs is working to incorporate these technologies in a prototype ‘ion Rapid Cycling Medical Synchrotron’ (iRCMS) capable of delivering protons and/or carbon ions for radiobiology research and for treating patients.

Brookhaven Lab accelerator physicist Stephen Peggs with magnet technology that could reduce the size of particle accelerators needed to steer heavy ion beams and deliver cell-killing energy to precisely targeted tumors while sparing surrounding healthy tissue.

Small machine, big particle impact

The benefits of using charged particles heavier than protons (e.g., carbon ions) stem not only from their physical properties—they stop and deposit their energy over an even smaller and better targeted tumor volume than protons—but also from a range of biological advantages they have over x-rays.

As Kathryn Held elaborated in her talk, compared with x-ray photons, “carbon ions are much more effective at killing tumor cells. They put a huge hole through DNA compared to the small pinprick caused by x-rays, which causes clustered or complex DNA damage that is less accurately repaired between treatments—less repaired, period—and thus more lethal [to the tumor].” Carbon ions also appear to be more effective than x-rays at killing oxygen-deprived tumor cells, and might be most effective in fewer higher doses, “but we need more basic biological studies to really understand these effects,” Held said.

Different types of radiation treatment cause different kinds of damage to the DNA in a tumor cell. X-ray photons (top arrow) cause fairly simple damage (purple area) that cancer cells can sometimes repair between treatments. Charged particles—particularly ions heavier than protons (bottom arrow)—cause more and more complex forms of damage, resulting in less repair and a more lethal effect on the tumor. (Credit: NASA)

Held conducts research at the NASA Space Radiation Laboratory (NSRL) at Brookhaven Lab, an accelerator-based facility designed to understand the risks to, and design protections for, future astronauts exposed to radiation. Much of that research is also relevant to the mechanisms and basic radiobiological responses that apply to the treatment of cancer. But additional facilities and funding are needed for research specifically aimed at understanding the radiobiological effects of heavier ions for potential cancer therapies, Held emphasized.

Hak Choy, a radiation oncologist and chair in the Department of Radiation Oncology at the University of Texas Southwestern Medical Center, presented compelling clinical data on the benefits of proton particle therapy, including improved outcomes and reduced side effects when compared with conventional radiation, particularly for treating tumors in sensitive areas such as the brain and spine and in children. “When you can target the tumor and spare critical tissue you get fewer side effects,” he said.

Data from Japan and Europe suggest that carbon ions could be three or four times more biologically potent than protons, Choy said, backing that claim with impressive survival statistics for certain types of cancers where carbon therapy surpassed protons, and was even better than surgery for one type of salivary gland cancer. “And carbon therapy is noninvasive,” he emphasized.

To learn more about this promising technology and the challenges of building a carbon ion treatment/research facility in the U.S., including perspectives from the National Cancer Institute, DOE and a discussion about economics, read the full summary of the AAAS symposium here: http://www.bnl.gov/newsroom/news.php?a=24672.

Karen McNulty Walsh is a science writer in the Media & Communications Office at Brookhaven National Laboratory.

### Symmetrybreaking - Fermilab/SLAC

There’s an app for that

From simulators and reference tools to fun and games, physics-related mobile applications run the gamut. Some of the apps were designed by physicists for use by physicists, while others are intended to inform the general public about physics laws and the field's grandest experiments, or offer an entertaining escape.

### arXiv blog

How Airships Are Set To Revolutionise Science

Airships can patrol the upper atmosphere, monitoring the ground or peering at the stars for a fraction of the cost of satellites, according to a new report. All that’s needed is a prize to kickstart innovation.

The Naval Air Engineering Station in Lakehurst, New Jersey, must be one of the most famous airfields in the world. If you’ve ever watched the extraordinary footage of the German passenger airship Hindenburg catching fire as it attempted to moor, you’ll have seen Lakehurst. That’s where the disaster took place.

### Peter Coles - In the Dark

Is Inflation Testable?

It seems the little poll about cosmic inflation I posted last week with humorous intent has ruffled a few feathers, but at least it gives me the excuse to wheel out an updated and edited version of an old piece I wrote on the subject.

Just over thirty  years ago a young physicist came up with what seemed at first to be an absurd idea: that, for a brief moment in the very distant past, just after the Big Bang, something weird happened to gravity that made it push rather than pull.  During this time the Universe went through an ultra-short episode of ultra-fast expansion. The physicist in question, Alan Guth, couldn’t prove that this “inflation” had happened nor could he suggest a compelling physical reason why it should, but the idea seemed nevertheless to solve several major problems in cosmology.

Three decades later, Guth is a professor at MIT and inflation is now well established as an essential component of the standard model of cosmology. But should it be? After all, we still don’t know what caused it and there is little direct evidence that it actually took place. Data from probes of the cosmic microwave background seem to be consistent with the idea that inflation happened, but how confident can we be that it is really a part of the Universe’s history?

According to the Big Bang theory, the Universe was born in a dense fireball which has been expanding and cooling for about 14 billion years. The basic elements of this theory have been in place for over eighty years, but it is only in the last decade or so that a detailed model has been constructed which fits most of the available observations with reasonable precision. The problem is that the Big Bang model is seriously incomplete. The fact that we do not understand the nature of the dark matter and dark energy that appears to fill the Universe is a serious shortcoming. Even worse, we have no way at all of describing the very beginning of the Universe, which appears in the equations used by cosmologists as a “singularity”- a point of infinite density that defies any sensible theoretical calculation. We have no way to define a priori the initial conditions that determine the subsequent evolution of the Big Bang, so we have to try to infer from observations, rather than deduce by theory, the parameters that govern it.

The establishment of the new standard model (known in the trade as the “concordance” cosmology) is now allowing astrophysicists to turn back the clock in order to understand the very early stages of the Universe’s history, and hopefully to answer the ultimate question of what happened at the Big Bang itself: how did the Universe begin?

Paradoxically, it is observations on the largest scales accessible to technology that provide the best clues about the earliest stages of cosmic evolution. In effect, the Universe acts like a microscope: primordial structures smaller than atoms are blown up to astronomical scales by the expansion of the Universe. This also allows particle physicists to use cosmological observations to probe structures too small to be resolved in laboratory experiments.

Our ability to reconstruct the history of our Universe, or at least to attempt this feat, depends on the fact that light travels with a finite speed. The further away we see a light source, the further back in time its light was emitted. We can now observe light from stars in distant galaxies emitted when the Universe was less than one-sixth of its current size. In fact we can see even further back than this using microwave radiation rather than optical light. Our Universe is bathed in a faint glow of microwaves produced when it was about one-thousandth of its current size and had a temperature of thousands of degrees, rather than the chilly three degrees above absolute zero that characterizes the present-day Universe. The existence of this cosmic background radiation is one of the key pieces of evidence in favour of the Big Bang model; it was first detected in 1964 by Arno Penzias and Robert Wilson who subsequently won the Nobel Prize for their discovery.
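As a quick back-of-the-envelope check of those numbers (the present-day temperature value is an assumed standard figure, not quoted from the post), the background radiation temperature simply scales inversely with the size of the Universe:

```python
# Minimal sketch: the radiation temperature scales as T ~ 1/a, where a is
# the scale factor ("size") of the Universe relative to today.
T_today = 2.725      # present CMB temperature in kelvin (assumed value)
a = 1.0 / 1000.0     # "about one-thousandth of its current size"
print(T_today / a)   # ~ 2725 K, i.e. "thousands of degrees"
```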

The process by which the standard cosmological model was assembled has been a gradual one, but the latest step was taken by the European Space Agency’s Planck mission. I’ve blogged about the implications of the Planck results for cosmic inflation in more technical detail here. In a nutshell, for several years this satellite mapped the properties of the cosmic microwave background and how it varies across the sky. Small variations in the temperature of the sky result from sound waves excited in the hot plasma of the primordial fireball. These have characteristic properties that allow us to probe the early Universe in much the same way that solar astronomers use observations of the surface of the Sun to understand its inner structure, a technique known as helioseismology. The detection of the primaeval sound waves is one of the triumphs of modern cosmology, not least because their amplitude tells us precisely how loud the Big Bang really was.

The pattern of fluctuations in the cosmic radiation also allows us to probe one of the exciting predictions of Einstein’s general theory of relativity: that space should be curved by the presence of matter or energy. Measurements from Planck and its predecessor WMAP reveal that our Universe is very special: it has very little curvature, and so has a very finely balanced energy budget: the positive energy of the expansion almost exactly cancels the negative energy of gravitational attraction. The Universe is (very nearly) flat.

The observed geometry of the Universe provides a strong piece of evidence that there is a mysterious and overwhelming preponderance of dark stuff in our Universe. We can’t see this dark matter and dark energy directly, but we know it must be there because we know the overall budget is balanced. If only economics were as simple as physics.

Computer Simulation of the Cosmic Web

The concordance cosmology has been constructed not only from observations of the cosmic microwave background, but also using hints supplied by observations of distant supernovae and by the so-called “cosmic web” – the pattern seen in the large-scale distribution of galaxies which appears to match the properties calculated from computer simulations like the one shown above, courtesy of Volker Springel. The picture that has emerged to account for these disparate clues is consistent with a Universe dominated by a blend of dark energy and dark matter, in which the early stages of cosmic evolution involved an episode of accelerated expansion called inflation.

A quarter of a century ago, our understanding of the state of the Universe was much less precise than today’s concordance cosmology. In those days it was a domain in which theoretical speculation dominated over measurement and observation. Available technology simply wasn’t up to the task of performing large-scale galaxy surveys or detecting slight ripples in the cosmic microwave background. The lack of stringent experimental constraints made cosmology a theorists’ paradise in which many imaginative and esoteric ideas blossomed. Not all of these survived to be included in the concordance model, but inflation proved to be one of the hardiest (and indeed most beautiful) flowers in the cosmological garden.

Although some of the concepts involved had been formulated in the 1970s by Alexei Starobinsky, it was Alan Guth who in 1981 produced the paper in which the inflationary Universe picture first crystallized. At this time cosmologists didn’t know that the Universe was as flat as we now think it to be, but it was still a puzzle to understand why it was even anywhere near flat. There was no particular reason why the Universe should not be extremely curved. After all, the great theoretical breakthrough of Einstein’s general theory of relativity was the realization that space could be curved. Wasn’t it a bit strange that after all the effort needed to establish the connection between energy and curvature, our Universe decided to be flat? Of all the possible initial conditions for the Universe, isn’t this very improbable? As well as being nearly flat, our Universe is also astonishingly smooth. Although it contains galaxies that cluster into immense chains over a hundred million light years long, on scales of billions of light years it is almost featureless. This also seems surprising. Why is the celestial tablecloth so immaculately ironed?

Guth grappled with these questions and realized that they could be resolved rather elegantly if only the force of gravity could be persuaded to change its sign for a very short time just after the Big Bang. If gravity could push rather than pull, then the expansion of the Universe could speed up rather than slow down. Then the Universe could inflate by an enormous factor (10^60 or more) in next to no time and, even if it were initially curved and wrinkled, all memory of this messy starting configuration would be lost. Our present-day Universe would be very flat and very smooth no matter how it had started out.

But how could this bizarre period of anti-gravity be realized? Guth hit upon a simple physical mechanism by which inflation might just work in practice. It relied on the fact that in the extreme conditions pertaining just after the Big Bang, matter does not behave according to the classical laws describing gases and liquids but instead must be described by quantum field theory. The simplest type of quantum field is called a scalar field; such objects are associated with particles that have no spin. Modern particle theory involves many scalar fields which are not observed in low-energy interactions, but which may well dominate affairs at the extreme energies of the primordial fireball.

Classical fluids can undergo what is called a phase transition if they are heated or cooled. Water, for example, exists in the form of steam at high temperature but it condenses into a liquid as it cools. A similar thing happens with scalar fields: their configuration is expected to change as the Universe expands and cools. Phase transitions do not happen instantaneously, however, and sometimes the substance involved gets trapped in an uncomfortable state in between where it was and where it wants to be. Guth realized that if a scalar field got stuck in such a “false” state, energy – in a form known as vacuum energy – could become available to drive the Universe into accelerated expansion. We don’t know which scalar field of the many that may exist theoretically is responsible for generating inflation, but whatever it is, it is now dubbed the inflaton.
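For readers who like to see the mechanism in symbols, here is a minimal textbook-style sketch (my addition, not from the post): a scalar field trapped with a constant vacuum energy density makes the expansion rate constant, and a constant expansion rate means exponential growth.

```latex
% Friedmann equation when a constant vacuum energy density \rho_{\rm vac} dominates:
H^2 \equiv \left(\frac{\dot a}{a}\right)^2
  = \frac{8\pi G}{3}\,\rho_{\rm vac} = \text{const.}
\quad\Longrightarrow\quad
a(t) \propto e^{Ht}.
% The factor of 10^60 quoted above corresponds to
% N = \ln(10^{60}) \approx 138 e-folds of this exponential growth.
```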

This mechanism is an echo of a much earlier idea introduced to the world of cosmology by Albert Einstein in 1917. He didn’t use the term vacuum energy; he called it a cosmological constant. He also didn’t imagine that it arose from quantum fields but considered it to be a modification of the law of gravity. Nevertheless, Einstein’s cosmological constant idea was incorporated by Willem de Sitter into a theoretical model of an accelerating Universe. This is essentially the same mathematics that is used in modern inflationary cosmology. The connection between scalar fields and the cosmological constant may also eventually explain why our Universe seems to be accelerating now, but that would require a scalar field with a much lower effective energy scale than that required to drive inflation. Perhaps dark energy is some kind of shadow of the inflaton.

Guth wasn’t the sole creator of inflation. Andy Albrecht and Paul Steinhardt, Andrei Linde, Alexei Starobinsky, and many others produced different and, in some cases, more compelling variations on the basic theme. It was almost as if it was an idea whose time had come. Suddenly inflation was an indispensable part of cosmological theory. Literally hundreds of versions of it appeared in the leading scientific journals: old inflation, new inflation, chaotic inflation, extended inflation, and so on. Out of this activity came the realization that a phase transition as such wasn’t really necessary: all that mattered was that the field should find itself in a configuration where the vacuum energy dominated. It was also realized that other theories not involving scalar fields could behave as if they did. Modified gravity theories or theories with extra space-time dimensions provide ways of mimicking scalar fields with rather different physics. And if inflation could work with one scalar field, why not have inflation with two or more? The only problem was that there wasn’t a shred of evidence that inflation had actually happened.

This episode provides a fascinating glimpse into the historical and sociological development of cosmology in the eighties and nineties. Inflation is undoubtedly a beautiful idea. But the problems it solves were theoretical problems, not observational ones. For example, the apparent fine-tuning of the flatness of the Universe can be traced back to the absence of a theory of initial conditions for the Universe. Inflation turns an initially curved universe into a flat one, but the fact that the Universe appears to be flat doesn’t prove that inflation happened. There are initial conditions that lead to present-day flatness even without the intervention of an inflationary epoch. One might argue that these are special and therefore “improbable”, and consequently that it is more probable that inflation happened than that it didn’t. But on the other hand, without a proper theory of the initial conditions, how can we say which are more probable? Based on this kind of argument alone, we would probably never really know whether we live in an inflationary Universe or not.

But there is another thread in the story of inflation that makes it much more compelling as a scientific theory because it makes direct contact with observations. Although it was not the original motivation for the idea, Guth and others realized very early on that if a scalar field were responsible for inflation then it should be governed by the usual rules governing quantum fields. One of the things that quantum physics tells us is that nothing evolves entirely smoothly. Heisenberg’s famous Uncertainty Principle imposes a degree of unpredictability on the behaviour of the inflaton. The most important ramification of this is that although inflation smooths away any primordial wrinkles in the fabric of space-time, in the process it lays down others of its own. The inflationary wrinkles are really ripples, and are caused by wave-like fluctuations in the density of matter travelling through the Universe like sound waves travelling through air. Without these fluctuations the cosmos would be smooth and featureless, containing no variations in density or pressure and therefore no sound waves. Even if it began in a fireball, such a Universe would be silent. Inflation puts the Bang in Big Bang.

The acoustic oscillations generated by inflation have a broad spectrum (they comprise oscillations with a wide range of wavelengths), they are of small amplitude (about one hundred thousandth of the background); they are spatially random and have Gaussian statistics (like waves on the surface of the sea; this is the most disordered state); they are adiabatic (matter and radiation fluctuate together) and they are formed coherently. This last point is perhaps the most important. Because inflation happens so rapidly all of the acoustic “modes” are excited at the same time. Hitting a metal pipe with a hammer generates a wide range of sound frequencies, but all the different modes of the pipe start their oscillations at the same time. The result is not just random noise but something moderately tuneful. The Big Bang wasn’t exactly melodic, but there is a discernible relic of the coherent nature of the sound waves in the pattern of cosmic microwave temperature fluctuations seen in the Cosmic Microwave Background. The acoustic peaks seen in the Planck angular spectrum provide compelling evidence that whatever generated the pattern did so coherently.

There are very few alternative theories on the table that are capable of reproducing these results, but does this mean that inflation really happened? Do they “prove” inflation is correct? More generally, is the idea of inflation even testable?

So did inflation really happen? Does Planck prove it? Will we ever know?

It is difficult to talk sensibly about scientific proof of phenomena that are so far removed from everyday experience. At what level can we prove anything in astronomy, even on the relatively small scale of the Solar System? We all accept that the Earth goes around the Sun, but do we really even know for sure that the Universe is expanding? I would say that the latter hypothesis has survived so many tests and is consistent with so many other aspects of cosmology that it has become, for pragmatic reasons, an indispensable part of our world view. I would hesitate, though, to say that it was proven beyond all reasonable doubt. The same goes for inflation. It is a beautiful idea that fits snugly within the standard cosmological model and binds many parts of it together. But that doesn’t necessarily make it true. Many theories are beautiful, but that is not sufficient to prove them right.

When generating theoretical ideas scientists should be fearlessly radical, but when it comes to interpreting evidence we should all be unflinchingly conservative. The Planck measurements have also provided a tantalizing glimpse into the future of cosmology, and yet more stringent tests of the standard framework that currently underpins it. Primordial fluctuations produce not only a pattern of temperature variations over the sky, but also a corresponding pattern of polarization. This is fiendishly difficult to measure, partly because it is such a weak signal (only a few percent of the temperature signal) and partly because the primordial microwaves are heavily polluted by polarized radiation from our own Galaxy. Polarization data from Planck are yet to be released; the fiendish data analysis challenge involved is the reason for the delay. But there is a crucial target that justifies these endeavours. Inflation does not just produce acoustic waves; it also generates different modes of fluctuation, called gravitational waves, that involve twisting deformations of space-time. Inflationary models connect the properties of acoustic and gravitational fluctuations, so if the latter can be detected the implications for the theory are profound. Gravitational waves produce a very particular form of polarization pattern (called the B-mode) which can’t be generated by acoustic waves, so this seems a promising way to test inflation. Unfortunately the B-mode signal is expected to be very weak and the experience of WMAP suggests it might be swamped by foregrounds. But it is definitely worth a go, because it would add considerably to the evidence in favour of inflation as an element of physical reality.

But would even detection of primordial gravitational waves really test inflation? Not really. The problem with inflation is that it is a name given to a very general idea, and there are many (perhaps infinitely many) different ways of implementing the details, so one can devise versions of the inflationary scenario that produce a wide range of outcomes. It is therefore unlikely that there will be a magic bullet that will kill inflation dead. What is more likely is a gradual process of reducing the theoretical slack as much as possible with observational data, such as is happening in particle physics. For example, we have not yet identified the inflaton field (nor indeed any reasonable candidate for it) but we are gradually improving constraints on the allowed parameter space. Progress in this mode of science is evolutionary not revolutionary.

Many critics of inflation argue that it is not a scientific theory because it is not falsifiable. I don’t think falsifiability is a useful concept in this context; see my many posts relating to Karl Popper. Testability is a more appropriate criterion. What matters is that we have a systematic way of deciding which of a set of competing models is the best when it comes to confrontation with data. In the case of inflation we simply don’t have a compelling model to test it against. For the time being therefore, like it or not, cosmic inflation is clearly the best model we have. Maybe someday a worthy challenger will enter the arena, but this has not happened yet.

Most working cosmologists are as aware of the difficulty of testing inflation as they are of its elegance. There are also those  who talk as if inflation were an absolute truth, and those who assert that it is not a proper scientific theory (because it isn’t falsifiable). I can’t agree with either of these factions. The truth is that we don’t know how the Universe really began; we just work on the best ideas available and try to reduce our level of ignorance in any way we can. We can hardly expect  the secrets of the Universe to be so easily accessible to our little monkey brains.

## March 03, 2014

### arXiv blog

Mathematical Proof Reveals How To Make The Internet More Earthquake-Proof

Decentralised networks are naturally robust against certain types of attack. Now one mathematician says advanced geometry shows how to make them even more robust.

One of the common myths about the internet is that it was originally designed during the Cold War to survive nuclear attack. Historians of the internet are quick to point out that this was not at all one of the design goals of the early network, although the decentralised nature of the system turns out to make it much more robust than any kind of centralised network.

### ZapperZ - Physics and Physicists

Checking On Antimatter
This is a rather nice, short summary on the study of anti-atoms, and in particular, CERN's effort to study the properties of anti-hydrogen and why it is so important.

With a big enough sample of anti-hydrogen, one can make detailed studies of the energy levels that the positron can occupy in its journey around the antiproton. These energy levels have been measured very precisely for hydrogen, and the expectation is that they should be identical in antihydrogen. But we won’t know until we look.
[...]
The symmetry principle which these experiments are designed to test is whether physics, and therefore the whole universe, would look the same if we simultaneously swapped all matter for antimatter, left for right, and backwards in time for forwards in time. This is called a CPT (Charge/Parity/Time) inversion. The Standard Model of physics, and almost all variants on it, require that indeed the universe would be identical after such an inversion.
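For a rough sense of the hydrogen energy levels the quoted passage refers to, here is a minimal Bohr-model sketch (an illustration I am adding, with assumed constants; it is far cruder than the precision spectroscopy the article describes):

```python
# Bohr-model hydrogen levels, E_n = -13.6 eV / n^2.  The same spectrum is
# expected for antihydrogen (a positron around an antiproton) if CPT holds.
RYDBERG_EV = 13.606        # hydrogen binding energy in eV
PLANCK_EV_S = 4.1357e-15   # Planck constant in eV*s

for n in (1, 2, 3):
    print(f"n={n}:  E = {-RYDBERG_EV / n**2:7.3f} eV")

# The heavily studied 1S-2S interval and its corresponding frequency:
e_1s2s = RYDBERG_EV * (1 - 1 / 4)
print(f"1S-2S: {e_1s2s:.2f} eV  ~  {e_1s2s / PLANCK_EV_S:.3e} Hz")
```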
Now pay attention, kids. In physics, even when some of our most cherished theories have been used and are known to be valid, we STILL go out and test many of their predictions. Here, the Standard Model says that antihydrogen should behave the same way as hydrogen. While the Standard Model certainly has been useful, and has been correct in many aspects, we do not simply accept its predictions for the behavior of antihydrogen. We still want to test it! In fact, many physicists are hoping that we see something the Standard Model can't explain, that something "weird" is going on that might give hints of new physics. This is what many of us in this field look gleefully for!

This is how science works. We verify an idea, a theory, etc., but we continue to test its RANGE OF VALIDITY, i.e. how far out does this thing work? It works here, but does it work there? It works when you do this, but does it work when you do that? This is how we expand the boundaries of our knowledge.

Zz.

### astrobites - astro-ph reader's digest

Kepler 2.0

Last May, we learned that the Kepler Space Telescope could no longer go on finding transiting exoplanets as it had since its launch in 2009 due to the failure of a critical reaction wheel used to accurately point the telescope. Although Kepler could no longer carry out its intended mission, many of its powerful capabilities remain intact. The team called for ideas for a second mission for Kepler, and astronomers enthusiastically submitted their plans, which I summarized in this astrobite from last September. Since then, the team has considered these ideas while formulating a new mission for Kepler called “K2”. K2 was discussed at the AAS meeting in January, which was covered in this astrobite. Now, Steve Howell and collaborators have posted on the arXiv a more detailed description of the K2 mission, which I’ll review in this post.

The original Kepler mission observed a single patch of sky, monitoring a pre-selected set of 156,000 stars for the changes in their brightness that indicate the presence of  transiting planets. With Kepler’s reduced pointing capability, the light from the stars would drift across the camera over time, smearing out the signal and reducing the sensitivity of the instrument to detect very minute brightness fluctuations. K2 is designed to minimize this problem by pointing only in the ecliptic (the plane defined by Kepler’s orbit around the Sun), so that the photon pressure from the Sun on the spacecraft is balanced.

But pointing in the ecliptic means that Kepler can no longer focus on a single patch of sky, because at some point in its orbit, Kepler would be pointed towards the Sun (and sunlight would get into the telescope). So for the K2 mission, Kepler will point to a new field of view every 83 days. These 83-day-long periods of time, and their corresponding fields of view, are called “campaigns”. The way that Kepler will reorient to a new field of view for each campaign is shown in Figure 1, and the locations of these fields are shown in Figure 2.

While the campaign fields have been chosen, the specific targets within each field to be monitored have not. K2 is a “community-driven” observatory, so targets will be selected from proposals submitted by anyone in the scientific community. The team expects to observe between 10,000 and 20,000 targets in each campaign.

Figure 1. A diagram of how Kepler will reorient to a new field of view for each campaign. Also shown is how the spacecraft will be balanced against radiation pressure by staying pointed within the ecliptic plane.

Compared to the original mission, K2’s ability to detect transiting planets is reduced in two ways. First, its photometric precision–the ability to detect minute changes in a star’s brightness–is about 4 times worse (although it would have been even worse if the telescope were not confined to point in the ecliptic). Second, the time baseline over which each target can be monitored is drastically shortened due to the necessity of using multiple campaigns. K2 will not be able to detect new planets in the habitable zone around Sun-like stars because the planet’s orbital period would be about one year, compared to an 83-day campaign. M stars will make great targets for K2’s planet search because the habitable zones are much closer to these stars (so habitable planets will have shorter orbital periods), and the stars are smaller, so small planets can still make a detectable dimming effect when transiting, even with K2’s reduced sensitivity. Despite these limitations, K2’s large field of view, its still impressive photometric precision, and its ability to continuously monitor targets with high cadence make it superior to ground-based programs for detecting transiting planets.
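To see why the 83-day campaigns favor M dwarfs, here is a small illustrative calculation using Kepler's third law (the stellar masses and luminosities below are typical assumed values, not numbers from the K2 paper):

```python
import math

def orbital_period_days(a_au, m_star_msun):
    """Kepler's third law in solar units: P[yr]^2 = a[AU]^3 / M[Msun]."""
    return 365.25 * math.sqrt(a_au**3 / m_star_msun)

CAMPAIGN_DAYS = 83

# Crude habitable-zone distance: scale 1 AU by sqrt(L/Lsun) (assumption).
for name, mass, lum in [("Sun-like star", 1.0, 1.0), ("M dwarf", 0.3, 0.01)]:
    a_hz = math.sqrt(lum)
    period = orbital_period_days(a_hz, mass)
    transits = CAMPAIGN_DAYS / period
    print(f"{name:14s} HZ at {a_hz:.2f} AU -> P ~ {period:5.1f} d "
          f"({transits:.1f} transits per campaign)")
```

A transit search generally needs to catch several transits of the same planet, which a single 83-day campaign can only do for short-period planets like those in the habitable zones of M dwarfs.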

The target stars of the original mission were generally far away, and thus fairly faint, but K2 will likely target much closer stars. The planets that will be found by K2 around these nearby stars will thus be much easier to characterize with detailed follow up observations using other telescopes. K2 will also be able to target stellar clusters, which have not yet been thoroughly studied for transiting planets. Young planetary systems could also be studied by K2, from which we could constrain our planet formation and migration theories.

In addition to looking for more transiting planets, K2 can study anything that varies in brightness. This includes the target stars themselves, as well as AGN, supernovae, and gravitational microlensing signals. Of course, the exact science that K2 will do depends on the targets proposed by the community!

While the K2 program does not have official funding support yet from NASA, campaign 0 has recently begun. You can follow the latest on the K2 mission here and here. And, of course, we’ll keep you updated with important developments on astrobites!

Figure 2. The locations in the sky of the fields of view along the ecliptic for the ten proposed K2 campaigns. These span a range of galactic coordinates. Note that field 9 appears out of place because the telescope will be pointed “forward” during this campaign (it will usually point backward relative to its orbital direction).

### Symmetrybreaking - Fermilab/SLAC

'Particle Fever' opens in the US

Particle Fever, a documentary that follows scientists involved in research at the Large Hadron Collider, opens this week in select theaters across the United States.

Wish you could have witnessed the euphoria and excitement rippling through the CERN Control Center when the Large Hadron Collider first turned on? Or been in the room when the discovery of the Higgs boson was announced?

### Quantum Diaries

CDMS result covers new ground in search for dark matter

The Cryogenic Dark Matter Search has set more stringent limits on light dark matter.

Scientists looking for dark matter face a serious challenge: No one knows what dark matter particles look like. So their search covers a wide range of possible traits—different masses, different probabilities of interacting with regular matter.

Today, scientists on the Cryogenic Dark Matter Search experiment, or CDMS, announced they have shifted the border of this search down to a dark-matter particle mass and rate of interaction that has never been probed.

“We’re pushing CDMS to as low mass as we can,” says Fermilab physicist Dan Bauer, the project manager for CDMS. “We’re proving the particle detector technology here.”

Their result, which does not claim any hints of dark matter particles, contradicts a result announced in January by another dark matter experiment, CoGeNT, which uses particle detectors made of germanium, the same material as used by CDMS.

To search for dark matter, CDMS scientists cool their detectors to very low temperatures in order to detect the very small energies deposited by the collisions of dark matter particles with the germanium. They operate their detectors half of a mile underground in a former iron ore mine in northern Minnesota. The mine provides shielding from cosmic rays that could clutter the detector as it waits for passing dark matter particles.

Today’s result carves out interesting new dark matter territory for masses below 6 billion electronvolts. The dark matter experiment Large Underground Xenon, or LUX, recently ruled out a wide range of masses and interaction rates above that with the announcement of its first result in October 2013.

Scientists have expressed an increasing amount of interest of late in the search for low-mass dark matter particles, with CDMS and three other experiments—DAMA, CoGeNT and CRESST—all finding their data compatible with the existence of dark matter particles between 5 billion and 20 billion electronvolts. But such light dark-matter particles are hard to pin down. The lower the mass of the dark-matter particles, the less energy they leave in detectors, and the more likely it is that background noise will drown out any signals.
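A back-of-the-envelope kinematic sketch of that last point (the standard elastic-scattering formula with an assumed typical WIMP speed; none of these numbers come from the CDMS result itself):

```python
# Maximum nuclear-recoil energy for a WIMP of mass m_chi scattering
# elastically off a nucleus of mass m_N (standard non-relativistic formula):
#     E_max = 2 * mu**2 * v**2 / m_N,  with  mu = m_chi*m_N/(m_chi + m_N).
M_GE = 67.6              # germanium nucleus mass in GeV (approx.)
V = 230e3 / 3.0e8        # assumed typical galactic WIMP speed, as a fraction of c

for m_chi in (6.0, 100.0):                     # WIMP masses in GeV
    mu = m_chi * M_GE / (m_chi + M_GE)
    e_max_kev = 2 * mu**2 * V**2 / M_GE * 1e6  # convert GeV to keV
    print(f"{m_chi:5.0f} GeV WIMP -> max recoil ~ {e_max_kev:5.1f} keV")
```

A few-GeV particle deposits only a fraction of a keV in the detector, right where instrumental noise and backgrounds live, which is exactly the difficulty described above.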

Even more confounding is the fact that scientists don’t know whether dark matter particles interact in the same way in detectors built with different materials. In addition to germanium, scientists use argon, xenon, silicon and other materials to search for dark matter in more than a dozen experiments around the world.

“It’s important to look in as many materials as possible to try to understand whether dark matter interacts in this more complicated way,” says Adam Anderson, a graduate student at MIT who worked on the latest CDMS analysis as part of his thesis. “Some materials might have very weak interactions. If you only picked one, you might miss it.”

Scientists around the world seem to be taking that advice, building different types of detectors and constantly improving their methods.

“Progress is extremely fast,” Anderson says. “The sensitivity of these experiments is increasing by an order of magnitude every few years.”

Kathryn Jepsen

### Peter Coles - In the Dark

Lincoln – Green Shoots for Maths and Physics?

I noticed over the weekend that there’s a job being advertised at the University of Lincoln designated Founding Head of the School of Mathematics and Physics. It seems the powers that be at Lincoln University (which is in the Midlands) have decided to set up an entire new activity in Mathematics and Physics. I’m pointing this out not because of any personal connection with the position, but because it’s refreshing to see a new(ish) Higher Education Institute apparently willing to take the plunge and invest in a new venture, particularly because it includes Physics. It wasn’t at all long ago that UK Physics departments were being closed down – the University of Reading being a prominent example, in 2006. I think Reading is thinking of starting up Physics again, in fact. Perhaps these are the green shoots that presage a new spring for Physics in this country? I do hope so.

It won’t be an easy task to start up a new department from scratch in Lincoln: grant funding is tight and the competition for students among established institutions is already so intense that it will be very difficult for a brand new outfit to break through. Nevertheless, I think it’s a praiseworthy initiative and I wish it well.

### Tommaso Dorigo - Scientificblogging

The dyslectic guy with an erection problem...
Did you know about that dyslectic guy with an impotence problem who once came to Fermilab ? He said he'd been advised to go there as he wanted to get a hadron.

### Quantum Diaries

In case you haven’t figured it out already from reading the US LHC blog or any of the others at Quantum Diaries, people who do research in particle physics feel passionate about their work. There is so much to be passionate about! There are challenging intellectual issues, tricky technical problems, and cutting-edge instrumentation to work with — all in pursuit of understanding the nature of the universe at its most fundamental level. Your work can attract global attention and even contribute to Nobel Prizes. It’s a lot of effort put in over long days and nights, but there is also a lot of satisfaction to be gained from our accomplishments.

That being said, a fundamental truth about our field is that not everyone doing particle-physics research will be doing that for their entire career. There are fewer permanent jobs in the field than there are people who are qualified to hold them. It is certainly easy to do the math about university jobs in particular — each professor may supervise a large number of PhD students in his or her career, but only one could possibly inherit that job position in the end. Most of our researchers will end up working in other fields, quite likely in the for-profit sector, and as a field we do need to make sure that they are well-prepared for jobs in that part of the world.

I’ve always believed that we do a good job of this, but my belief was reinforced by a recent column by Tom Friedman in The New York Times. It was based around an interview with the Google staff member who oversees hiring for the company. The essay describes the attributes that Google looks for in new employees, and I couldn’t help but to think that people who work in the large experimental particle physics projects such as those at the LHC have all of those attributes. Google is not just looking for technical skills — it goes without saying that they are, and that particle physicists have those skills and great experience with digesting large amounts of computerized data. Google is also looking for social and personality traits that are also important for success in particle physics.

(Side note: I don’t support all of what Friedman writes in his essay; he is somewhat dismissive of the utility of a college education, and as a university professor I think that we are doing better than he suggests. But I will focus on some of his other points here. I also recognize that it is perhaps too easy for me to write about careers outside the field when I personally hold a permanent job in particle physics, but believe me that it just as easily could have wound up differently for me.)

For example, just reading from the Friedman column, one thing Google looks for is what is referred to as “emergent leadership”. This is not leadership in the form of holding a position with a particular title, but seeing when a group needs you to step forward to lead on something when the time is right, but also to step back and let someone else lead when needed. While the big particle-physics collaborations appear to be massive organizations, much of the day to day work, such as the development of a physics measurement, is done in smaller groups that function very organically. When they function well, people do step up to take on the most critical tasks, especially when they see that they are particularly positioned to do them. Everyone figures out how to interact in such a way that the job gets done. Another facet of this is ownership: everyone who is working together on a project feels personally responsible for it and will do what is right for the group, if not the entire experiment — even if it means putting aside your own ideas and efforts when someone else clearly has the better thing.

And related to that in turn is what is referred to in the column as “intellectual humility.” We are all very aggressive in making our arguments based on the facts that we have in hand. We look at the data and we draw conclusions, and we develop and promote research techniques that appear to be effective. But when presented with new information that demonstrates that the previous arguments are invalid, we happily drop what we had been pursuing and move on to the next thing. That’s how all of science works, really; all of your theories are only as good as the evidence that supports them, and are worthless in the face of contradictory evidence. Google wants people who take this kind of approach to their work.

I don’t think you have to be Google to be looking for the same qualities in your co-workers. If you are an employer who wants to have staff members who are smart, technically skilled, passionate about what they do, able to incorporate disparate pieces of information and generate new ideas, ready to take charge when they need to, feel responsible for the entire enterprise, and able to say they are wrong when they are wrong — you should be hiring particle physicists.

### Matt Strassler - Of Particular Significance

A 100 TeV Proton-Proton Collider?

During the gap between the first run of the Large Hadron Collider [LHC], which ended in 2012 and included the discovery of the Higgs particle (and the exclusion of quite a few other things), and its second run, which starts a year from now, there’s been a lot of talk about the future direction for particle physics. By far the most prominent option, both in China and in Europe, involves the long-term possibility of a (roughly) 100 TeV proton-proton collider — that is, a particle accelerator like the LHC, but with 5 to 15 times more energy per collision.

Do we need such a machine?

The answer is “Yes, Definitely”. Definitely, if human beings are to continue to explore the inner world of the elementary laws of nature with the same level of commitment with which they explore the outer world of our neighboring planets, the nearby stars and their own planets, and distant galaxies far-flung across the universe. If we can send the Curiosity rover to roam around the surface of the Red Planet and beam back pictures and scientific information — if we can send telescopes like Kepler into space whose sole purpose is to look for signs of planets around distant stars — then surely we can build a machine on Earth whose sole purpose is to help us understand the fundamental principles and elementary objects that underlie the natural world. That’s why we built the LHC, and machines before it; and the justification for a 100 TeV machine remains the same.

Definitely, also, if the exploration of the laws of nature is to continue as a healthy research field. We have a large number of experts who know how to build a big particle accelerator. If we were to postpone building such a machine for a generation, we would suffer some of the same problems suffered by the U.S. space program. All sorts of crucial knowledge of the craft of rocket building was lost when the U.S. failed to follow up on its several trips to the Moon. If we have a hiatus of a generation between the current machine and the next, we will find it much more difficult and expensive to build the next one when we finally decide to do it. So it makes sense to maintain continuity, especially if it can be done at reasonable cost.

One thing that’s interesting to keep in mind is that a roughly 100 TeV machine is hardly a stretch for modern technology; it’s not going to be a machine with a significant risk of failure. The Superconducting SuperCollider (SSC), which was to be the U.S. flagship machine and was due to start running in the year 2000 (in which case it would definitely have discovered the Higgs particle many years ago — sadly, the U.S. congress canceled it, after it was well underway, in 1993), would have been a 40 TeV machine. The technological step from 40 TeV to 80 or 120 is not a big one. Moreover, the SSC would have been an easier machine to run than is the LHC, which has to strain with very high collision rates to make up for the fact that its energy per collision is a third of what the SSC would have been capable of. The main challenge for such an accelerator is that it has to be very large — which requires a very long tunnel (over 50 miles/80 km) and a very large number of powerful magnets.
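To get a feel for the "very large" part, here is a rough sizing sketch based on the standard bending-radius relation p ≈ 0.3 B R (the dipole fields and the packing fraction are assumptions for illustration, not design numbers from the post):

```python
import math

def circumference_km(beam_energy_gev, dipole_field_t, dipole_fill=0.8):
    """Tunnel circumference from p ~ 0.3*B*R, crudely scaled up because
    only part of the ring is filled with bending dipoles."""
    radius_m = beam_energy_gev / (0.3 * dipole_field_t)
    return 2 * math.pi * radius_m / dipole_fill / 1e3

print("LHC-like   (7 TeV/beam,  8.3 T):",
      round(circumference_km(7e3, 8.3)), "km")
print("100 TeV pp (50 TeV/beam, 16 T):",
      round(circumference_km(50e3, 16.0)), "km")
```

With assumed 16 tesla dipoles, a 100 TeV machine indeed wants a tunnel of very roughly 80-100 km, consistent with the "over 50 miles/80 km" figure above.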

It’s no wonder the Chinese are interested in potentially building this machine. With an economy growing rapidly enough to catch up with the other great nations of the world in the next decade or two, and with scientific prowess rapidly increasing (see here and here), some in China rightly see a 100 TeV proton-proton collider both as an opportunity to gain all sorts of technical and technological knowledge that they have previously lacked, and to establish themselves among the few nations that can be viewed as scientific superpowers. Yet it will not require them to go far out on a limb with technology that no one has ever attempted at all, and invent whole new methods that don’t currently exist. Moreover, some of the things that would be expensive or politically complex in the U.S. or Europe will be easier in China. They may be able to pay for and construct this machine themselves, with technical advice and personnel from other countries, but without being dependent on other nations’ political and financial challenges.

In fact, there’s another huge potential benefit along the way, even before the 100 TeV machine is built: a “Higgs factory”. One can potentially use this same tunnel to first build an accelerator that smashes electrons and positrons [i.e. anti-electrons] together, at an energy which isn’t that high, but is sufficient to make Higgs particles at a high rate — not as many Higgs particles as the LHC will produce, but in an environment where precise measurements are much easier to make. [Protons are messy, and all measurements in proton-proton collisions are very difficult due to huge collision rates and large backgrounds; electrons and positrons are simple objects and measurements tend to be much more straightforward.  This comes at a cost: it is harder to get collisions at the highest energies physicists would ideally want.]

The value of a Higgs factory is obvious: a no-brainer. The Higgs particle is our main way of gaining insight into the nature of the all-important Higgs field, and moreover the Higgs particle might also, through its possible rare decays, illuminate a currently veiled world of unknown particles and forces. It’s a research effort whose importance no one can deny, and it serves as a technical stepping stone to a 100 TeV collider, complete with the realistic possibility of Nobel Prize-worthy discoveries in the near term. For China, it’s perfect.

Of course, the Chinese aren’t the only ones interested.  My European colleagues, recognizing a good thing when they see it, and with the advantage that they built and ran the LHC, are also considering building such a machine. [Neither the U.S., which is expertly squandering its scientific leadership in many scientific fields (and pushing many of its best scientists toward the Chinese effort), nor Russia, which is busy starting a disastrous invasion of its neighbor, seem able to make any intelligent decisions at the moment, and surely aren't going to be the leaders in such an effort.] For the moment, the scientists involved are all working together.  Over recent years, any particle physicist worth his or her salt (including me) would spend some time at Europe’s CERN laboratory, which hosts the LHC. And now, many young U.S. experts in theoretical particle physics are planning to spend extended time at China’s “Center for the Future of High Energy Physics“. There was a time young Chinese geniuses like T.D. Lee, C.N. Yang and C.S. Wu did Nobel Prize-winning (or -deserving) work in the United States. Soon, perhaps, it will be the other way around.

But what, scientifically, is the justification for this machine?

Why build a 100 TeV collider?

It’s important to distinguish two types of scientific enterprises: exploratory and targeted. Exploratory refers to when you’re doing a search, in a plausible place, for anything unexpected — perhaps for something whose existence you might suspect, but perhaps more broadly. Targeted refers to doing a search or study where you know roughly, or even exactly, what you’re looking for.

Often a targeted enterprise is also exploratory; while looking for one thing, you can always stumble on something else. Many scientific discoveries, such as X-rays, have been made while doing or preparing experiments with a completely different purpose. On the other hand, an exploratory enterprise may not have any targets at all, or at best, only a very vague target. Sometimes we go searching just because we can. When Galileo pointed his first telescopes at the moon and the planets and the stars, he had no idea what he would find; he just knew he had a great opportunity to discover something.

The LHC was built as a clearly targeted machine: its main goal was to find the Higgs particle (or particles) if it (or they) existed, or whatever replaced them if they did not. Well, now we know that one Higgs particle exists, and it resembles the simplest possible type of Higgs particle, which is termed a “Standard Model Higgs”.   But much remains to learn.  Is this Higgs particle really Standard Model-like, not just at the 30% level but at the 3% level and better? Are there other Higgs particles?  Are there other as-yet unknown particles being produced at the LHC? Are there new forces beyond the ones we’re aware of? Other than the detailed study of the new Higgs particle, these questions are mostly exploratory. In short, though the LHC was built as a targeted machine with a near-guarantee of success, its mission has now shifted toward exploration of the unknown, with no guarantee of further discoveries. But it’s also important to understand that a lack of discoveries will be just as important to our understanding of nature as discoveries would be, for reasons I’ll return to in my next post.

Now what about the 100 TeV machine? Will it be a targeted experimental facility, or an exploratory one?

For the moment, the answer is: we don’t know. Currently, there is no clear target; more precisely, there are lots of possible targets, but none that we know could emerge to be a major, central one. But this machine won’t be built and completed for a couple of decades, and things could change dramatically by then. If the LHC discovers something not predicted by the Standard Model (the equations used to describe the known elementary particles and forces), then clarifying this new discovery will become a major target, and possibly the main target, of the 100 TeV machine.

This highlights one of the challenges with large experimental projects. One has to start thinking about them far in advance, long before it’s entirely clear what their precise use will be. When the SSC and the LHC were first proposed, they did have a proposed target — finding the Higgs particle or particles. But if the recently discovered Higgs particle’s mass had been, say, half of what it actually is, it would have been discovered some years before the SSC or LHC were completed… in which case, the target of the SSC and LHC would have significantly shifted. So we have to start considering, proposing, and perhaps even building the 100 TeV machine before it’s completely clear whether it will have a prominent and definite target, or whether it will be mainly exploratory. That ambiguity is something we just have to live with.

In contrast to the 100 TeV machine, which currently has to be viewed as exploratory, the Higgs factory that would precede it in the same tunnel is much more sharply targeted… targeted at detailed study of the Higgs particle. There are some other targeted and exploratory activities that it can be involved in, including more detailed investigation of the Z particle, W particle and top quark, but its main focus is the Higgs.

However, even if no prominent target for the 100 TeV collider shows up before it is built, its justification as an exploratory machine is clear. In quantum field theory, collisions at higher energy and momentum allow you to probe physics at shorter times and distances — for “particles” are really quanta, i.e., ripples in quantum fields, and a higher-energy quantum has a shorter wavelength and a faster frequency. And we’ve learned time and time again that one way (though not the only one) to discover new things about the world is to examine its behavior on shorter times and shorter distances than we’ve previously been capable of. This enterprise has been going on for generations; first microscopes discovered bacteria and other cells; then these were found, with more powerful experiments, to be made of molecules, in turn made from atoms; yet more powerful experiments showed first that the atoms contain electrons and atomic nuclei, then that the nuclei are made from protons and neutrons, and then that these in turn are made from quarks and gluons. All of this has been discovered by probing the world with ever more powerful particle collisions of one form or another. So building a higher energy accelerator is to take another step along a well-trodden path.
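A one-line sketch of the "shorter distances" claim (an order-of-magnitude estimate only; in a proton-proton machine just a fraction of the total collision energy goes into any single parton collision):

```python
# The distance scale probed is roughly hbar*c / E.
HBARC_GEV_M = 1.973e-16     # hbar*c in GeV * metres

for name, e_gev in [("LHC (14 TeV)", 14e3), ("100 TeV collider", 100e3)]:
    print(f"{name:17s} probes down to ~ {HBARC_GEV_M / e_gev:.1e} m")
```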

However, it’s not the only path, nor has it ever been.

Is this the most promising path to explore?

The LHC is still in its adolescence, and we can’t predict its future discoveries. At this point the LHC experiments have collected a few percent of the data they’ll collect over the next decade, and they have done so with proton-proton collisions whose energy is only about 60% of what we expect to see in the next few years. Moreover, even the existing data set, collected in 2011-2012, hasn’t been fully analyzed; this data could still yield discoveries (but only if the experimenters choose to make the relevant measurements.) So we certainly can’t know yet whether the LHC will produce a new target for the 100 TeV machine. If it does, then it will be much clearer what to do next and how to use the 100 TeV machine. If it doesn’t… well, that’s something that deserves a bit more discussion.

Suppose that, after the LHC’s last run, nothing other than the Higgs particle’s been found, with properties that are consistent, to a few percent, with a Standard Model Higgs. While this sounds dull at first glance, it’s actually among the most radical possible outcomes of the LHC. That’s because of the “naturalness puzzle”, which I discussed in some detail in this article. Never before in nature, in any generic context, have we come across a low-mass spin-zero particle (i.e. something like the Higgs particle) without other particles associated with it.  In this sense, the Standard Model is an extraordinarily non-generic theory, at least from our current point of view and understanding.  It will be quite shocking if it completely describes all LHC data.

But maybe it does.  If it does, what does this potentially imply about nature?  And what would be the implications for our future explorations of nature at its most elementary level? I’ll address this issue in my next post.

Filed under: Higgs, LHC News, Other Collider News, Particle Physics, The Scientific Process Tagged: atlas, cms, Higgs, LHC, particle physics

### Tommaso Dorigo - Scientificblogging

The Plot Of The Week - Dark Matter Candidates In Super-CDMS
Two days ago the Super-CDMS dark-matter search released the results of the analysis of nine months of data taking. The experiment has excellent sensitivity to weakly interacting massive particles scattering elastically off the germanium nuclei in the detector.

The detector is composed of fifteen cylindrical 0.6 kg crystals stacked in groups of three, equipped with ionization and phonon detectors that are capable of measuring the energy of the signals. From that the recoil energy can be derived, along with a rough estimate of the WIMP candidate's mass. The towers are kept close to absolute zero in the Soudan mine, where backgrounds from cosmic rays and other sources are very small.
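
As a rough illustration of how the two signals combine, here is a toy sketch of the usual Neganov-Luke bookkeeping in detectors of this kind; the bias voltage and pair-creation energy are generic illustrative numbers, not Super-CDMS's actual operating parameters:

```python
# Toy reconstruction of a recoil energy in a CDMS-style detector, assuming the
# usual Neganov-Luke picture: the measured phonon energy equals the recoil
# energy plus e*V_bias of "Luke" phonons per drifted electron-hole pair.
# The numbers below are generic illustrative values, not Super-CDMS settings.

EPSILON_EH_EV = 3.0   # energy (eV) to create one e-h pair in germanium (approximate)
V_BIAS_V = 4.0        # detector bias voltage in volts (illustrative)

def recoil_energy_kev(phonon_kev, ionization_kevee):
    """Recoil energy = total phonon energy minus the Luke-phonon contribution."""
    n_pairs = ionization_kevee * 1e3 / EPSILON_EH_EV   # number of electron-hole pairs
    luke_kev = n_pairs * V_BIAS_V / 1e3                # e * V_bias per pair, in keV
    return phonon_kev - luke_kev

# Example: a 12 keV phonon signal accompanied by 3 keVee of ionization
print(round(recoil_energy_kev(12.0, 3.0), 1), "keV recoil (toy numbers)")
```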

### The n-Category Cafe

Should Mathematicians Cooperate with GCHQ?

I’ve just submitted a piece for the new Opinions section of the monthly LMS Newsletter: Should mathematicians cooperate with GCHQ? The LMS is the London Mathematical Society, which is the UK’s national mathematical society. My piece should appear in the April edition of the newsletter, and you can read it below.

Here’s the story. Since November, I’ve been corresponding with people at the LMS, trying to find out what connections there are between it and GCHQ. Getting the answer took nearly three months and a fair bit of pushing. In the process, I made some criticisms of the LMS’s total silence over the GCHQ/NSA scandal:

GCHQ is a major employer of mathematicians in the UK. The NSA is said to be the largest employer of mathematicians in the world. If there had been a major scandal at the heart of the largest publishing houses in the world, unfolding constantly over the last eight months, wouldn’t you expect it to feature prominently in every issue of the Society of Publishers’ newsletter?

To its credit, the LMS responded by inviting me to write an inaugural piece for a new Opinions section of the newsletter. Here it is.

Should mathematicians cooperate with GCHQ?

Tom Leinster

One of the UK’s largest employers of mathematicians has been embroiled in a major international scandal for the last nine months, stands accused of law-breaking on an industrial scale, and is now the object of widespread outrage. How has the mathematical community responded? Largely by ignoring it.

GCHQ and its partners have been systematically monitoring as much of our lives as they possibly can, including our emails, phone calls, text messages, bank transactions, web browsing, Skype calls, and physical location. The goal: “collect it all”. They tap internet trunk cables, bug charities and political leaders, disrupt lawful activist groups, and conduct economic espionage, all under the banner of national security.

Perhaps most pertinently to mathematicians, the NSA (GCHQ’s major partner and partial funder) has deliberately undermined internet encryption, inserting a secret back door into a standard elliptic curve algorithm. This can be exploited by anyone sufficiently skilled and malicious — not only the NSA/GCHQ. (See Thomas Hales’s piece in February’s Notices of the AMS.) We may never know what else mathematicians have been complicit in; GCHQ’s policy is not to comment on intelligence matters, which is to say, anything it does.

Indifference to mass surveillance rests partly on misconceptions such as “it’s only metadata”. This is certainly false; for instance, GCHQ has used webcams to collect images, many sexually intimate, of millions of ordinary citizens. It is also misguided, even according to the NSA’s former legal counsel: “metadata absolutely tells you everything about somebody’s life”.

Some claim to be unbothered by the recording of their daily activities, confident that no one will examine their records. They may be right. But even if you feel that way, do you want the secret services to possess such a powerful tool for chilling dissent, activism, and even journalism? Do you trust an organization operating in secret, and subject to only “light oversight” (a GCHQ lawyer’s words), never to abuse that power?

Mathematicians seldom have to face ethical questions. But now we must decide: cooperate with GCHQ or not? It has been suggested that mathematicians today are in the same position as nuclear physicists in the 1940s. However, the physicists knew they were building a bomb, whereas mathematicians working for GCHQ may have little idea how their work will be used. Colleagues who have helped GCHQ in the past, trusting that they were contributing to legitimate national security, may justifiably feel betrayed.

At a bare minimum, we as a community should talk about it. Sasha Beilinson has proposed that working for the NSA/GCHQ should be made “socially unacceptable”. Not everyone will agree, but it reminds us that we have both individual choice and collective power. Individuals can withdraw their labour from GCHQ. Heads of department can refuse staff leave to work for GCHQ. The LMS can refuse GCHQ’s money.

At a bare minimum, let us acknowledge that the choices are ours to make. We are human beings before we are mathematicians, and if we do not like what GCHQ is doing, we do not have to cooperate.

I had a 500-word limit, so I omitted a lot. Here are the facts on the LMS’s links with GCHQ, as stated to me by the LMS President Terry Lyons:

The Society has an indirect relationship with GCHQ via a funding agreement with the Heilbronn Institute, in which the Institute will give up to £20,000 per year to the Society. This is approximately 0.7% of our total income. This is a recently made agreement and the funding will contribute directly to the LMS-CMI Research Schools, providing valuable intensive training for early career mathematicians. GCHQ is not involved in the choice of topics covered by the Research Schools.

So, GCHQ’s financial support for the LMS is small enough that declining it would not make a major financial impact.

I hope the LMS will make a public statement clarifying its relationship with GCHQ. I see no argument against transparency.

Another significant factor (which Lyons alludes to above and is already a matter of public record) is that GCHQ is a funder of the Heilbronn Institute, which is a collaboration between GCHQ and the University of Bristol. I don’t know that the LMS is involved with Heilbronn beyond what’s mentioned above, but Heilbronn does seem to provide an important channel through which (some!) British mathematicians support the secret services.

Finally, I want to make clear that although I think there are some problems with the LMS as an institution, I don’t blame the people running it, many of whom are taking time out of extremely busy schedules for the most altruistic reasons. As I wrote to one of them:

I’m genuinely in awe of the amount that you […] give to the mathematical community, both in terms of your selflessness and your energy. I don’t know how you do it. Anything critical I have to say is said with that admiration as the backdrop, and I hope I’d never say anything of the form “do more!”, because to ask that would be ridiculous.

Rules for commenting here  I’ve now written several posts on this and related subjects (1, 2, 3, 4). Every time, I’ve deleted some off-topic comments — including some I’ve enjoyed and agreed with heartily. Please keep comments on-topic. In case there’s any doubt, the topic is the relationship between mathematicians and the secret services. Comments that stray too far from this will be deleted.

## March 02, 2014

### Andrew Jaffe - Leaves on the Line

Nearly a decade ago, blogging was young, and its place in the academic world wasn’t clear. Back in 2005, I wrote about an anonymous article in the Chronicle of Higher Education, a so-called “advice” column admonishing academic job seekers to avoid blogging, mostly because it let the hiring committee find out things that had nothing whatever to do with their academic job, and reject them on those (inappropriate) grounds.

I thought things had changed. Many academics have blogs, and indeed many institutions encourage it (here at Imperial, there’s a College-wide list of blogs written by people at all levels, and I’ve helped teach a course on blogging for young academics). More generally, outreach has become an important component of academic life (that is, it’s at least necessary to pay it lip service when applying for funding or promotions) and blogging is usually seen as a useful way to reach a wide audience outside of one’s field.

So I was distressed to see the lament — from an academic blogger — “Want an academic job? Hold your tongue”. Things haven’t changed as much as I thought:

… [A senior academic said that] the blog, while it was to be commended for its forthright tone, was so informal and laced with profanity that the professor could not help but hold the blog against the potential faculty member…. It was the consensus that aspiring young scientists should steer clear of such activities.

Depending on the content of the blog in question, this seems somewhere between a disregard for academic freedom and a judgment of the candidate on completely irrelevant grounds. Of course, it is natural to want the personalities of our colleagues to mesh well with our own, and almost impossible to completely ignore supposedly extraneous information. But we are hiring for academic jobs, and what should matter are research and teaching ability.

Of course, I’ve been lucky: I already had a permanent job when I started blogging, and I work in the UK system which doesn’t have a tenure review process. And I admit this blog has steered clear of truly controversial topics (depending on what you think of Bayesian probability, at least).

### John Baez - Azimuth

Network Theory I

Here’s a video of a talk I gave last Tuesday—part of a series. You can see the slides here:

One reason I’m glad I gave this talk is because afterwards Jamie Vicary pointed out some very interesting consequences of the relations among signal-flow diagrams listed in my talk. It turns out they imply equations familiar from the theory of complementarity in categorical quantum mechanics!

This is the kind of mathematical surprise that makes life worthwhile for me. It seemed utterly shocking at first, but I think I’ve figured out why it happens. Now is not the time to explain… but I’ll have to do it soon, both here and in the paper on control theory that Jason Erbele and I are writing.

• Brendan Fong, A compositional approach to control theory.

### The n-Category Cafe

Network Theory Talks at Oxford

One of my dreams these days is to get people to apply modern math to ecology and biology, to help us design technologies that work with nature instead of against it. I call this dream ‘green mathematics’. But this will take some time to reach, since living systems are subtle, and most mathematicians are more familiar with physics.

So, I’ve been warming up by studying the mathematics of chemistry, evolutionary game theory, electrical engineering, control theory and information theory. There are a lot of ideas in common to all these fields, but making them clear requires some category theory. I call this project ‘network theory’. I’m giving some talks about it at Oxford.

(This diagram is written in Systems Biology Graphical Notation.)

Here’s the plan:

#### Network Theory

Nature and the world of human technology are full of networks. People like to draw diagrams of networks: flow charts, electrical circuit diagrams, signal-flow graphs, Bayesian networks, Feynman diagrams and the like. Mathematically minded people know that in principle these diagrams fit into a common framework: category theory. But we are still far from a unified theory of networks. After an overview, we will look at three portions of the jigsaw puzzle in three separate talks:

I. Electrical circuits and signal-flow graphs.

II. Stochastic Petri nets, chemical reaction networks and Feynman diagrams.

III. Bayesian networks, information and entropy.

All these talks will be in Lecture Theatre B of the Computer Science Department—you can see a map here, but the entrance is on Keble Road. Here are the times:

• Friday 21 February 2014, 2 pm: Network Theory: overview. Also available on YouTube.

• Tuesday 25 February, 3:30 pm: Network Theory I: electrical circuits and signal-flow graphs. Also available on YouTube.

• Tuesday 4 March, 3:30 pm: Network Theory II: stochastic Petri nets, chemical reaction networks and Feynman diagrams.

• Tuesday 11 March, 3:30 pm: Network Theory III: Bayesian networks, information and entropy.

I thank Samson Abramsky, Bob Coecke and Jamie Vicary of the Computer Science Department for inviting me, and Ulrike Tillmann and Minhyong Kim of the Mathematical Institute for helping me get set up. I also thank all the people who helped do the work I’ll be talking about, most notably Jacob Biamonte, Jason Erbele, Brendan Fong, Tobias Fritz, Tom Leinster, Tu Pham, and Franciscus Rebro.

Ulrike Tillmann has also kindly invited me to give a topology seminar:

#### Operads and the Tree of Life

Trees are not just combinatorial structures: they are also biological structures, both in the obvious way but also in the study of evolution. Starting from DNA samples from living species, biologists use increasingly sophisticated mathematical techniques to reconstruct the most likely “phylogenetic tree” describing how these species evolved from earlier ones. In their work on this subject, they have encountered an interesting example of an operad, which is obtained by applying a variant of the Boardman–Vogt “W construction” to the operad for commutative monoids. The operations in this operad are labelled trees of a certain sort, and it plays a universal role in the study of stochastic processes that involve branching. It also shows up in tropical algebra. This talk is based on work in progress with Nina Otter.

I’m not sure exactly where this will take place, but probably somewhere in the Mathematical Institute, shown on this map. Here’s the time:

• Monday 24 February, 3:30 pm, Operads and the Tree of Life.

If you’re nearby, I hope you can come to some of these talks — and say hi!

(This diagram was drawn by Darwin.)

### Lubos Motl - string vacua and pheno

Gross vs Strassler: Gross is right
I was told about Matt Strassler's 50-minute talk at JoeFest (click for different formats of the video/audio/slides) and his verbal exchange with David Gross that begins around 35:00.

Matt's talk is pretty nice, touching on technical things like the Myers effect, pomerons etc., but also reviewing his work with Joe Polchinski and giving Joe some homework exercises all the time. Matt said various things about the effective field theory's and/or string theory's inability to solve the hierarchy problem even with the anthropic bias taken into account. He would be distinguishing the existence of hierarchies from the lightness of the Higgs in a way that I didn't quite find logical.

They were thought-provoking comments but I just disagree about the basic conclusions. He can't pinpoint any contradiction in these matters because the QFT framework doesn't tell us which QFT is more likely – it goes beyond the domain of questions that an effective QFT may answer. And even the rules to extract such a probability distribution of the vacua from string theory are unknown. If there are no predictions about a particular question – even if it is a "pressing" question like that – there can't be contradictions.

But the main conflict arose due to Matt's vague yet unusual and combative enough comments about the value of the 100-TeV collider.

He would say it could be a bad idea to put all our eggs into the basket of this collider planned for the longer term. The reasons? Similar to the luminiferous aether. Michelson was trying to find the aether wind, which Matt says was misguided, so there should have been other experiments.

Unfortunately, he didn't say what those were supposed to be.

David Gross' reaction made it very clear that they disagree not only about the 100-TeV collider but also about the right strategy and interpretation of hypotheses and their testing of the late 19th century, i.e. about the Michelson issue. Previously, Matt Strassler would say lots of weird things such as "the prediction of new physics at the LHC is the only prediction one can deduce from string theory".

This is clearly wrong. No one can deduce anything like that. Effective field theories with some extra assumptions about the distribution of parameters could perhaps lead you to guess that particles with masses comparable to the Higgs may exist. But these are not conclusions of QFT itself. And in string theory, such "predictions" are even more impossible because string theory has no adjustable continuous dimensionless parameters. That means that there are no "natural distributions" on the space of parameters that would follow from string theory, at least as long as we interpret string theory as the currently understood body of knowledge.

Even more qualitatively, there is clearly no derivation that would imply that "string theory is progressive" or "string theory is conservative" when it comes to the amount of new low-energy physics. The latter – conservative string theory – is totally compatible with everything we know. After all, your humble correspondent is not the only one who thinks that string theory is a mostly if not very conservative theory. The claim that it inevitably predicts new things – or that it predicts more new things than possible or real alternatives – is just wrong.

Just to be sure, you may remember that your humble correspondent considers an even larger hadron collider to be the single most meaningful way to progress in experimental particle physics. We march towards the unknown, which means that higher-energy experiments are needed. This relationship is probably true up to the Planck scale. The higher the energies we investigate experimentally, the deeper we penetrate into the realm of the unknown. David Gross and others clearly share the same viewpoint.

Is it possible that the Very Large Hadron Collider will find the Standard Model only and nothing else? Absolutely. That will be a disappointment but physicists will still learn something. But if you want to propose alternative experiments, you should know what they are. Some people are looking for the dark matter directly. There are experiments trying to detect axions and other things. Physics seems to have enough money for those – they are not as expensive as the large colliders. If you had another important idea about what should be tested, you should say what it is; otherwise the claim that the colliders are overvalued contradicts the evidence you are able to offer. Matt isn't able to justify any true alternative.

But David Gross really disagreed about Matt's suggestion that the Michelson-Morley experiment wasn't naturally the best thing to do at that time. Well, it was very sensible. Their understanding of the origin of the electromagnetic fields within classical physics implied the aether wind, so they were logically and justifiably trying to find it. More generally, these interferometer-based experiments were a "proxy" for tests of many or all conceivable phenomena whose strength is proportional to $$v/c$$ or its powers, i.e. effects that become strong when speeds approach the speed of light. They chose an experimentally smart representative of the "tests of relativistic physics", as we could call it decades later.
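
For concreteness, the aether-wind signal Michelson and Morley were chasing is second order in $$v/c$$; a short numerical sketch with textbook-style values (arm length and wavelength chosen for illustration) shows how small the expected fringe shift was:

```python
# Expected fringe shift in a Michelson-Morley-style interferometer if an aether
# wind of speed v existed: delta_N ~ 2 * L * v**2 / (lambda * c**2).
# Values chosen to be close to the 1887 setup, for illustration only.

c = 3.0e8       # speed of light, m/s
v = 3.0e4       # Earth's orbital speed, m/s (v/c ~ 1e-4)
L = 11.0        # effective arm length, m
lam = 5.5e-7    # wavelength of the light, m

delta_N = 2 * L * v**2 / (lam * c**2)
print(f"v/c = {v/c:.1e}, expected shift ~ {delta_N:.2f} fringes")
# The effect is second order in v/c, hence tiny, which is exactly why an
# interferometer, rather than any cruder measurement, was needed.
```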

But in Michelson's time, there was no relativity. People just didn't know it yet. And Michelson's experiments happened to play just a minor role in Einstein's theoretical investigations. But as far as people who relied on the experimental data rather than Einstein's ingenuity are concerned, it was completely logical to study the aether wind. It was really a critical experiment that would have kickstarted relativity had Einstein paid more attention to the experiments, or had another physicist who was well aware of the experiments possessed a little bit more of Einstein's ingenuity.

When relativity was discovered, it became clear that the aether wind doesn't exist but lots of other relativistic effects (corrections to Newton's physics suppressed by powers of $$v/c$$) do exist. Theorists like Einstein were crucial to making the progress in 1905 etc., but when it comes to the experiments, Michelson's experiments in the 1880s were really the optimum thing to do. They were even lucky because they revealed a particular situation where the right (now: relativistic) physics not only deviates from Newton's physics but where it gives a very simple result (the speed of light is constant, regardless of the speeds of sources or observers).

So just like David Gross and others, I think that Matt is just wrong even in the case of Michelson.

Nati Seiberg would also criticize Matt Strassler. Seiberg says that Matt's doubts are not only in contradiction with the philosophy of high-energy physics; they are in contradiction with reductionism itself, which has worked for centuries. The further you go to shorter distances, the more detailed understanding of the physics you may acquire. Why should it break down now? Matt says that quantum gravity is an example where shorter distances fail to allow you to study more detailed physics. Seiberg correctly replies that this is right, but quantum gravity is very far away. Strassler says that this comment by Seiberg might be right or wrong.

Well, I do think that quantum gravity is "probably" at the usual 4D Planck scale or nearby, roughly at the conventional GUT scale or higher. It is also hypothetically possible that quantum gravity kicks in at nearby energies, near 1 TeV. But this scenario predicts *exactly* what Matt Strassler attributes to string theory in general. This scenario implies significant deviations from the Standard Model at the LHC energies. It is really excluded experimentally. String theory is not excluded but quantum gravity at 1 TeV is excluded. I don't know why Matt is getting these basic things backwards.

Nima Arkani-Hamed addressed a question touched by Matt, "why a light Higgs and not technicolor". This question has an answer. As Nima decided with Savas after some scrutiny, technicolor models with light fermions are inevitably tuned to 1-in-a-million or worse because the light fermion masses require us to introduce some running that easily and generically creates new exponential hierarchies between the electroweak scale and the QCD scale, and related things. So SUSY with some tuning for a scalar is still less fine-tuned than technicolor. And if the electroweak symmetry is broken by a strong force, there are no baryons – just neutrinos.

Nima also defends the 100-TeV collider. No one is really suggesting to put all eggs into one basket; people are thinking about and building many, many experiments. Going to high energies is still a very important thing for many reasons.

Matt replies to Nati that "this time is different" because for the first time, the Standard Model can be the "whole story" (he overlooked gravity, but the Standard Model is a potentially complete theory of non-gravitational physics), so there is no reason to think that the new discoveries will come soon. Despite my expectations that new physics does exist below the scale of a few TeV, I agree with that. Strassler also says – and it is pretty much equivalent to the previous sentence – that naturalness and reductionism are not related in any direct way. I agree with that, too: there may be big hierarchies and deserts but reductionism still holds.

David Gross says that the naturalness isn't a prediction; it is a strategy. I completely agree with him (and I have written down this point many times), so let me please try to present this claim in my words, assuming that David Gross would fully subscribe to them, too. Naturalness boils down to a probability distribution on the space of parameters which we can use to think that certain values or patterns are "puzzling" because they are "unnatural" – which means "unlikely" according to this probability distribution. And that's why we focus on them; they are likely to hide something we don't know yet but we should know. At the end, the complete theory makes all these effects in Nature natural (in a more general sense) but because they look unnatural according to an incomplete theory, these effects in Nature are likely to hold a key for a new insight that changes the game, an insight by which the new theory significantly differs from the current incomplete theory. Naturalness cannot be tested quantitatively, however.
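
To make the "unlikely" part concrete, here is a toy numerical cartoon (my own illustration, not a calculation in any specific model): under a flat prior, how often do two independent order-one contributions cancel down to one part in a thousand?

```python
import random

# Cartoon of the naturalness logic: if two independent O(1) contributions must
# cancel to give a result 1000 times smaller than either, how often does that
# happen "by accident" under a flat prior? A toy illustration, not a model.

random.seed(1)
trials, hits, tuning = 1_000_000, 0, 1e-3
for _ in range(trials):
    a = random.uniform(-1.0, 1.0)   # one O(1) contribution (say, a bare mass-squared)
    b = random.uniform(-1.0, 1.0)   # another O(1) contribution (say, a quantum correction)
    if abs(a + b) < tuning:
        hits += 1

print(f"fraction of draws with |a+b| < {tuning}: {hits/trials:.1e}")   # roughly 1e-3
# Under this prior, a per-mille cancellation is itself a per-mille accident:
# the sense in which a light Higgs looks "unlikely" without something new.
```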

Gross said that our inability to calculate something is a problem. I completely agree with that and I add that this problem is worse in the case of parameters that seem parameterically smaller than the expectations because this gap suggests that there is some "big piece of meat" we are missing that changes the story substantially, not just some generic obscure calculation that spits out some numbers of order one. That's where naturalness directs us, I add.

Matt is often drowning in the sea of vagueness – this is something we know from the discussions with him on the blogosphere, too. He tries to say something extremely unusual while at the same time claiming that he isn't saying anything nontrivial at all. You just can't have it both ways, Matt. In this case, he is saying that we're not spending our time wisely – it's focused too narrowly. Except that he never says in which direction one might, or should, broaden the interest or the work.

Someone says he finds it frustrating that the reach for gluinos will only be doubled from 1 TeV to 2 TeV in 2015, not too big a difference. A reason to like a higher-energy machine.

Steve Giddings also points out a bug in Matt's logic concerning reductionism. Even if reductionism (meaning the need to study higher energies) were ending at X TeV, we would clearly need to go slightly above this level to find out that the reductionism fails. Finally, Matt proposes a loophole. Maybe there are extremely light and extremely weakly coupled new effects somewhere, so going to higher energies doesn't help us. Great. So what should we measure instead of the larger collider data?

David Gross says that dark matter is an example of that and Matt says that this makes his (Matt's) case stronger because according to many dark-matter models, one can't discover the new physics by going to higher energies. Well, right, it's plausible. But the difference is that there are "very many" such possible directions. Going to high energy is to increase the value of a quantity that is universal for all of physics – energy (or the inverse distance). Going to study very weakly coupled things means to go in the direction of lowering every conceivable coupling constant anywhere and there are just too many. We may try. We should try those cases that are justified by some arguments. But it is simply not true that any single march towards higher sensitivity in some particular coupling constant of some particular interaction seems to be as important as our ability to go to higher energies. There is only one energy and it's the king; there are way too many coupling constants and each of them seems less fundamental and less universal than energy. So I don't really agree with Matt on this change of the bias, either – unless he tells us the particular coupling constant or experiment for which it makes sense to go to "much better than considered" sensitivities.

Maybe Matt would propose to build a 1 GeV collider with the luminosity increased 1 billion times? Perhaps it could make sense for some potential possibilities. But he should at least propose such a thing explicitly instead of saying that others are narrow-minded just because they are doing everything that people have conceived so far.

At the very end, Joe Polchinski calmed all bitterness and said that Matt was one of those young people who come to Joe's office and pretty much solve a problem in the confining gauge theory, getting it 100% right on the field-theory side and 80% right on the string-theory side, so Joe added the remaining 20%. Joe improved the flattering joke by saying that this paper was never submitted for publication because it didn't meet Matt's standards LOL. Matt says it's not really true. Joe also says that he didn't really deserve his PhD but with Matt's help and 15 years later, he had finally solved his thesis problem. ;-)

### Sean Carroll - Preposterous Universe

Decennial

Almost forgot again — the leap-year thing always gets me. But I’ve now officially been blogging for ten years. Over 2,000 posts, generating over 57,000 comments. I don’t have accurate stats because I’ve moved around a bit, but on the order of ten million visits. Thanks for coming!

Nostalgia buffs are free to check out the archives (by category or month) via buttons on the sidebar, or see the greatest hits page. Here are some of my personal favorites from each of the past ten years:

## March 01, 2014

### Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

Einstein’s unfinished symphony in the media

Our recent discovery of an unpublished model of the cosmos by Albert Einstein (see last post or here for a preprint of our paper) is receiving a lot of media attention, which is very humbling. First off the mark was Davide Castelvecchi with a very nice article in Nature. Davide's article was quickly reproduced in various outlets, from Scientific American here to the Huffington Post. Trawling the internet, I see newspaper and magazine articles describing our discovery in a dozen languages. It's nice to see historical material receiving this sort of attention; I guess everyone loves an Einstein story.

I'm also intrigued that it was the traditional media that picked up the story – with the exception of Peter Woit, no-one in the blogosphere seemed to notice our preprint or even a blogpost I wrote describing our paper. Perhaps we bloggers need the imprimatur of respected print journals more than we care to admit!

I notice that one slightly misleading point in the electronic version of the Nature article is getting repeated everywhere. It's probably not quite correct to frame Einstein's attempt at a steady-state model of the cosmos in terms of a resistance to 'big bang' theories; there is no reference to the problem of origins in Einstein's manuscript. Indeed, one of the most interesting aspects of the manuscript is that it appears to have been written in early 1931, at a time when the first tentative astronomical evidence for an expanding universe was emerging but the issue of an explosive beginning for the cosmos had yet to come into focus (e.g. the great debate between Eddington and Lemaitre later in 1931). It's interesting that the initial mention in Nature of resistance to 'big bang' theories is repeated in almost all other outlets; one can't help wondering how many science journalists read our abstract. An honorable exception here is John Farrell at Forbes Magazine. John certainly noticed the discrepancy, and no wonder – John has written an excellent book on Lemaitre.

All in all, it’s been a lot of fun so far. I’m getting quite a few emails from distinguished colleagues pointing out that Einstein’s model is trivial because it didn’t work, which is of course true. However, our view is that what Einstein is trying to do is very interesting from a philosophical point of view  – and what is even more interesting is that he apparently abandoned the project when he realised that a consistent steady-state model would require an amendment to the field equations. In short, it seems the Great Master conducted an internal debate between steady-state and evolving models of the cosmos decades before the rest of the community…

Update

There is a very nice video describing our discovery here.

### Clifford V. Johnson - Asymptotia

LA Phil Rock Star!
When calling to mind the Los Angeles Philharmonic, everyone's (and all the posters') focus is on Gustavo Dudamel, (or, the Dude, as I call him), all unruly hair and visible enthusiasm and so forth, and that's great. He's an excellent conductor. However, one of the unsung (as far as I know*) visibly spectacular performers of the LA Philharmonic is the excellent principal viola player whose name I do not know [update: see below*] who puts on the most remarkable physical performance every time I go (and presumably those other times too). Actually, the violist who sits next to her is also remarkable, since she manages without being distracted by her neighbour to maintain a very upright and solid, firmly planted, legs wide stance, in part providing a canvas upon which the viola player I first mentioned can splash bright splashes of movement all over the place! She rocks, sways, jerks, and contorts (sometimes even during quiet slow bits)- doing the craziest things with her legs, head, and bow arm, and so much of the time looks like she is about to spectacularly fall off her chair and wipe out at least half the viola section! This is why her colleague right next to her is also remarkable, as she acts as this wonderful un-distractable "straight man" to the physical pyrotechnics helping make them all the more remarkable by contrast. Last night I tried to capture some of the energy of the hyper-energetic viola player in a quick sketch (during [...]

### Clifford V. Johnson - Asymptotia

Take Part in the Festival!

### Jester - Resonaances

Weekend Plot: wimp race
This weekend's plot encompasses almost the entire field of direct detection of WIMP dark matter:

It shows the existing and projected limits on the scattering cross section of dark matter on nucleons. LUX -- a 370 kg xenon detector -- currently holds the leader position for dark matter masses above 6 GeV and promises to improve the limits by another factor of a few next year. The Xenon collaboration on the other side of the Atlantic is already preparing a nuclear response in the form of a 3 ton detector, to which LUX will retaliate with a 5 ton Led Zeppelin, or maybe LUX-Zeplin. Meanwhile, the SuperCDMS experiment will secure a monopoly in the low-mass region. But the arms race cannot go on forever, as direct detection experiments will inevitably hit the neutrino wall. That is to say, they will reach sufficient sensitivity to observe nuclear recoils due to elastic scattering of solar and atmospheric neutrinos. That will constitute an irreducible background to dark matter searches (unless directional detection techniques are developed). And so it'll all come to a bitter end: sometime in the next decade WIMP detection experiments will be downgraded to neutrino observatories.

The plot is borrowed from this talk, which itself borrowed it from somewhere.

### Tommaso Dorigo - Scientificblogging

Death Of The Dijet Anomaly
Do you remember the CDF Dijet bump at 145 GeV? In 2010, CDF published a paper that showed how the same data sample of W + jet events where they had previously isolated the "single-lepton" WW+WZ signal also presented an intriguing excess of events in the dijet mass distribution, in a region where the background – dominated by QCD radiation produced in association with a W – fell smoothly. That signal generated some controversy within the collaboration, and a lot of interest outside of it. It could be interpreted as some signal of a new technicolor resonance!

## February 28, 2014

### ZapperZ - Physics and Physicists

"Dropleton" Makes News
I've given up on trying to figure out why certain things from science make the news, while others don't. My feeble guess would be that a good, catchy name or phrase often can captivate a news reporter or agency more than having an actual importance.

Not that I'm implying the "dropleton" is not important. After all, it made the cover of this week's Nature! Still, what makes the Los Angeles Times take note of it? I think it is a combination of the name and the sleek image on Nature's cover. Still, I don't think people who read the LA Times article on this thing would know what it is and why it is important enough that it made the cover. Besides, I don't think they would care.

It isn't often that a "new quasiparticle" makes the news. I probably won't see another one again in my lifetime, I would think.

Zz.

### astrobites - astro-ph reader's digest

How Green Can a Planet in a Resonant Orbit Be?

Title: Photosynthetic Potential of Planets in 3:2 Spin Orbit Resonances
Authors: S.P. Brown, A.J. Mead, D.H. Forgan, J.A. Raven, C.S. Cockell
First Author’s Institution: UK Centre for Astrobiology, School of Physics and Astronomy, University of Edinburgh
Paper Status: Accepted for publication in the International Journal of Astrobiology

When looking for exoplanetary homes for life, planets around M-type stars show a lot of potential. Although these dwarf stars are smaller and less luminous than our sun, their lifespans are much longer. A dimmer star means that the potential habitable zone (HZ) is much closer in to the star. Whereas Earth orbits 1 AU (roughly 93 million miles) from the sun, the HZ around an M-dwarf is anywhere from one tenth to even a few hundredths of that distance. Planets this close to their stars have shorter orbital periods, making them easier to find by our current planet-hunting methods.
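
For a rough sense of those numbers, here is a small sketch (illustrative only, assuming simple inverse-square flux scaling and Kepler's third law, with generic M-dwarf parameters rather than values from the paper):

```python
# Crude habitable-zone scaling: a planet receives Earth-like stellar flux at
# d ~ sqrt(L_star / L_sun) AU (inverse-square law). The orbital period then
# follows from Kepler's third law, P^2 = d^3 / M_star (P in yr, d in AU, M in M_sun).
# The stellar values below are typical M-dwarf numbers, chosen for illustration.

def hz_distance_au(luminosity_solar):
    return luminosity_solar ** 0.5

def period_days(distance_au, mass_solar):
    return 365.25 * (distance_au ** 3 / mass_solar) ** 0.5

for name, L_star, M_star in [("Sun-like star", 1.0, 1.0),
                             ("bright M dwarf", 0.05, 0.4),
                             ("faint M dwarf", 0.001, 0.1)]:
    d = hz_distance_au(L_star)
    print(f"{name}: Earth-like flux at ~{d:.2f} AU, period ~{period_days(d, M_star):.0f} days")
```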

Planets so close to their stars are more likely to fall into orbital resonances with their star. In its most extreme case, this means tidal locking, where the planet makes one rotation per orbit, as the moon does around the Earth, resulting in permanent day and night hemispheres on the planet. This leads to extreme heat and cold, although there has been research indicating that heat transfer through the atmosphere and a greenhouse-friendly atmosphere could make tidally locked planets less hostile to the possibility of life.

A less extreme example is a 3:2 resonance, where the planet makes three rotations for every two orbits. (Click through for a helpful .gif.) Mercury is in a 3:2 spin-orbit resonance with the sun. Each Mercury year consists of one and a half Mercury days. Mercury, of course, lacks an atmosphere and has a temperature range from roughly 100 to 700 K, so it's not exactly hospitable. A 3:2 planet around an M star, though, would not be so intensely hot.

Temperature isn’t the only criterion for habitability. Starlight brings more than heat – at least on Earth, the sun’s energy also provides the basis for the food chain, via photosynthesis. Although photosynthesis isn’t necessarily a requirement for life on a planet, it is essential to the biosphere of Earth. This paper investigates the possibility for photosynthetic life to exist on a planet in a 3:2 orbital resonance, where long days are paired with long nights that could starve photosynthetic life out of existence.

First, the authors calculate the flux received over the surface of the planet. On Earth, flux varies by latitude, but works out evenly across longitudes – the equator gets the same amount of energy from the sun in Quito as in Jakarta. But if a 3:2 planet's orbit is eccentric – Mercury's is 0.206; high eccentricity may be what keeps a close-orbit planet from becoming tidally locked, settling it at 3:2 instead – things start to get weird. If the eccentricity is above 0.191, the star will have apparent retrograde motion across the sky. At some longitudes, this even means that the sun will rise a bit from where it has set before reversing to set once again (see figure 1).
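
That 0.191 threshold can be checked numerically, under the assumption that the apparent retrograde motion sets in once the orbital angular velocity at perihelion exceeds the planet's uniform 3:2 spin rate:

```python
# Numerical check of the quoted e ~ 0.191 threshold. Assumption: on a 3:2
# resonant planet the star appears to move retrograde once the orbital angular
# velocity at perihelion, n*sqrt(1+e)/(1-e)**1.5, exceeds the uniform spin
# rate 1.5*n (n = mean motion).

def perihelion_rate_over_n(e):
    """Orbital angular velocity at perihelion, in units of the mean motion n."""
    return (1 + e) ** 0.5 / (1 - e) ** 1.5

# Bisection for the eccentricity at which the two rates match
lo, hi = 0.0, 0.5
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if perihelion_rate_over_n(mid) < 1.5:
        lo = mid
    else:
        hi = mid

print(f"critical eccentricity ~ {0.5 * (lo + hi):.3f}")   # ~0.191, as quoted
```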

Figure 1: The position of the star in the sky over 2 orbits, at longitudes along the equator. On the left, with eccentricity of e=0, the star moves straight across the sky, as on Earth. On the right, for e=0.3, the star exhibits retrograde motion. For longitude of 90 degrees (yellow), the dips above and below the line where the angle of the star is 90 or -90 degrees indicate the sun rising from where it has set, or setting back from where it has just risen.

Figure 2: Integrated energy received over 2 orbits as a function of longitude (x-axis) and latitude (y-axis). Red means more flux, blue and black mean less. From left to right, top to bottom, at eccentricities of e=0, e=0.2, e=0.4, e=0.5, e=0.7, e=0.8.

This retrograde motion means that different longitudes will receive different amounts of stellar energy over the course of the two-orbit cycle. Figure 2 shows the stellar energy received over two orbits as a function of longitude and latitude. In the upper left-hand image, with zero eccentricity, all longitudes are the same, with the most flux in the two bright bands. But once eccentricity is up to .2 (upper right), some longitudes receive much more energy than others, as seen in the bright and dark areas.

The longitudinal variations in the light cycles make habitability – on local and planetary levels – a complicated question. The average stellar flux would be enough for photosynthetic life, but organisms can’t bask in a two-orbit average. They would be subject to dramatic local variations, long nights, constrained habitats, as well as planet-wide results of the 3:2 orbit including the risk of planetary freeze-out.

The authors use the research already conducted into potential conditions on fully tidally locked planets (in a 1:1 resonance) to extrapolate to the similar but less-extreme 3:2 planet. A tidally locked planet has a risk of the runaway freeze-out of its dark-side atmosphere, although atmospheric heat transfer to the dark side could prevent this. Although the darkness on a 3:2 planet is not permanent but simply long, the authors argue that while freeze-out is a possibility, it is an unlikely one.

Even if long nights don’t cause a planet’s atmosphere to freeze, prolonged darkness is still a major challenge for organisms that get their energy from light. In the cross-disciplinary spirit of astrobiology, the authors collect extensive research from biology that provides examples of photosynthetic life surviving extended darknesses through two methods: dormancy and mixotrophy. Many phytoplankton, dinoflagellates, and diatoms have been shown to survive without light for months, years, and even decades. And some green algae are mixotrophic, subsisting on photosynthesis when light is abundant, and switching to digestion by way of phagotrophy when it is not. These examples show the wide-ranging flexibility of photosynthetic life. The precedent for the adaptations needed to flourish in the weird light conditions of a 3:2 planet is certainly there.

What isn't certain, of course, is everything else. The authors make an especially interesting point about one of their necessary simplifications, this one in terms of orbital dynamics. This paper's calculations of a planet's orbit use fixed Keplerian parameters, but from Mercury (our own local 3:2 planet) we know that its perihelion actually precesses, meaning it shifts gradually over time. As perihelion precesses, the zones of high and low flux (figure 2) shift over the planet's surface. A colony of photosynthesizers living in a bright zone could find themselves losing the extra daylight they depend on. The authors posit, though, that since precession takes place on the same general timescales as evolution, precession could drive adaptation and speciation rather than extinction.

While a planet in a 3:2 orbital resonance could be home to photosynthetic life, its biosphere would show very different patterns than ours. Rather than latitudinal bands of climate, a 3:2 planet would have biomes constrained to certain longitudes. As we get closer and closer to the ability to scan exoplanets’ spectra and atmospheres for signs of life, the familiar biosignature of photosynthesis – oxygen waste – remains a promising clue.

### Symmetrybreaking - Fermilab/SLAC

CDMS result covers new ground in search for dark matter

The Cryogenic Dark Matter Search has set more stringent limits on light dark matter.

Scientists looking for dark matter face a serious challenge: No one knows what dark matter particles look like. So their search covers a wide range of possible traits—different masses, different probabilities of interacting with regular matter.

Today, scientists on the Cryogenic Dark Matter Search experiment, or CDMS, announced they have shifted the border of this search down to a dark-matter particle mass and rate of interaction that has never been probed.

### Matt Strassler - Of Particular Significance

Brane Waves

The first day of the conference celebrating theoretical physicist Joe Polchinski (see also yesterday’s post) emphasized the broad impact of his research career.  Thursday’s talks, some on quantum gravity and others on quantum field theory, were given by

• Juan Maldacena, on his latest thinking on the relation between gravity, geometry and the entropy of quantum entanglement;
• Igor Klebanov, on some fascinating work in which new relations have been found between some simple quantum field theories and a very poorly understood and exotic theory, known as Vassiliev theory (a theory that has more fields than a field theory but fewer than a string theory);
• Raphael Bousso, on his recent attempts to prove the so-called “covariant entropy bound”, another relation between entropy and geometry, that Bousso conjectured over a decade ago;
• Henrietta Elvang, on the resolution of a puzzle involving the relation between a supersymmetric field theory and a gravitational description of that same theory;
• Nima Arkani-Hamed, about his work on the amplituhedron, a set of geometric objects that allow for the computation of particle scattering in various quantum field theories (and who related how one of Polchinski’s papers on quantum field theory was crucial in convincing him to stay in the field of high-energy physics);
• Yours truly, in which I quickly reviewed my papers with Polchinski relating string theory and quantum field theory, emphasizing what an amazing experience it is to work with him; then I spoke briefly about my most recent Large Hadron Collider [LHC] research (#1,#2), and concluded with some provocative remarks about what it would mean if the LHC, having found the last missing particle of the Standard Model (i.e. the Higgs particle), finds nothing more.

The lectures have been recorded, so you will soon be able to find them at the KITP site and listen to any that interest you.

There were also two panel discussions. One was about the tremendous impact of Polchinski’s 1995 work on D-branes on quantum field theory (including particle physics, nuclear physics and condensed matter physics), on quantum gravity (especially through black hole physics), on several branches of mathematics, and on string theory. It’s worth noting that every talk listed above was directly or indirectly affected by D-branes, a trend which will continue in most of Friday’s talks.  There was also a rather hilarious panel involving his former graduate students, who spoke about what it was like to have Polchinski as an advisor. (Sorry, but the very funny stories told at the evening banquet were not recorded. [And don't ask me about them, because I'm not telling.])

Let me relate one thing that Eric Gimon, one of Polchinski’s former students, had to say during the student panel. Gimon, a former collaborator of mine, left academia some time ago and now works in the private sector. When it was his turn to speak, he asked, rhetorically, “So, how does calculating partition functions in K3 orientifolds” (which is part of what Gimon did as a graduate student) “prepare you for the real world?” How indeed, you may wonder. His answer: “A sense of pertinence.” In other words, an ability to recognize which aspects of a puzzle or problem are nothing but distracting details, and which ones really matter and deserve your attention. It struck me as an elegant expression of what it means to be a physicist.

Filed under: Quantum Field Theory, Quantum Gravity, String Theory Tagged: DoingScience, fields, QuantumGravity, StringTheory

### The Great Beyond - Nature blog

South Korean Supreme Court confirms Hwang’s sentence

Posted on behalf of David Cyranoski and Soo Bin Park.

The South Korean Supreme Court has upheld a 2010 ruling that sentences disgraced cloning expert Woo Suk Hwang to a one-and-a-half-year prison term for embezzlement and violation of the country’s bioethics law. The term comes with a two-year probation, however, and if Hwang does not commit a crime during that period, he will not have to serve jail time at all. This is the final judgment on a trial that started in 2006 and reached its first verdict in 2009 after 43 hearings involving 60 witnesses.

In a separate judgment also handed down yesterday — one that might be more troubling for Hwang — the Supreme Court overturned an earlier decision that would have forced Seoul National University (SNU) to reinstate Hwang.

Hwang received a doctorate in veterinary medicine from SNU in 1982 and had been a professor there since 1986 (see ‘Timeline of controversy‘). In March 2006, in the wake of the finding that his team had fabricated data in the two human therapeutic cloning papers and Hwang's admission that he had ordered some of that fabrication, SNU fired Hwang. (Other members of the team, including Curie Ahn and BC Lee, retained their positions.)

In 2006 Hwang sued to get his position back. He lost that initial court battle, but in November 2011 a Seoul court of appeals decided in favour of Hwang, saying that firing him was excessive given uncertainty over details of Hwang’s role in the fraud.

The Supreme Court has now annulled that judgment, leaving Hwang without claim to the professorship. The court noted Hwang’s responsibility as leader of the group that fabricated data and his role in ordering some of that fabrication. It added that such discipline is necessary to restore the public’s confidence in the university.

Hwang, who is now the head of an increasingly visible animal-cloning institute in Seoul, did receive happier news earlier this month when the US patent office granted his application for a patent on methods used to create cloned human embryonic stem cells. All of the stem cell lines that Hwang claimed to have made using that process were found to be faked.

### CERN Bulletin

RÉSULTATS DU QUESTIONNAIRE : MÉTHODES D’ANALYSE
Examen quinquennal des conditions d’emploi L’article S V 1.02 du Statut de personnel stipule que « Le Conseil examine et fixe périodiquement les conditions financières et sociales des membres du personnel. Ces examens périodiques consistent en un examen quinquennal général des conditions financières et sociales ; […] » dont la « méthode utilisée par le Conseil [… est] précisée au § I de l'Annexe A 1 ». Puis, en se référant à la partie pertinente de ladite Annexe A 1 on trouve que : « L’objet de l’examen quinquennal est d’assurer que les conditions financières et sociales offertes par l’Organisation permettent à celle-ci d’engager, dans tous ses États membres, et de retenir en son sein les titulaires nécessaires à l’exécution de sa mission. […] Ces titulaires doivent être de la plus haute compétence et de la plus grande intégrité. » Quant au menu d’un tel examen nous avons : « L’examen quinquennal inclut, à titre obligatoire, les traitements de base et, à titre facultatif, toute autre condition financière ou sociale». Le prochain examen quinquennal doit se conclure en décembre 2015, lorsque le Conseil se prononcera sur les propositions mises en avant par le Directeur général après consultation de l'Association du personnel et des États membres dans le cadre du Comité de concertation permanent (CCP) et le Forum tripartite sur les conditions d’emploi (TREF). En fait, la liste des sujets à prendre en considération doit être définie par le Conseil du CERN en Juin 2014, sur proposition du Directeur général. Par conséquent, en automne de l'année n-2 avant la fin de chaque cycle d'examen quinquennal l'Association du personnel organise une enquête auprès du personnel pour connaître son souhait concernant les éléments qu'il aimerait voir examinés sous la rubrique « autre condition financière ou sociale » (sachant que l’examen des traitements de base est obligatoire). Ainsi, en novembre 2013, nous avons organisé une telle enquête, dont les principaux résultats ont été présentés lors de réunions publiques en Février 2014. Les résultats complets sont également disponibles sur le web. Dans Écho 189 nous avons présenté quelques statistiques sur la participation, qui montre que les 55 % du personnel qui ont répondu à l'enquête étaient représentatifs de l'ensemble de la population du personnel du CERN. Dans les numéros futurs d’Écho, nous irons à travers les résultats des différents chapitres de l'enquête en détail. Mais d'abord, dans ce numéro, nous expliquons les méthodes que nous avons utilisées pour analyser les réponses dans les 1383 questionnaires entièrement remplis (Fig. 1). Fig. 1 : Analyse des données du questionnaire Méthode d’analyse mathématique : coefficients de corrélation Cette 1ère méthode tente de répondre à la question « qui pense quoi ? » en calculant, à l'aide du programme Mathematica, les corrélations pour les types de réponses, sur la base des données personnelles disponibles. Pour chaque réponse individuelle toutes les données personnelles (dimension 1) sont couplées avec toutes les réponses possibles dans les différents chapitres de l'enquête, c.-à-d., les priorités, la politique des contrats, le MARS, l’aménagement du temps de travail et la politique familiale (dimension 2). Chacune des réponses possibles pour les dimensions 1 et 2 a reçu un identificateur unique (code de hachage), de sorte que nous obtenons N {= dimension 1*dimension 2} points possibles (x,y) pour une réponse donnée. Cette opération pour toutes les réponses génère environ un million de points. 
Notez que certaines paires spécifiques (x,y) peuvent apparaître plusieurs fois (par exemple, les « hommes » peuvent répondre de la même manière à certaines questions). Par conséquent, nous codons ce nombre d'occurrences dans une 3e dimension, générant ainsi un ensemble tridimensionnel de points pour chaque chapitre dans l'enquête : p3d = (x : hash-info-personnel, y : hash-réponse-mars, z : compteur d’occurrence). Le calcul de l'ensemble de ces points est assez fastidieux, il est calculé une fois et enregistré pour une utilisation ultérieure dans l'analyse. On peut maintenant choisir une paire de données personnelles (par ex. « hommes » et « femmes »), correspondant à deux coupes à travers la surface de p3d, et calculer la similarité de ces deux courbes. Le résultat est le coefficient de corrélation de Pearson, qui mesure la similitude de la forme des courbes en question, à savoir, le degré de similitude dans les réponses données par les « hommes » et « femmes » concernant leur opinion sur le MARS existant, ou leurs souhaits pour une future politique de contrat, etc., pour tous les chapitres de l'enquête. Le tableau 1 montre un exemple d'une telle analyse pour les questions relatives à l'acceptation du système MARS actuel, avec seulement un sous-ensemble de lignes affichées. Ces coefficients de corrélation sont plutôt des mesures relatives de la similitude, et l'information est contenue dans la comparaison entre eux. Si la corrélation est proche de 1 (vert dans le tableau 1), les réponses des deux sous-groupes sélectionnés sont similaires. Plus la valeur s'écarte de 1, plus les opinions divergent (brun clair à brun foncé dans le tableau). Tableau 1 : Exemple de coefficients de corrélation pour les réponses « la formule actuelle du MARS me convient »   Dans le tableau 1, nous constatons que les opinions des hommes et des femmes sont assez semblables (ligne 1). Les titulaires en milieu de carrière (dans leur quarantaine ou cinquantaine) ont des vues similaires (ligne 2), tandis que ceux en début de carrière (avant 30 ans) et les étapes ultérieures (après 60 ans) de leur carrière ont des opinions assez différentes (ligne 3). De même, une augmentation de la différence de niveau de scolarité (lignes 4 et 5) semble résulter en de plus grandes différences d'opinion. Le personnel des filières de carrière E et F ont une vue similaire sur le MARS (ligne 6), ceux de filières de carrière F et G un peu moins (ligne 7), alors qu'il n'y a aucune corrélation entre les opinions des titulaires dans les filières de carrière A et G (ligne 8). Il semble également y avoir une grande différence d'opinion entre les titulaires avec un contrat à durée limitée (LD) et ceux avec un contrat de durée indéterminée (IC) (voir ligne 9). Cette méthode suppose que toutes les réponses sont indépendantes, ce qui est approximativement vrai. En outre, le nombre d'entrées dans un échantillon donné (par ex. les titulaires en filières A de la ligne 8 dans le tableau 1) peut être assez petit pour que la valeur du coefficient de corrélation calculé ai une incertitude non négligeable. Par conséquent, une interprétation informé des coefficients de corrélation devra prendre compte de ces limites. Méthode d’analyse graphique La 2ème méthode d'analyse utilise une représentation graphique des réponses et cherche des différences dans les motifs des réponses en fonction des critères personnels. Pour faciliter l'inspection visuelle, les réponses possibles ont été groupées en trois catégories : «globalement d’accord» (vert dans la Fig. 
Graphical analysis method

The second analysis method uses a graphical representation of the responses and looks for differences in the response patterns as a function of the personal criteria. To ease visual inspection, the possible answers were grouped into three categories: "broadly agree" (green in Fig. 2, grouping "Strongly agree" and "Agree"), "broadly disagree" (red in Fig. 2, grouping "Somewhat disagree" and "Strongly disagree") and "No opinion" (black in Fig. 2). We can then easily spot any variation in the green-red pattern across the different populations. For example, in Fig. 2, which concerns the question "The current MARS scheme suits me", we see that staff under 40, those with a higher technical education, and those in career path C seem to have a somewhat less negative opinion than the other populations. The only significant difference is between staff on LD and on IC contracts. Note that this method provides visual hints only; the differences it reveals must be studied in more detail with the correlation-coefficient method described above.

Fig. 2: Graphical analysis of a response

Structuring the comments

To complement the information obtained from the analysis of the answers, we also studied in detail more than a hundred pages of comments. Each comment in the survey relates to a specific question. To allow a more systematic analysis, we first grouped the questions into "themes" and assigned every comment to one of the chosen themes (e.g. Fig. 3 shows the six themes chosen for the MARS chapter). In addition, for a given theme, we classified each comment in terms of "sentiment" (judgement). We also extracted any proposals or criticisms contained in the comments. Finally, we structured the information graphically by grouping the comments according to whether they answer certain specific questions (Fig. 4).

Fig. 3: Analysis of the comments in the MARS chapter

Fig. 4: Graphical view of the comments on promotion and advancement

Conclusion

Since November 2013, when the survey participation period closed, the members of the Staff Council, and in particular those active in the "Employment Conditions" Commission, have been hard at work analysing the responses using the three-pronged approach of Fig. 1 described in this article. The detailed analysis of the survey responses continues and will be complemented by the comments received during our recent public meetings. You, as a staff member, can also keep providing us with additional input by contacting one of your Staff Council delegates or by sending an e-mail to staff.association@cern.ch. On the basis of all these elements, your Staff Association representatives will have a clearer picture of the wishes of the staff as a whole for their discussions with the Management and the Member States in the coming months.

### CERN Bulletin

A Nobel laureate's formula for the universe

A Nobel laureate and a blackboard at CERN is all you need to explain the fundamental physics of the universe. At least, that's what François Englert convinced us of on his visit to CERN on 21 February 2014. Englert shared the 2013 Nobel prize in Physics with Peter Higgs "for the theoretical discovery of a mechanism that contributes to our understanding of the origin of mass of subatomic particles". In the video below, he explains how he and Higgs manipulated equations containing mathematical constructs called scalar fields to predict the existence of the Brout-Englert-Higgs field.

### Quantum Diaries

B Decays Get More Interesting

While flavor physics often offers a multitude of witty jokes (read as bad puns), I think I’ll skip one just this time and let the analysis speak for itself. Just recently, at the Lake Louise Winter Institute, a new result was released for the analysis looking for $$b\to s\gamma$$ transitions. Now this is a flavor-changing neutral current, which cannot occur at tree level in the standard model. Therefore, the lowest-order diagram by which this decay can proceed is the one-loop penguin shown below to the right.

One loop penguin diagram representing the transition $$b \to s \gamma$$.

From quantum mechanics, photons can have either left-handed or right-handed circular polarization. In the standard model, the photon in the decay $$b\to s\gamma$$ is primarily left handed, due to spin and angular momentum conservation. However, models beyond the standard model, including some minimal supersymmetric models (MSSM), predict a larger-than-standard-model right-handed component to the photon polarization. So even though the decay rates observed for $$b\to s\gamma$$ agree with those predicted by the standard model, the photon polarization itself is sensitive to new physics scenarios.

As it turns out, the decays $$B^\pm \to K^\pm \pi^\mp \pi^\pm \gamma$$ are well suited to explore the photon polarization after playing a few tricks. In order to understand why, the easiest way is to consider a picture.

Picture defining the angle $$\theta$$ in the analysis of $$B^\pm\to K^\pm \pi^\mp \pi^\pm \gamma$$. From the Lake Louise Conference Talk

In the picture to the left, we consider the rest frame of a possible resonance which decays into $$K^\pm \pi^\mp \pi^\pm$$. It is then possible to form the triple product $$p_\gamma\cdot(p_{\pi,slow}\times p_{\pi,fast})$$. Effectively, this is what defines the angle $$\theta$$ shown in the picture.

Now for the trick: Photon polarization is odd under parity transformation, and so is the triple product defined above. Defining the decay rate as a function of this angle, we find:

$$\frac{d\Gamma}{d\cos\theta}\propto \sum_{i=0,2,4}a_i \cos^i\theta + \lambda_\gamma\sum_{j=1,3} a_j \cos^j\theta$$

This is an expansion in Legendre polynomials up to the 4th order. The odd moments are those which contribute to photon polarization effects, and $$\lambda_\gamma$$ is the photon polarization. Therefore, by looking at the decay rate as a function of this angle, we can directly access the photon polarization. Another way to access the same information is to take the asymmetry between the decay rate for events where the photon lies above the $$K\pi\pi$$ plane ($$\cos\theta>0$$) and those where it lies below ($$\cos\theta<0$$). This asymmetry is also proportional to the photon polarization and allows for a direct statistical calculation. We will call it the up-down asymmetry, or $$A_{ud}$$. For more information, a useful theory paper is found here.
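
To make the connection between $$\lambda_\gamma$$ and $$A_{ud}$$ concrete, here is a small toy Monte Carlo in Python; the coefficients $$a_i$$ and the value of $$\lambda_\gamma$$ are invented for illustration and are in no way the LHCb result:

```python
# Toy illustration (not the LHCb analysis): generate cos(theta) according to
# the rate above for an assumed polarization lambda_gamma, then compute the
# up-down asymmetry A_ud = (N_up - N_down) / (N_up + N_down).
# The coefficients a_i and lambda_gamma below are made up for this example.
import numpy as np

rng = np.random.default_rng(seed=1)

a = {0: 1.0, 1: 0.10, 2: 0.30, 3: 0.05, 4: 0.10}   # assumed expansion coefficients
lam_gamma = 0.5                                     # assumed photon polarization

def rate(c):
    """dGamma/dcos(theta) up to normalisation: even terms + lambda_gamma * odd terms."""
    even = a[0] + a[2] * c**2 + a[4] * c**4
    odd = a[1] * c + a[3] * c**3
    return even + lam_gamma * odd

# Accept-reject sampling of cos(theta) on [-1, 1]; rate(1) bounds the density here.
c = rng.uniform(-1.0, 1.0, 200_000)
cos_theta = c[rng.uniform(0.0, rate(1.0), c.size) < rate(c)]

# Photons above (cos(theta) > 0) versus below (cos(theta) < 0) the K pi pi plane.
n_up, n_down = np.sum(cos_theta > 0), np.sum(cos_theta < 0)
A_ud = (n_up - n_down) / (n_up + n_down)
print(f"A_ud = {A_ud:.3f}")   # statistically consistent with zero if lambda_gamma = 0
```

Flipping the sign of lam_gamma in this toy flips the sign of A_ud, which is the sense in which a non-zero up-down asymmetry gives direct access to the photon polarization.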

Enter LHCb. With the 3 fb$$^{-1}$$ collected over 2011 and 2012 containing ~14,000 signal events, the up-down asymmetry was measured.

Up-down asymmetry for the analysis of $$b\to s\gamma$$. From the Lake Louise Conference Talk

In bins of invariant mass of the $$K \pi \pi$$ system, we see the asymmetry is clearly non-zero, and it varies across the mass range shown. As seen in the note posted to the arXiv, the shapes of the fitted Legendre moments are not the same in the different mass bins, either. This corresponds to a 5.2$$\sigma$$ observation of photon polarization in this channel. What this means for new physics models, however, is not interpreted, though I’m sure the arXiv will be full of explanations within about a week.

### Lubos Motl - string vacua and pheno

Stringless, recursive calculations in string theory
Rutger H. Boels and Tobias Hansen of DESY, Hamburg released a very interesting 66-page-long hep-th preprint today,
String theory in target space
They are effectively generalizing the Britto-Cachazo-Feng-Witten (BCFW) techniques from the case of quantum field theory (gauge theory) to the case of perturbative string theory.

Slightly off-topic. Congratulations to Joe Polchinski who will be 60 in the spring but who already has the Joefest now; see Clifford Johnson and Matt Strassler. If you need a higher-res version of the picture, let me know.

One may say – and they probably say – that they're returning to the mode of reasoning that existed shortly after the birth of string theory (the publication of the Veneziano amplitude) before it was shown that Veneziano's amplitude followed from a theory of strings.

Once this "constructive" derivation of the amplitudes was found by the co-fathers of string theory such as Nambu, Susskind, and Nielsen, string theorists largely abandoned string theory's roots in the "S-matrix program". This program – whose original emergence traces back to Werner Heisenberg – became unfashionable once both QCD's and string theory's amplitudes were derived from a very particular set of degrees of freedom (quarks and gluons; strings) and a particular Lagrangian.

Against the spirit of the S-matrix program and the bootstrap paradigm, internal consistency ceased to be "fundamental"; it could instead be derived from the defining conditions.

However, these two researchers in Germany show that many still unknown or overlooked "not constructive" or "S-matrix-consistency-related" insights about string theory may be waiting – a whole machinery to reinterpret all the calculations in which strings and their 2D world sheets don't look central at all. If true, these are the stringy counterparts of the S-matrix flavor that was revived by the recent twistor minirevolution in the context of gauge theories.

Their main technical foci of interest are roots (zeroes) of the scattering amplitudes, effectively coming from the poles of the Gamma function in the amplitudes' denominators. Note that normally we only know where the poles of the amplitudes, i.e. poles of the Gamma functions in the numerator, come from. They come from virtual particles in the processes.
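
To see in the simplest setting where such zeroes live, here is a small numerical sketch of my own (not taken from the Boels-Hansen paper), using the tree-level Veneziano amplitude with an assumed unit-slope, unit-intercept trajectory:

```python
# Illustrative sketch (mine, not from the Boels-Hansen paper): the tree-level
# Veneziano amplitude A(s,t) = Gamma(-a(s)) Gamma(-a(t)) / Gamma(-a(s)-a(t)),
# written with an assumed linear Regge trajectory a(x) = 1 + x (alpha' = 1,
# intercept 1).  Its poles come from the Gamma functions in the numerator
# (a(s) or a(t) equal to 0, 1, 2, ...); its zeroes come from the poles of the
# Gamma function in the denominator, i.e. where a(s) + a(t) = 0, 1, 2, ...
from math import gamma

def alpha(x):
    return 1.0 + x                      # assumed trajectory, illustration only

def veneziano(s, t):
    return gamma(-alpha(s)) * gamma(-alpha(t)) / gamma(-alpha(s) - alpha(t))

s = 0.3                                 # a(s) = 1.3, safely away from any pole
for eps in (0.1, 0.01, 0.001):
    t = -0.3 - eps                      # so that a(s) + a(t) = 2 - eps
    print(f"a(s)+a(t) = {alpha(s) + alpha(t):.3f}   A = {veneziano(s, t):+.6f}")
# A shrinks roughly linearly with eps: the denominator Gamma diverges as
# a(s)+a(t) -> 2, forcing a zero of the amplitude at the integer itself.
```
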

But they use some of Plahte's 1970 insights about the monodromies of the amplitudes under permutations of the external particles, see e.g. a 2010 paper by Boels et al., to write down the amplitudes differently.

They show the unitarity of the S-matrix in a new light. Unitarity relates the loop amplitudes to the tree-level amplitudes, among other things. It's possible to prove it (and the no-ghost theorem about the positivity of residues etc.) using the world sheet construction – a proof strategy that isn't really useful for a direct, alternative calculation of the loop diagrams. But they effectively achieve the same outcome without any world sheets, using BCFW-like tricks extended from field theory to string theory. And the proofs reduce to mathematical claims about the positivity of certain sums that don't seem to be "manifestly stringy" in any sense.

There are many recursion relations that one may write down for the stringy scattering amplitudes. They're not quite independent and they argue that the old-fashioned string duality (currently known as the "world-sheet duality") relates these different recursion relations with each other. Closed string amplitudes are mostly obtained by the KLT (closed equals open squared) relations in this paper.

I don't quite understand how it works so far but I do believe that it may be a good idea for experts among the TRF readers to spend some time with this preprint and its references. It seems to me that in the future, people will understand the power of internal consistency much more intimately than today. They will know that whole S-matrices or their subsets (by the number of external particles or loops or other labels) follow from consistency and a few assumptions. Ideally, sometime in the future, people should be able to rigorously, mathematically prove that a "consistent theory of quantum gravity" and "string/M-theory" are really two descriptions of the same thing. I view S-matrix-program-based advances such as this paper to be steps in this direction.

eXciting dark matter

Just two more sentences. Top dark matter experts Douglas Finkbeiner and Neal Weiner released An X-Ray Line from eXciting Dark Matter, where they try to explain the 3.5 keV line not by a 7 keV sterile neutrino but by a decay $$\chi^*\to \chi+\gamma$$ between two nearly degenerate states $$\chi,\chi^*$$ of the dark matter, while the decay is mediated by the field $$\phi$$.

The particle may also be a 7-keV string-theoretical modulus, i.e. a scalar field describing the shape of extra dimensions etc., according to a paper released tomorrow.

## February 27, 2014

### The Great Beyond - Nature blog

Publisher reacts to fake-paper-gate

It’s general practice in research publishing to issue retractions for papers that must be withdrawn. But what to do when the papers in question are not merely flawed science, but utter gibberish generated by a computer programme?

Springer found itself tackling this unusual situation after Nature News revealed on Monday that it had published 16 fake articles as conference proceedings.

Its solution, in a statement today: “We are in the process of taking the papers down as quickly as possible. This means that they will be removed, not retracted, since they are all nonsense. A placeholder notice will be put up once the papers have been removed.”

Springer adds that it is reviewing its procedures to find out what happened. “When flaws are detected by us, or brought to our attention by members of the scientific community, we aim to correct them transparently and as quickly as possible,” the publisher says.

But the US Institute of Electrical and Electronics Engineers (IEEE), which has published more than 100 fake papers in a variety of conference proceedings, took a different route. It wiped article records from its database last year, and again this year, without making that fact clear to subscribers.

In statements made to Nature News last week, the IEEE said: “It was brought to our attention over a year ago that there might have been some conference papers published in our IEEE Xplore digital library that did not meet our quality standards.  We took immediate action to remove those papers, and also refined our processes to prevent papers not meeting our standards from being published in the future.”

The issue first came to light when French computer scientist Cyril Labbé, who detected the fakes, told the IEEE of a batch of 85 nonsense papers in 2012. It subsequently removed them from its database without public comment. Readers attempting to access those articles on the IEEE website reach only a “page not found” notice, with no placeholder statement acknowledging their withdrawal.

Labbé informed the IEEE of a further batch of fake papers in December 2013; but for two months, the publisher left these papers online. After being contacted last week by Nature News, the IEEE removed this second batch of papers from its database. Again, only a “page not found” notice greets curious visitors.

A list of papers the IEEE has removed is posted here: IEEE-wiped-articles.pdf. (Some duplicates can be found elsewhere on the internet).

### Sean Carroll - Preposterous Universe

God/Cosmology Debate Videos

Here is the video from my debate with William Lane Craig at the 2014 Greer-Heard Forum. Enough talking from me, now folks can enjoy for themselves. First is the main debate and Q&A:

And here, from the next day, is the concluding panel discussion, including the four speakers from that day as well:

When the individual Saturday talks come online I will just add them here. Speaking of which, here’s Tim Maudlin on “Cosmology, Theology, and Meaning.”

### CERN Bulletin

Another donation of computer equipment

On Thursday 27 February, CERN was pleased to donate computer equipment to a physics institute in the Philippines.

H.E. Leslie J. Baja and Rolf Heuer.

Following donations of computer equipment to institutes in Morocco, Ghana, Bulgaria, Serbia and Egypt, CERN is to send 50 servers and 4 network switches to the National Institute of Physics at the University of the Philippines Diliman.

CERN’s Director-General Rolf Heuer and the Ambassador of the Philippines to Switzerland and Liechtenstein, H.E. Leslie J. Baja, spoke of their enthusiasm for the project during an official ceremony.

The equipment will be used for various high energy physics research programmes in the Philippines and for the University’s development of digital resources for science.