# Particle Physics Planet

## May 19, 2013

### astrobites - astro-ph reader's digest

UR#6: Neutrinos and ICM Magnetic Fields

Hi all, and welcome to the return of the undergrad research posts! For those who don’t remember this series: this is where we feature the research that you’re doing. If you’ve missed the previous installments, you can find them under the “Undergraduate Research” category here.

What does this series mean for you? We want to hear from you! Whether you’ve done an REU project, you’re working on your senior thesis, or you’ve recently started a research project in between homework sets — if you’re an undergrad doing research, we’d love to hear about it.

You can share what you’re doing by clicking on the “Your Research” tab above (or by clicking here) and using the form provided to submit a brief (fewer than 200 words) write-up of your work. The target audience is one familiar with astrophysics but not necessarily your specific subfield, so write clearly and try to avoid jargon. Feel free to also include either a visual regarding your research or else a photo of yourself!

We look forward to hearing from you!

************

Halston Lim and Jason Liang
Halston and Jason did this work jointly at the North Carolina School of Science and Mathematics.

Neutrinos, fundamental particles of the Standard Model of particle physics, can provide unique information about the internal processes of opaque high-energy astrophysical events. The ability of neutrinos to travel vast distances through matter is a crucial advantage of neutrino astronomy over optical astronomy. We demonstrated how neutrinos can be used to study the properties of failed supernovae (fSN) and black hole-neutron star mergers (BHNSM), thus providing a valuable contribution to neutrino astronomy. fSN neutrino detection would result in the first observation of black hole formation, while neutrinos from BHNSM could be used to determine if BHNSM are progenitors of short-duration gamma-ray bursts, some of the most energetic events in the universe. By calculating the observed neutrino signal in various current and proposed detectors, we determined the detectability of fSN and BHNSM and demonstrated how the observed neutrino signal can provide information about the temperature and average energy of the neutrinos at the source. We also showed how these emission characteristics can then provide further information about the production of heavy elements in fSN and BHNSM. Our results confirm that neutrino observations of galactic fSN and BHNSM are feasible and provide fundamental groundwork for future research on fSN and BHNSM.

Andrew Emerick
Andrew is a graduating senior at the University of Minnesota. He worked on this project for his Honors thesis under Dr. Tom Jones and Dr. David Porter, using the resources of the Minnesota Supercomputing Institute. Andrew will be entering graduate school this fall at Columbia University, pursuing a doctorate in Astronomy with an intended focus in computational astrophysics.

Galaxy clusters are the largest gravitationally bound objects in the Universe, containing hundreds to thousands of individual galaxies. A majority of the baryonic matter in a cluster is contained within the intracluster medium (ICM): a hot, diffuse plasma that is interspersed throughout the galaxy cluster. The ICM is host to many phenomena, some of which can be used as key diagnostics, such as its often strong X-ray and radio emission. From the radio emission, we know that the ICM contains weak, cluster-wide magnetic fields, but we do not understand well where they came from or how they grew to their observed strength. One way to study the problem is to simulate the detailed microphysics of the interactions between the magnetic field and the “weather” of the ICM. We study the evolution of a weak, non-uniform magnetic field in a turbulent plasma, focusing on how turbulence amplifies the field. We concentrate on the early evolution and on how various initial magnetic field conditions affect how the field grows over time, while keeping the nature of the turbulence fixed. This study provides insight that can improve the accuracy of cosmological-scale models of galaxy clusters. In addition, we know that at some point information about the initial magnetic field conditions will be erased in the course of the ICM’s evolution. This study helps pinpoint when that occurs, and thus whether it could be possible to extract that information from potential observations.

************

Many thanks to Halston, Jason and Andrew, as well as everyone else who has recently submitted contributions! Look for more undergraduate research posts in the future — this series will continue once a month.

## May 18, 2013

### Geraint Lewis - Cosmic Horizons

A peculiar faint satellite in the remote outer halo of M31
The Pan-Andromeda Survey (PAndAS) continues to be a gold-mine for science. We're squeezing it hard to get out key results, but next year, the data will become public and everyone can have a looksie and write their own paper.

Here we have another paper by ANU astronomer, Dougal Mackey. Dougal's expertise is understanding the globular clusters orbiting the Andromeda galaxy, especially the distant clusters. He published a really nice piece of work recently which showed that these distant globulars are not just scattered randomly about Andromeda, but are more likely to be sitting on the stellar substructure we see. This substructure is the tidal debris from smaller galaxies that have fallen in and been shredded, meaning that the globulars are immigrants, having been born outside Andromeda, but joining the halo when their parent galaxy is destroyed; this is galactic cannibalism in action.

This new paper is about a particular cluster of stars orbiting Andromeda, named PAndAS-48 (who says astronomers aren't imaginative when it comes to naming things!). While this cluster was initially observed with the Canada-France-Hawaii Telescope (CFHT) as part of PAndAS, this paper presents new observations with the Hubble Space Telescope.

While the CFHT, at 3.6m, is larger than Hubble (2.4m), the lack of an atmosphere means we get much sharper images, and hence can see a lot fainter. Here are images from CFHT (left) compared to Hubble (right).
Nice! We actually observed the cluster in a couple of photometric bands with Hubble, which allowed us to make a colour-magnitude diagram; as you know, stars are not randomly scattered in such a picture, but sit on sequences that are driven by stellar evolution. What do we see?
For those in the know, yes, the faintest stars in there are around 28th magnitude!

In there, we can see the Red Giant Branch and Horizontal Branch, and that allows us to understand lots of things about the globular, such as how far away it is and what stage it is at in terms of its evolution.

We can also measure the distribution of stars, and measure the shape of the cluster.
So, what is this cluster of stars? Is it a dwarf galaxy, dominated by dark matter? Or a globular cluster, which is thought not to contain dark matter? It's actually very hard to tell. This piccy illustrates the issue.
The picture is pretty self-explanatory; size is along the bottom in parsecs, and brightness is up the side. The dots are colour-coded in terms of how elliptical they are.  The squares on the right are dwarf galaxies; they tend to be big and elliptical. The dots on the left are globular clusters, which tend to be small and circular (but notice that they can be of the same brightness as the dwarfs).

Where's PAndAS-48? It's the point with a circle around it, stubbornly right between the two populations! In fact, the ultimate conclusion is that we don't know what it is. If it is one or the other, then there are problems. But that's cool too!

It is worth noting that PAndAS-48 appears to sit on the vast thin plane of satellites orbiting Andromeda, which makes it even more intriguing, but we haven't got its velocity so can't confirm if it is orbiting in the same sense. But if it is, it will be extra cool.

As ever, the more we learn, the more questions we have. Yay!!

Well done Dougal!

We present Hubble Space Telescope imaging of a newly-discovered faint stellar system, PAndAS-48, in the outskirts of the M31 halo. Our photometry reveals this object to be comprised of an ancient and very metal-poor stellar population with age > 10 Gyr and [Fe/H] < -2.3. Our inferred distance modulus of 24.57 +/- 0.11 confirms that PAndAS-48 is most likely a remote M31 satellite with a 3D galactocentric radius of 149 (+19 -8) kpc. We observe an apparent spread in color on the upper red giant branch that is larger than the photometric uncertainties should allow, and briefly explore the implications of this. Structurally, PAndAS-48 is diffuse, faint, and moderately flattened, with a half-light radius rh = 26 (+4 -3) pc, integrated luminosity Mv = -4.8 +/- 0.5, and ellipticity = 0.30 (+0.08 -0.15). On the size-luminosity plane it falls between the extended globular clusters seen in several nearby galaxies, and the recently-discovered faint dwarf satellites of the Milky Way; however, its characteristics do not allow us to unambiguously class it as either type of system. If PAndAS-48 is a globular cluster then it is among the most elliptical, isolated, and metal-poor of any seen in the Local Group, extended or otherwise. Conversely, while its properties are generally consistent with those observed for the faint Milky Way dwarfs, it would be a factor ~2-3 smaller in spatial extent than any known counterpart of comparable luminosity.
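For readers who want to turn the quoted distance modulus into a distance, here is a quick back-of-the-envelope check (the conversion formula is the standard one; the snippet is illustrative and not from the paper):

```python
# Convert the distance modulus quoted above into a distance: d = 10**(mu/5 + 1) pc.
mu, dmu = 24.57, 0.11
d = 10 ** (mu / 5 + 1)                  # parsecs
d_hi = 10 ** ((mu + dmu) / 5 + 1)
d_lo = 10 ** ((mu - dmu) / 5 + 1)
print(f"d ≈ {d/1e3:.0f} kpc (+{(d_hi - d)/1e3:.0f} / -{(d - d_lo)/1e3:.0f} kpc)")
# ≈ 820 kpc, i.e. roughly the distance of M31 itself, consistent with a remote halo satellite.
```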

### Christian P. Robert - xi'an's og

detachment

One of the movies I watched during my hospitalisation is detachment, by Tony Kaye, with Adrien Brody as the lead actor. My daughter brought it to me as she remembered I was interested in it. detachment is a strong and highly original movie about the U.S. school system and the complete lack of prospects for the students in deprived suburbs. I have seen several movies of that kind in the past, some of them rather good and keeping away from the fairy tale that an exceptional teacher is enough to rescue a class cohort or even a single student from a bleak future. This one is however the most pessimistic of all, with no happy ending of any sort (except for the last minute that should have been cut). The plot is not flawless, e.g. the lead teacher's redemption of the young prostitute being just too unrealistic, but the burnout of the teachers, the newspeak preaching of the administration, the nihilism of the high school students, the bullying of unusual students, and the near-complete absence of the parents (unless I am confused we only see one [screaming] mother once, no parent shows up at parents’ night and the bullying father is only a voice…) make up for those flaws. Adrien Brody delivers a superb performance in a great movie, sadly about a terrible issue with our educational system(s)…


### Clifford V. Johnson - Asymptotia

Ok, Here Goes
It has been a while since I shared a snippet of the book project with you, so here's an update: Yesterday I completed a short burst of activity in which I re-did two pages in a story that were just horrible to behold. This is a panel from one of the pages. I'm pleased [...]

### Lubos Motl - string vacua and pheno

Ways to discover matrix string theory
...more precisely screwing string theory...

The 5,250+ TRF blog entries discuss various topics, mostly scientific ones, including minor advances. However, there isn't any text on this website that would talk about matrix string theory (independently found 2 months later by a herald who inaugurated the new Dutch king and an ex-co-author of mine along with two twins).

If you search for the closest topic, you will find one article about Matrix theory published a year ago and a supplement about membranes in Matrix theory that was added a week later.

But now we want to talk about matrix string theory. It's a version of Matrix theory. Much like Matrix theory – or M(atrix) Theory – describes M-theory in 11 dimensions (which has no strings), matrix string theory describes type IIA or heterotic $$E_8\times E_8$$ string theory in $$d=10$$. So it's a stringy version of Matrix theory; or string theory formulated in a matrix form.

The discovery of matrix string theory was important for several reasons. First, it was an important confirmation of the ability of the Matrix theory concept to define the dynamics of string/M-theory in many situations; and it was the first time we had a complete, non-perturbative definition of a string theory.

What do I mean by this comment? Before Matrix theory, all calculations in string theory would be organized as Taylor expansions in $$g_s$$, the string coupling. All amplitudes would be written as $$A_0 + A_1 g_s + A_2 g_s^2+\dots$$, and so on. However, not every function may be expanded in this way and the general amplitudes in quantum field theory or string theory can't. For example, $$\exp(-C/g_s^2)$$ has a Taylor expansion whose terms all vanish (because all higher-order derivatives of this function at $$g_s=0$$ vanish) even though the function itself is non-vanishing.
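A tiny numerical illustration of that point (my own toy example, setting $$C=1$$; it is not from the original post): the function falls off faster than any power of $$g_s$$, so a term-by-term expansion sees nothing.

```python
import numpy as np

# f(g) = exp(-1/g^2) compared with powers of g as g -> 0: every ratio f(g)/g^n
# tends to zero, so all Taylor coefficients of f at g = 0 vanish even though
# f itself is nonzero for g != 0. This is the kind of effect that a purely
# perturbative series in g_s cannot capture.
g = np.array([0.3, 0.2, 0.1, 0.05])
f = np.exp(-1.0 / g**2)
for n in (1, 2, 5, 10):
    print(f"f(g)/g^{n}:", f / g**n)   # entries shrink toward zero as g decreases
```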

In this sense, a complete definition was absent. One could have even believed that the existence or consistency of string theory was just a perturbative illusion. Matrix string theory was the first "constructive proof" that string theory is well-defined even non-perturbatively. In the type IIA case, one had a definition for any $$g_s$$. In the $$g_s\to\infty$$ limit, one could easily show that the theory reduces to Matrix theory, the matrix model for M-theory; in the $$g_s\to 0$$ limit, one could prove – and this is the main achievement of the matrix string theory founding papers – that the dynamics reproduces the states and interactions of type IIA string theory as we had known them from the perturbative approaches.

Formal and informal derivations of the matrix string Lagrangian

Matrix theory is formulated in terms of the following Hamiltonian $$H = P^- = \frac{N}{2}\,{\rm Tr}\left( \Pi_i^2 - [X_i,X_j]^2 + {\rm fermionic} \right)$$ which is interpreted as a light-cone component $$P^- = (P^0-P^{10})/\sqrt{2}$$ of the spacetime energy-momentum vector. Well, the original Matrix theory paper by BFSS (Banks, Fischler, Shenker, Susskind) talked about the "infinite momentum frame" and various "highly boosted limits". But one could easily go to the limit and rewrite the quantities in the light-cone gauge. I was always baffled how a paper by Lenny could have become well-known just because it made this self-evident point. My papers (written before Susskind) always took the light-cone gauge as an obvious fact, for granted, and I am confident that everyone who followed the Green-Schwarz machinery from the early 1980s (these physicists preferred to calculate things in the light-cone gauge at that time) had to immediately see that the more natural and more right way to interpret the BFSS model was the light-cone gauge and not just some half-baked "infinite momentum frame".

But let me avoid these discussions. I will assume that the reader has no problem with null combinations of spacelike and timelike components of the energy-momentum vector and realizes that they are often natural combinations to consider.

The Hamiltonian above also contains fermionic, Yukawa-like terms of the form $${\rm Tr}(\theta\gamma_i [X_i,\theta])$$ needed for supersymmetry (and various related crucial cancellations) and all the fields are $$N\times N$$ matrices chosen for the matrix model to respect the $$U(N)$$ gauge symmetry; yes, all physically allowed states must be invariant under the whole $$U(N)$$ group.

In the previous articles, I tried to explain why this quantum mechanical model whose fields are "large matrices", generalizations of the usual non-relativistic operators $$X_i,P_i$$, contains multi-graviton states, their superpartners, and large membranes: it has all the objects it needs to agree with the physical spectrum of M-theory in 11 dimensions.

Now, we want to compactify M-theory on a circle. M-theory on $$S^1\times \RR^{10}$$ has been known to be equivalent to type IIA string theory in 10 dimensions (from the very first paper by Witten that introduced M-theory: the equivalence of the low-energy limits had been known for 10 years before that paper of Witten's). What do we have to do with the matrix model to see all the physics of type IIA string theory?

There was some confusion about this question in the original BFSS paper on Matrix theory. The authors tended to believe that their exact Hamiltonian contains "the whole Hilbert space" of string/M-theory in all of its backgrounds. However, it wasn't the case. The moduli are modes with $$P^-=0$$ and they correspond to excitations of the $$U(0)$$ matrix model. The BFSS matrix model has no degrees of freedom for $$N=0$$ so there are no ways to change the moduli. Consequently, the model may only describe one particular superselection sector – the states of string/M-theory that respect the asymptotic form of the spacetime that looks like one in 11-dimensional M-theory (with one light-like direction compactified on a "long" circle).

To see type IIA string theory, i.e. the states in a different superselection sector of string/M-theory, we need to construct a different matrix model. What is it?

At the end of 1997, Ashoke Sen and especially Nathan Seiberg proposed a straightforward way to derive the BFSS matrix model and its compactifications from a limiting procedure combined with some widely believed dualities in string/M-theory. It's a clever (and superior) derivation that allows us to derive matrix models that are gauge theories; as well as matrix models that aren't just "ordinary" gauge theories but their novel UV completions such as the $$(2,0)$$ theory in $$d=6$$ and little string theory.

However, if we want to find a matrix model for a compactification of M-theory on $$T^k$$ and the dimension $$k$$ of the torus isn't greater than three, it's enough to use the formal "gauge theory assuming" derivation I used at the beginning of 1997. How does it work?

One develops (your humble correspondent developed) a more general procedure to "orbifold a matrix model". The compactification on a circle is an orbifold by the group isomorphic to $$\ZZ$$ composed of translations by $$2\pi R n$$ in the direction of the circular dimension. To find the matrix description of the orbifold, we need to enhance $$N$$ sufficiently and constrain the matrices of this "enhanced BFSS model" in a way that says that "the matrices transformed by elements of the orbifold group are gauge conjugations of the original ones".

This may sound complicated but the example of the compactification, an important one, makes it rather clear what I mean. The BFSS model has matrices with elements such as $$X^i_{mn}$$ where $$m,n=1,2,\dots N$$ are the gauge indices. We need the set of values of these indices to be infinitely greater. So we replace these matrix degrees of freedom by $$X^i_{mn}(\sigma,\sigma')$$ where $$\sigma\in(0,2\pi)$$ with periodic boundary conditions (a circular set of possible values of this "index") is a continuous counterpart of the index $$m$$ and similarly for $$\sigma'$$ and $$n$$.

Now the group $$\ZZ$$ of the translations in the direction $$X^9$$ has a generator, a translation by $$2\pi R_{9}$$, and we identify it with the conjugation by $$\exp(i\sigma)$$, a gauge transformation matrix that only acts on the continuous $$\sigma$$ indices. Because the translation doesn't physically act on the bosons $$X^1\dots X^8$$ and their momenta $$\Pi^i$$, the condition "physical transformation equals gauge transformation" says that these matrices are simply functions of one $$\sigma$$ because they impose $$\sigma=\sigma'$$, or demand $$\delta(\sigma-\sigma')$$ in the kernel, along the way. Similarly, $$X^9$$ has an extra $$\delta'(\sigma-\sigma')$$ term on the right hand side so this matrix gets promoted to the covariant derivative $$D_\sigma$$. Again, what used to be the degrees of freedom in $$X^9(\sigma)$$ get reinterpreted as the component $$A_\sigma$$ of a gauge field.

It may sound incomprehensible or difficult or abstract but I don't find it constructive to spend too much time with that. When you do these operations properly, you will find out that the matrix model for type IIA string theory is a 1+1-dimensional gauge theory with the same group $$U(N)$$ as the BFSS model, now defined on $$S^1\times\RR$$ where the $$S^1$$ part of the infinite cylinder arises from the $$\sigma$$ "continuous index" we had to add. This 1+1-dimensional gauge theory has a dimensionful parameter $$g_{YM}^2$$. The formal procedure "physical transformation defining the orbifold equals gauge transformation of the matrices" even tells us how the coupling $$g_{YM}^2$$ depends on the length of the circle $$2\pi R_9$$ in the compactification of M-theory. Together with some analyses of the interactions in the resulting matrix model, we may derive that $$R_9/l_{Pl,11}\sim g_s^{2/3}$$.
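For completeness, that scaling follows from the standard M-theory/type IIA dictionary (textbook relations, not rederived in the post): with $$l_s$$ the string length, the circle and the eleven-dimensional Planck length are $$R_9 = g_s\, l_s$$ and $$l_{Pl,11} = g_s^{1/3}\, l_s$$, so $$R_9/l_{Pl,11} = g_s^{2/3}$$, or equivalently $$g_s = (R_9/l_{Pl,11})^{3/2}$$; weak string coupling corresponds to an M-theory circle much smaller than the eleven-dimensional Planck length.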

But let's not be too acausal. So far, we have derived the matrix model for type IIA string theory. It looks like the integral of the BFSS Hamiltonian over the circle $$\sigma$$ except that the component $$X^9$$ of the bosonic fields is replaced by the covariant derivative $$D_9$$ involving the 1+1-dimensional gauge field. The original BFSS matrix model may be viewed as the compactification of the 10-dimensional (non-renormalizable) supersymmetric gauge theory to 0+1 dimensions. When we're compactifying the dimensions of the M-theory we want to describe by a matrix model, we must decompactify the spatial dimensions that were dimensionally reduced in the BFSS matrix model to start with. For type IIA string theory in ten dimensions, we must decompactify one (add the single "continuous index" $$\sigma$$). This operation is the opposite of dimensional reduction and because in chemistry, the opposite of reduction is oxidation, this procedure to construct higher-dimensional versions of the BFSS model to describe lower-dimensional vacua of M-theory is sometimes jokingly called the dimensional oxidation. ;-)

Minimizing the energy

Just to be sure: we have "derived" that type IIA string theory in ten dimensions at any coupling is completely equivalent to the maximally supersymmetric $$U(N)$$ gauge theory in 1+1 dimensions whose "world volume" has one infinite timelike dimension and one circular, compact spacelike dimension. To get rid of the effects of the compactification of the light-like dimension, we need to take the large $$N$$ limit.

In some sense, this is a very modest generalization or variation of the original BFSS claim. I became totally certain that this matrix model is the right one. This certainty is probably necessary for one to be sufficiently motivated to study its physics a bit more closely. So I started with that.

If the 1+1-dimensional gauge theory is the full type IIA string theory, including its D-branes, type IIA supergravity at low energies, black holes, and many other things, it should contain what type IIA string theory is known to contain. For example, it must contain the strings. They must also be able to split and join.

Diagonal in a basis that may change

A general Hamiltonian defines the energy in a quantum mechanical model. All states may be written as superpositions of energy eigenstates. However, some states are more interesting than others: the low-energy eigenstates of the Hamiltonian. Because energy tends to dissipate, physical systems generally like to "drop" to their low-lying states. That's why the low-lying states, starting from the ground state (lowest-eigenvalue eigenstate of the Hamiltonian), are the most important ones.

In other words, the first step in trying to understand the physics of a Hamiltonian in a quantum mechanical theory is to try to help Nature to minimize the energy. How do we do it with the matrix model for matrix string theory?

Let's consider the bosons only; the fermions add additional degrees of freedom, terms in the zero-point energy (that mostly cancel some bosonic terms that would destroy a consistent spacetime interpretation of the physics if they remained uncancelled), and other details. If you assume that fermions play this peaceful, calming, generalizing role, you may say that the important physics is already contained in the bosons.

How do we minimize the energy carried by the bosonic parts of the Hamiltonian? The matrix string Hamiltonian contains $$\int \mathrm{d}\sigma\,{\rm Tr}(\Pi_i^2)$$ times a coefficient. Clearly, this is minimized if the momenta $$\Pi_i(\sigma)$$ are zero. More realistically, these matrices may be approximately diagonal and the diagonal entries $$\Pi^i_{nn}(\sigma)$$ will behave as the degrees of freedom $$\pi_i(\sigma)$$ defined on a Green-Schwarz string. Soon we will see what happens with the extra $$n$$ etc.

The off-diagonal entries of $$\Pi^i$$ as well as the same entries of $$X^i$$ behave like W-bosons of a sort, massive degrees of freedom, and at low energies, the wave function is almost required to be proportional to the ground-state wave function as a function of these off-diagonal entries.

More interestingly, we want to minimize the term $${\rm Tr}\left(-[X_i,X_j]^2\right)$$ in the energy, too. The minus sign has to be there because for each $$i,j$$, the commutator is anti-Hermitian so its square is negative definite, not positive definite. How do we minimize it? Clearly, it will be smaller if the eight matrices $$X^i$$ commute with each other. (Quantum mechanically, the wave function will be concentrated near the points on the configuration space where they commute with each other.)
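A small numerical aside (my own illustration, not part of the original argument): for Hermitian matrices this term is non-negative, and it vanishes exactly when the matrices commute.

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_hermitian(n):
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (a + a.conj().T) / 2

# For Hermitian X, Y the commutator [X, Y] is anti-Hermitian, so
# Tr(-[X, Y]^2) = Tr([X, Y]^dagger [X, Y]) >= 0, with equality exactly
# when X and Y commute (and can therefore share an eigenbasis).
N = 4
X, Y = rand_hermitian(N), rand_hermitian(N)
C = X @ Y - Y @ X
print("generic pair:   Tr(-[X,Y]^2) =", np.trace(-C @ C).real)

# A commuting pair: two diagonal matrices conjugated by the same unitary U.
U, _ = np.linalg.qr(rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))
Xc = U @ np.diag(rng.normal(size=N)) @ U.conj().T
Yc = U @ np.diag(rng.normal(size=N)) @ U.conj().T
Cc = Xc @ Yc - Yc @ Xc
print("commuting pair: Tr(-[X,Y]^2) =", np.trace(-Cc @ Cc).real)   # ~ 0 up to rounding
```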

If they commute with each other, it means that we can simultaneously diagonalize them. In other words, we can write $$X^i(\sigma) = U(\sigma)\, X^i_{\rm diag}(\sigma)\, U^{-1}(\sigma).$$ The matrix $$U$$ may be assumed to be unitary because Hermitian matrices are diagonalized in an orthonormal basis. The matrix with the "diag" subscript on the right hand side is diagonal. But an important detail is that $$U(\sigma)$$ must be allowed to be arbitrary because the energy minimization tells us nothing about the basis in which all the $$X^i$$ matrices are diagonal.

And that makes a difference because $$U(\sigma)$$ doesn't have to be periodic with the period of $$2\pi$$. Only the total field $$X^i(\sigma)$$ of the gauge theory has to be periodic. However, the transformation $$U(\sigma)$$ to the basis in which $$X^i(\sigma)$$ is diagonal may undergo a nontrivial monodromy if we change $$\sigma$$ by $$2\pi$$. The matrix $$X^i_{\rm diag}(0)$$, for example, was constrained by our rules to be diagonal, but the matrix $$U(0)$$ that (via conjugation) brings a given $$X^i(\sigma)$$ to the diagonal form is "almost unique" but not quite. First, one may multiply $$U$$ by $$N$$ arbitrary phases on the diagonal.

Second, and this is more important here, the matrix $$U$$ may be multiplied by a permutation matrix! If a matrix is diagonal in a certain basis, it is diagonal in a permutation of this basis, too! So we must consider more general matrices $$U(\sigma)$$ that are continuous functions of $$\sigma$$ but that obey $$U(\sigma+2\pi) = U(\sigma)\, P$$ where $$P$$ is a permutation matrix. In combination with some continuous but also aperiodic diagonal matrices $$X^i_{\rm diag}$$, such a unitary matrix may still produce an energy-minimizing, periodic field $$X^{i}(\sigma)$$. This is the key subtlety not to be overlooked if you want to understand physics of matrix string theory.

What is this fact good for?

It's easy to see how the $$U(N)$$ matrix model, the two-dimensional gauge theory, contains $$N$$ "short strings". The degrees of freedom of each such short string are carried by the diagonal entries of $$X^i(\sigma)$$. There are $$N$$ such entries along the diagonal. However, we also need "long strings"; the length of the $$\sigma$$ coordinate space has been known to be proportional to the light-cone momentum $$P^+$$ to everyone who was familiar with the light-cone gauge string theory.

This $$P^+$$ is quantized, equal to $$N/R$$, because the null coordinate $$X^-$$ is compactified on a circle of radius $$R$$ (we want to send $$R\to\infty$$ to get rid of this semi-unphysical compactification which also forces us to send $$N\to\infty$$ to keep $$P^+$$ fixed). And we know how to find strings with $$P^+=1/R$$ i.e. with the $$N=1$$ unit of the light-like longitudinal momentum.

However, the permutation business tells us how to find the "long strings" with $$P^+=N/R$$ for any positive integer $$N$$. You pick an eigenvalue of $$X^i$$ along the diagonal; trace it as you continuously change $$\sigma$$ from $$0$$ to $$2\pi$$; and when you reach $$\sigma=2\pi$$, this eigenvalue doesn't connect to the original one at $$\sigma=0$$. Instead, it connects to a different one, and only if you increase $$\sigma$$ by $$2\pi N$$ do you return to the original function, because $$N$$ basis vectors participate in a cycle of the permutation (used in the boundary conditions for $$U(\sigma)$$).

(The "long strings" were also called "screwing strings" by your humble correspondent because the monodromy bringing the eigenvalue to a new level every time you get around the circle looks like a screw. I didn't know what the verb "screw" had meant informally. But this informal meaning of "screwing" is one of the reasons why the incorrect name "matrix string theory" became more frequently used than the technically correct name "screwing string theory". Incidentally, note that "matrices" and "nuts [waiting for screws]" are translated by the same Czech word, "matice".)

Because every permutation may be decomposed into a product of circular cycles, we see that every low-energy state in matrix string theory is composed of several strings with arbitrary values of $$P^+=N/R$$. The permutation defines a "sector" of matrix string theory. The decomposition into sectors is just an artifact of the low-energy approximation; there is no sharp "barrier" between the sectors as they're continuously connected on the configuration space of the 1+1-dimensional gauge theory.
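As a toy illustration of this bookkeeping (a sketch of the combinatorics only, nothing more): each cycle of the permutation is one long string, and the cycle length fixes that string's share of $$P^+$$.

```python
# Decompose a permutation in S_N into cycles; in the matrix-string picture each
# cycle of length k corresponds to one "long string" carrying P^+ = k/R.
def cycles(perm):
    """perm maps index i -> perm[i]; return the list of cycles."""
    seen, out = set(), []
    for start in range(len(perm)):
        if start in seen:
            continue
        cyc, i = [], start
        while i not in seen:
            seen.add(i)
            cyc.append(i)
            i = perm[i]
        out.append(cyc)
    return out

# Example: N = 6 eigenvalues with the permutation (0 2 4)(1 5)(3)
perm = [2, 5, 4, 3, 0, 1]
for cyc in cycles(perm):
    print(f"cycle {cyc}: one long string with P^+ = {len(cyc)}/R")
```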

One may also derive the origin of some other subtle conditions. For example, the bosonic/fermionic states of the long strings obey the right statistics because the permutations that interchange the whole long strings are elements of the $$U(N)$$ gauge group that must keep all physical states invariant. However, one may also derive the $$L_0=\tilde L_0$$ condition for each separate string as the gauge invariance under the generator of the $$\ZZ_k$$ cyclic group that defines the cyclical permutations associated with a given string. Well, this is really equivalent to $$L_0-\tilde L_0 \in k\ZZ$$ but for large values of $$k$$, all values except for $$L_0-\tilde L_0=0$$ will correspond to string states of a high energy and will not belong to the low-energy spectrum.

Merging and splitting strings: jumping in between the permutation sectors

I have already said that in the low-energy limit, it looks like the Hilbert space is composed of sectors labeled by permutations in $$S_N\subset U(N)$$. Each cycle that such a permutation is composed of corresponds to one "long string" – an ordinary type IIA string – present in the configuration.

At the same time, matrix string theory allows you to continuously switch between different "sectors". This corresponds to changing the permutation or, equivalently, the decomposition of the total longitudinal momentum $$P^+$$ to the individual strings.

The most elementary operation changing a permutation is the composition of this permutation with an extra transposition (of two pieces of the string; or two eigenvalues). The low-energy approximation of the gauge theory's (matrix model's) Hamiltonian will involve the list of the allowed sectors and the free Hamiltonian for the individual strings that match the free type IIA string theory. However, the gauge theory isn't quite free so there will also be corrections and those may change the sector (the permutation). Those that only add one transposition will be the leading ones and they will correspond to nothing other than the usual splitting or merging of strings, a three-closed-string vertex.

We know that the gauge theory is supersymmetric so the interactions will have to preserve the same supersymmetry. DVV showed that the form of the splitting/merging leading interaction is essentially unique. But even without knowing its form, I could have derived – using a trick based on the assumption that the large $$N$$ limit is universal and independent of $$R$$, the light-like radius – how the coefficient of the three-string vertex depends on the radius $$R_9$$ of the coordinate we compactified to get the matrix model of type IIA string theory out of the BFSS model for M-theory. (There are two radii compactified here which are often labeled as $$R_9$$ and $$R_{11}$$. People who don't understand the logic of matrix string theory may confuse them. The exchange of these two radii that is effectively used in the construction was also called the 9/11 flip and be sure that it was before my PhD defense on 9/11/2001.)

The DVV description of the permutations

In March 1997, DVV who were much more familiar with the standard machinery of two-dimensional conformal field theories described the free-string limit of the gauge theory by a concise term: the symmetric orbifold CFT. It means a CFT – a linear (not non-linear, in this case) sigma model on $$\RR^{8N}/S_N$$ where $$S_N$$ is the permutation group exchanging the $$N$$ copies of the 8-dimensional transverse space.

They also wrote down the explicit form of the three-string interaction vertex (leading interaction) emerging in this limit in terms of spin fields and twist fields, fixed a mistake in my not quite correct derivation of the level-matching $$L_0=\tilde L_0$$ condition, and added some comments about the appearance of the D0-branes (short strings with the electric field etc.).

Higher-order terms in the Hamiltonian

The transposition of two eigenvalues is just the simplest among the extra permutations that may change the sector. In reality, the matrix model for string theory predicts all the more complicated permutations (cycles with 3 elements or any number of elements), too. One may guess a natural Ansatz for how these terms look at any order in $$g_s$$. We wrote these formulae with Dijkgraaf – a paper showing that the matrix string Hamiltonian is corrected at every order, and how (these extra higher-order terms produce contact-term interactions that are needed for the consistency of light-cone gauge string theory but may be largely circumvented in the usual covariant calculations based on moduli spaces of Riemann surfaces). This particular paper remained almost unknown, one of the numerous testimonies of the fact that in the 21st century, the interest in technical things such as "filling the gaps in the only non-perturbative definition of type IIA string theory we have" was dropping to zero. In 2003, people were already much more excited by philosophical gibberish such as the anthropic lack of principle and fabricated "technical evidence" that it applies in string theory.

I won't proof-read this text because I am afraid that its technical character will shrink its readership close to an infinitesimal number that can't justify the extra work needed for proofreading.

### Emily Lakdawalla - The Planetary Society Blog

New Horizons: Encounter Planning Accelerates
Back in 2005 and 2006, when Pluto’s second and third moons (Nix and Hydra) were discovered, searches by astronomers for still more moons didn’t reveal any. So the accidental discovery of Pluto’s fourth moon by the Hubble Space Telescope in mid-2011 raised the possibility that the hazards in the Pluto system might be greater than previously anticipated.

### John Baez - Azimuth

The Search For Budget-Conscious Life

Lisa and I had dinner with Gregory Benford and his wife when I visited U.C. Irvine a couple of weekends ago, and he raised an interesting point. So far, radio searches for extraterrestrial life have only seen puzzling brief signals – not long transmissions. But what if this is precisely what we should expect?

A provocative example is Sullivan et al. (1997). This survey lasted about 2.5 hours, with 190 1.2-minute integrations. With many repeat observations, they saw nothing that did not seem manmade. However, they “recorded intriguing, non-repeatable, narrowband signals, apparently not of manmade origin and with some degree of concentration toward the galactic plane…” Similar searches also saw one-time signals, not repeated (Shostak & Tarter, 1985; Gray & Marvel, 2001; Gray, 2001). These searches had slow times to revisit or reconfirm, often days (Tarter, 2001). Overall, few searches lasted more than an hour, with lagging confirmation checks (Horowitz & Sagan, 1993). Another striking example is the “WOW” signal seen at the Ohio SETI site…

That’s a quote from a paper Benford wrote with his brother and nephew:

• Gregory Benford, James Benford, and Dominic Benford, Searching for cost optimized interstellar beacons.

They claim the cheapest way a civilization could communicate with lots of planets is a pulsed, broadband, narrowly focused microwave beam that scans the sky. So, for anyone receiving this signal, there would be a lot of time between pulses. That might explain some of the above mysteries, or this one:

As an example of using cost optimized beacon analysis for SETI purposes, consider in detail the puzzling transient bursting radio source, GCRT J1745-3009, which has extremely unusual properties. It was discovered in 2002 in the direction of the Galactic Center (1.25° south of GC) at 330 MHz in a VLA observation and subsequently re-observed in 2003 and 2004 in GMRT observations (Hyman, 2005, 2006, 2007). It is a pulsed coherent source, with the ‘burst’ lasting as much as 10 minutes, with a 77-minute period. Averaged over all observations, Hyman et al. give a duty cycle of 7% (1/14), although since some observations may have missed part of bursts, the duty cycle might be as high as 13%.
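A quick arithmetic check on those duty-cycle numbers (just the division, nothing subtle):

```python
# Numbers quoted above for GCRT J1745-3009: bursts of up to ~10 minutes with a
# ~77-minute period give a maximum duty cycle of 10/77, versus ~7% averaged over
# all observations (some bursts may have been only partially observed).
burst_min, period_min = 10.0, 77.0
print(f"maximum duty cycle ≈ {burst_min / period_min:.0%}")   # ≈ 13%
```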

Even if these are red herrings, it seems very smart to figure out the cheapest ways to transmit signals and use that to guess what signals we should look for. We can easily make the mistake of assuming all extraterrestrial civilizations who bother to send signals through space will be willing to beam signals of enormous power toward us all the time. That could be true of some, but not necessarily all.

The cost analysis is here:

• James Benford, Gregory Benford, Dominic Benford, Messaging with cost optimized interstellar beacons.

and you can see a summary in this talk by Gregory’s brother James, who works on high-power microwave technologies:

### Geraint Lewis - Cosmic Horizons

I've lived in Australia for thirteen years, but in the way that Sting was an English Man in New York, I have never quite felt "Australian", rather, I am a Welsh Man in Sydney. Anyway, I still feel very British, and am a fan of British TV (apart from a few highlights, Australian TV is generally bilge).

Anyway, I've always loved a good murder mystery, and I like Midsomer Murders, even though they have changed the lead character (and the new chief inspector was actually a criminal in a previous episode). The premise of Midsomer Murders is simple; a cop in the quite fictional county of Midsomer solves murders. However, the show has been running for 15 years, and there seems to have been an awful lot of murders (although the murder rate is considerably lower than in Honduras!). To keep the stories going, the murders are quite often set in bizarre circumstances.

A recent episode, Written in the Stars, focused on the intrigue and mystery at a research observatory at Midsomer University (up until this point, I don't think there had been any mention of a university in the county). In the usual stereotypical fashion, we have a mean professor, who is ready to steamroller anybody to build his reputation, and a young genius who is writing her thesis (on the Heisenberg uncertainty principle) and threatens to dethrone the evil professor.

As part of her research, she needs to look at an eclipse (go figure) and the murder mayhem ensues. That's not the bad physics (but doesn't help).

Here's the young genius at work, presenting her work in the dome of a telescope (not sure why she is not in an office or lecture room).
Someone has gone to great effort to fill the board with lots of scientific squiggles. It's not, however, gibberish. I'm not sure if they used a textbook or Wikipedia, but there are some correct things there.

However,  something annoyed me. Zooming in on the board, what do we see?
Plank's constant! Argh!! You'd think that our young genius, who has written a thesis on quantum mechanics and is presenting her research to the evil and nasty professor, could spell Planck's name correctly. But there is more! Whoever wrote the squiggles got the symbol, h, correct, and even the value, 1.054 x 10^-27, correct, but they completely screwed up the units (that's too painful to go into), and what this number actually is, is ħ, which is Planck's constant divided by 2π.
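For the record, the value on the board really is ħ rather than h; here is a quick check with the standard CGS value of Planck's constant (my snippet, not the show's):

```python
import math

# The number written on the board, 1.054 x 10^-27, is hbar = h / (2*pi) in CGS
# units (erg s); h below is the standard CGS value of Planck's constant.
h_cgs = 6.626e-27                      # erg s
hbar_cgs = h_cgs / (2 * math.pi)
print(f"hbar ≈ {hbar_cgs:.4e} erg s")  # ≈ 1.0546e-27 erg s
```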

Why would they bother going to the effort of writing something semi-correct, but pay so little attention that they make a mess of it? Why not just do it right? Don't they realise that professors of astrophysics might be watching?

One other thing that annoyed me is that they did the "astronomers only do their work inside telescope domes" thing
We don't. We have offices like everyone else. And even when we are at the telescope, we are in the control room, not freezing our bottoms off in the dome.

Before finishing, I think it's worth noting that the observatory used in the show actually is a university observatory. It is the University of London Observatory at Mill Hill.
Even though I was a student at the University of London, I never used this observatory, although I did visit there when I was looking for a PhD position. However, the observatory is not in the picturesque county of Midsomer, but is next to the A1 in North West London.
Like a lot of observatories around the world, it was built outside of the city, but the city has since grown around it.

Anyway, the murderer was not the evil astrophysicist..... It was actually the friendly professor of Quantum Physics! I'm sure his knowledge of the uncertainty principle will help him in prison.

## May 17, 2013

### Quantum Diaries

Knowledge and the Higgs Boson

This essay makes a point that is only implicit in most of my other essays–namely that scientists are arro—oops that is for another post. The point here is that science is defined not by how it goes about acquiring knowledge but rather by how it defines knowledge. The underlying claim is that the definitions of knowledge as used, for example, in philosophy are not useful and that science has the one definition that has so far proven fruitful. No, not arrogant at all.

The classical concept of knowledge was described by Plato (428/427 BCE – 348/347 BCE) as having to meet three criteria: it must be justified, true, and believed. That description does seem reasonable. After all, can something be considered knowledge if it is false? Similarly, would we consider a correct guess knowledge? Guess right three times in a row and you are considered an expert –but do you have knowledge? Believed, I have more trouble with that: believed by whom? Certainly, something that no one believes is not knowledge even if true and justified.

The above criteria for knowledge seem like common sense, and the ancient Greek philosophers had a real knack for encapsulating the common sense view of the world in their philosophy. But common sense is frequently wrong, so let us look at those criteria with a more jaundiced eye. Let us start with the first criterion: it must be justified. How do we justify a belief? From the sophists of ancient Greece, to the post-modernists and the anything-goes hippies of the 1960s, and all their ilk in between, it has been demonstrated that what can be known for certain is vanishingly small.

René Descartes (1596 – 1650) argues in the beginning of his Discourse on the Method that all knowledge is subject to doubt: a process called methodological skepticism. To a large extent, he is correct. Then, to get to something that is certain, he came up with his famous statement: I think, therefore I am.  For a long time this seemed to me like a sure argument. Hence, “I exist” seemed an incontrovertible fact. I then made the mistake of reading Nietzsche[1] (1844 – 1900). He criticizes the argument as presupposing the existence of “I” and “thinking”, among other things. It has also been criticized by a number of other philosophers, including Bertrand Russell (1872 – 1970). To quote the latter: Some care is needed in using Descartes’ argument. “I think, therefore I am” says rather more than is strictly certain. It might seem as though we are quite sure of being the same person to-day as we were yesterday, and this is no doubt true in some sense. But the real Self is as hard to arrive at as the real table, and does not seem to have that absolute, convincing certainty that belongs to particular experiences. Oh, well back to the drawing board.

The criteria for knowledge, as postulated by Plato, lead to knowledge either not existing or being of the most trivial kind. No belief can be absolutely justified and there is no way to tell for certain if any proposed truth is an incontrovertible fact.  So where are we? If there are no incontrovertible facts we must deal with uncertainty. In science we make a virtue of this necessity. We start with observations, but unlike the logical positivists we do not assume they are reality or correspond to any ultimate reality. Thus following Immanuel Kant (1724 – 1804) we distinguish the thing-in-itself from its appearances. All we have access to are the appearances. The thing-in-itself is forever hidden.

But all is not lost. We make models to describe past observations. This is relatively easy to do. We then test our models by making testable predictions for future observations. Models are judged by their track record in making correct predictions–the more striking the prediction the better. The standard model of particle physics prediction of the Higgs[2] boson is a prime example of science at its best. The standard model did not become a fact when the Higgs was discovered, rather its standing as a useful model was enhanced.  It is the reliance on the track record of successful predictions that is the demarcation criteria for science and I would suggest the hallmark for defining knowledge. The scientific models and the observations they are based on are our only true knowledge. However, to mistake them for descriptions of the ultimate reality or the thing-in-itself would be folly, not knowledge.

[2] To be buzzword compliant, I mention the Higgs boson.

### The n-Category Cafe

Semantics of Proofs in Paris

There’s going to be a “thematic trimester” in Paris starting next spring:

If you like applications of category theory to logic and computer science, there should be a lot for you here!

The basic layout is this:

• Week 1 — Kick-off: Formalisation in mathematics and in computer science
• Week 3 — Workshop 1: Formalization of mathematics in proof assistants, organized by Georges Gonthier and Vladimir Voevodsky.
• Week 6 — Workshop 2: Constructive mathematics and models of type theory, organized by Thierry Coquand and Thomas Streicher.
• Week 8 — Workshop 3: Semantics of proofs and programs, organized by Thomas Ehrhard and Alex Simpson.
• Week 10 — Workshop 4: Abstraction and verification in semantics, organized by Paul-André Melliès and Luke Ong.
• Week 12 — Workshop 5: Certification of high-level and low-level programs, organized by Christine Paulin and Zhong Shao.

A lot of people I know will attend parts of this, such as Jean Benabou, Marcelo Fiore, Dan Ghica, André Joyal, Samuel Mimram, and Bas Spitters. And that makes me happy, because Paul-André Melliès has invited me to spend up to a month attending this series of workshops, perhaps in two 2-week stretches. With a little luck I’ll be able to actually do this.

(My wife Lisa Raphals has gotten invited to Erlangen for the spring of 2014, meaning roughly April 1 - June 1. If she and I succeed in getting leaves of absence, I’ll go with her, and then take some trips to nearby places. Since I split my time between the Wild West and the Far East, Paris seems nearby to Erlangen to me. I also have vague invitations to IHES, Prague and Berlin which I might try to take advantage of. And if you have a luxurious villa in northern Italy or the French Riviera, let me know.)

### Christian P. Robert - xi'an's og

micro

“Indoctrinating children in proper environmental thought was a hallmark of the green movement.” M. Crichton, micro, p. ix

I believe I read most of Michael Crichton’s novels and this posthumous version (completed by Richard Preston) is not very different in its style and pattern from the previous ones. micro delivers an efficient fast-paced techno-thriller that filled most of one afternoon when convalescing at home. In that respect, it fills its intended role. I however feel this is one of the weakest novels in that the technological and scientific background is very poor. (The best Crichton novels are, in my opinion, The Andromeda Strain and Airframe. One of the last novels, State of Fear, carries a very anti-environmentalist and climatoskeptic message similar to the above quote.)

“Perhaps the most important lesson to be learned by direct experience is that the natural world (…) represents a complex system and therefore we cannot understand it and we cannot predict its behavior. “ M. Crichton, micro, p. x

Indeed, the plot of micro is based on the assumption that there exists a technology that can miniaturise living and non-living objects to 1/100th of their original size without any short-term impact. I remember watching as a child Fantastic Voyage, where a miniaturised submarine goes inside a blood vessel to remove a tumor, and I sat in front of a neighbour’s TV, mesmerised by the idea more than by the (weak) plot. This was in the late 60′s. I also remember a sci-fi book I read as a pre-teen, with a great cover, called The Forgotten Planet: nothing truly memorable, apart from the cover, but hey, this was a 1954 book. Now, micro does not use a deeper theory to justify this miniaturisation and the remainder of the plot is just as weak: I cannot imagine 1/100th-scale humans surviving more than a few minutes in a rain forest environment! The place is crawling with insects, all way faster and far more deadly than tiny humans with a pocket knife, but the heroes conveniently meet only one dangerous insect at a time, losing at most one member of the group each time (sorry for the spoiler!). (In fact, the earlier Prey was much better at involving nanotechnologies.) The grad students are very caricatural as well, providing biological infodumps at times when they should be frozen solid with fright. Provided they had not been eaten already. The final resolution of the thriller is just… grotesque! So wait until you are sick or recovering from being sick before embarking upon this micro and not so fantastic trip!


### Emily Lakdawalla - The Planetary Society Blog

Speaking engagements next week: Spacefest V and Society for Astronomical Sciences symposium
Next week I'm traveling to speak at two events. Registration is still open for both, so I hope some of you can come. I also have some commentary on women being invited to speak at public events.

### CERN Bulletin

CAS Accelerator Physics held in Erice, Italy

The CERN Accelerator School (CAS) recently organised a specialised course on Superconductivity for Accelerators, held at the Ettore Majorana Foundation and Centre for Scientific Culture in Erice, Italy from 24 April-4 May, 2013.

Photo courtesy of Alessandro Noto, Ettore Majorana Foundation and Centre for Scientific Culture.

Following a handful of summary lectures on accelerator physics and the fundamental processes of superconductivity, the course covered a wide range of topics related to superconductivity and highlighted the latest developments in the field. Realistic case studies and topical seminars completed the programme.

The school was very successful with 94 participants representing 23 nationalities, coming from countries as far away as Belorussia, Canada, China, India, Japan and the United States (for the first time a young Ethiopian lady, studying in Germany, attended this course). The programme comprised 35 lectures, 3 seminars and 7 hours of case study. The case studies were pursued with great enthusiasm and produced some excellent results. Feedback from the participants was positive, reflecting the high standard of the lectures.

In addition to the academic programme, the participants had the opportunity to take part in a one-day excursion to visit the Museum of the Nave Punica in Marsala, and the Greek temple and Hellenistic theatre at Segesta.

The next CAS course will be held in Trondheim, Norway from 18-29 August, 2013 and will be at the advanced level.  Further information on forthcoming CAS courses can be found on the CAS website.

### Emily Lakdawalla - The Planetary Society Blog

A serendipitous observation of tiny rocks in Jupiter's orbit by Galileo
A look at an older paper describing Galileo's possible sighting of individual ring particles orbiting Jupiter as companions to its inner moon Amalthea.

### astrobites - astro-ph reader's digest

Enhanced star formation in interacting galaxies: how far does it reach?

Authors: David R. Patton, Paul Torrey, Sara L. Ellison, J. Trevor Mendel, and Jillian M. Scudder

First author’s institution: Department of Physics and Astronomy, Trent University, Canada

Ok, this is no big surprise: environment affects star formation in galaxies. Observations have long shown that the star formation rate (SFR) is strongly enhanced when two galaxies merge or simply interact, with the strongest enhancements found in the closest galaxy pairs, such as coalescing galaxies or systems observed near the first pericentre passage. Enhancements in star formation result in bluer colours and lower metallicities, i.e. characteristic features of young stellar populations, and spectacular objects such as luminous infrared galaxies.

However, a question is still open, as you can guess from the title of today’s astrobite: what is the orbital extent of enhanced star formation in interacting galaxies? At which projected separation of the two galaxies does it disappear? This Letter aims at investigating the enhancement of star formation as a function of the separation in galaxy pairs. The issue is addressed in two complementary ways: from an observational perspective, analyzing galaxy pairs from the Sloan Digital Sky Survey (SDSS), and from a theoretical perspective, studying the outputs of numerical simulations of galaxy mergers.

First, a large sample of ~600,000 SDSS galaxies is considered, with secure spectroscopic redshifts between 0.02 and 0.2 and total stellar masses estimated from photometry. For each galaxy, the closest neighbour is singled out by requiring that it have 1) the smallest projected separation from the galaxy, 2) a rest-frame relative velocity lower than 1000 km/s, and 3) a stellar mass within a factor of 10 of that of the galaxy.
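A schematic sketch of that companion selection (my own illustration; the argument names, the flat-sky angular separation, and the velocity-difference formula are simplifying assumptions, not the authors' actual pipeline):

```python
import numpy as np

def closest_companion(i, ra, dec, z, mstar, c_kms=2.998e5):
    """Return the index of galaxy i's closest companion satisfying the cuts above,
    or None. Angular separation is used here as a stand-in for the projected
    physical separation."""
    dv = c_kms * np.abs(z - z[i]) / (1.0 + z[i])                 # rest-frame velocity difference (km/s)
    mass_ratio = np.maximum(mstar / mstar[i], mstar[i] / mstar)  # always >= 1
    dtheta = np.hypot((ra - ra[i]) * np.cos(np.radians(dec[i])),
                      dec - dec[i])                              # flat-sky separation (deg)
    ok = (np.arange(len(ra)) != i) & (dv < 1000.0) & (mass_ratio < 10.0)
    if not ok.any():
        return None
    candidates = np.where(ok)[0]
    return candidates[np.argmin(dtheta[candidates])]
```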

Then, based on previous measurements of the SFR (see the catalogue in Brinchmann et al 2004), only star-forming galaxies are selected from the sample, without any special requirement on the SFR of their neighbours. In this way, “mixed” galaxy pairs are also included in the resulting sample, which contains ~211,000 star-forming galaxies. For each of these galaxies, the authors build a statistical “control sample” that matches the galaxy in both physical properties (stellar mass, redshift) and environment (local density, isolation), but does not necessarily contain star-forming galaxies. The details of the procedure adopted to identify such control samples are deferred to a subsequent paper.

Figure 1 (from Patton et al 2013). Mean SFR enhancement (top panel) and mean SFR (bottom panel) versus projected separation of galaxy pairs. The error bars are the standard error in the mean. Blue is for galaxy pairs from SDSS; red is for their statistical control samples. The dashed horizontal line represents zero enhancement of star formation.

The bottom panel of Figure 1 shows, as a function of projected separation, the mean SFR of all the paired galaxies (blue) and of their statistical control samples (red). The ratio of these two quantities, which is defined as the “enhancement in star formation”, is plotted in the top panel, where the inset plot shows its behaviour at even larger values of the projected separation. This figure nicely shows that star formation is enhanced in interacting galaxies, that the enhancement is strongest at the smallest separations, especially below 20 kpc, and finally that the enhancement in SFR extends to larger separations than previously thought, remaining visible out to projected separations of ~150 kpc. In particular, it is found that 66% of the enhanced star formation in galaxy pairs occurs at separations greater than 30 kpc.
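
To make the quantity in Figure 1 concrete, here is a minimal sketch of how such an enhancement curve could be computed, with hypothetical array names (this is not the authors' code, and the error propagation is deliberately crude): in each bin of projected separation, the mean SFR of the paired galaxies is divided by the mean SFR of their matched controls.

```python
import numpy as np

def sfr_enhancement(r_p, sfr_pair, sfr_control, bin_edges):
    """Mean SFR of paired galaxies over mean SFR of their matched controls,
    in bins of projected separation r_p (kpc). All inputs are 1-d arrays of
    the same length; sfr_control holds the mean SFR of each galaxy's control
    sample."""
    enhancement, error = [], []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        sel = (r_p >= lo) & (r_p < hi)
        if sel.sum() == 0:
            enhancement.append(np.nan)
            error.append(np.nan)
            continue
        ctrl_mean = sfr_control[sel].mean()
        enhancement.append(sfr_pair[sel].mean() / ctrl_mean)
        # rough standard error of the mean, propagated from the numerator only
        error.append(sfr_pair[sel].std(ddof=1) / np.sqrt(sel.sum()) / ctrl_mean)
    return np.array(enhancement), np.array(error)

# usage sketch with fake inputs:
# edges = np.arange(0.0, 160.0, 10.0)
# enh, err = sfr_enhancement(r_p, sfr_pair, sfr_control, edges)
```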

Takeaway message: the enhancement in star formation is not limited to strongly interacting galaxies with a very close companion, but extends to wide galaxy pairs as well.

Now, are these findings consistent with the predictions from numerical simulations of interacting galaxies? In order to answer this question, the authors investigate a suite of dedicated simulations of galaxy mergers run with the N-body/SPH code GADGET.

The simulated galaxy pairs are simple binary systems, where the stellar masses of the two initial galaxies are set to match the median stellar mass and mass ratio of the observed SDSS sample. The simulated mergers span five values of orbital eccentricity, five values of impact parameter, and three disc orientations, so the galaxy orbits are not limited to low eccentricities and small impact parameters. In total, 75 (5 x 5 x 3) orbital configurations for galaxy mergers are explored, and each one can be observed from a random set of viewing angles and at random times during the orbital evolution.

The authors compute the mean SFR over the 75 orbital configurations, observing each orbit from random orientations and at random moments during the merging history. Of course, these random times imply many different values of projected separations. This measurement of SFR is then translated into a measurement of SFR enhancement by normalizing by the SFR of the same galaxy evolved in isolation.

Figure 2 (from Patton et al 2013). Mean SFR enhancement as a function of projected separation in galaxy pairs from SDSS (blue) and numerical simulations of mergers (black).

Figure 2 shows the mean enhancement in star formation rate computed from galaxy merger simulations (black), and the extremely small error bars are due to the average over many orbit orientations. The curve showing the same data derived from galaxy pairs in SDSS is overlaid in blue. Remarkably, the two curves, hence the two different approaches, yield a similar result: an enhancement in SFR is observed out to large projected distances ~150 kpc, though stronger in the SDSS data. In the simulations, the enhancement is a result of starburst activity triggered at the first pericentre passage, which persists as the galaxies move to wider separations.

Hence, the authors can safely conclude that interaction-induced star formation is not limited to galaxies with a close companion, but affects a much wider variety of galaxies.

### Symmetrybreaking - Fermilab/SLAC

A banner day at the LHC

An artist honors the people and science of the CMS collaboration.

There’s a new splash of color at Point Five, the home of the CMS detector on the Large Hadron Collider. Five vivid banners drape the gray walls of the complex, lending the warehouse a cathedral-like atmosphere. Arranged in a line, they pull the viewer’s gaze from panel to panel to land on a true-to-scale photo of the detector itself, magnificently displayed on the back wall.

### Peter Coles - In the Dark

All models are wrong

I’m back in Cardiff for the day, mainly for the purpose of attending presentations by a group of final-year project students (two of them under my supervision, albeit now remotely).  One of the talks featured a famous quote by the statistician George E.P. Box:

Essentially, all models are wrong, but some are useful.

I agree with this, actually, but only if it’s not interpreted in a way that suggests that there’s no such thing as reality and/or that science is just a game.  We may never achieve a perfect understanding of how the Universe works, but that’s not the same as not knowing anything at all.

A familiar example that nicely illustrates my point  is the London Underground or Tube map. There is a fascinating website depicting the evolutionary history of this famous piece of graphic design. Early versions simply portrayed the railway lines inset into a normal geographical map which made them rather complicated, as the real layout of the lines is far from regular. A geographically accurate depiction of the modern tube network is shown here which makes the point:

A revolution occurred in 1933 when Harry Beck compiled the first “modern” version of the map. His great idea was to simplify the representation of the network around a single unifying feature. To this end he turned the Central Line (in red) into a straight line travelling left to right across the centre of the page, only changing direction at the extremities. All other lines were also distorted to run basically either North-South or East-West and produce a much more regular pattern, abandoning any attempt to represent the “real” geometry of the system but preserving its topology (i.e. its connectivity).  Here is an early version of his beautiful construction:

Note that although this a “modern” map in terms of how it represents the layout, it does look rather dated in terms of other design elements such as the border and typefaces used. We tend not to notice how much we surround the essential things with embellishments that date very quickly.

More modern versions of this map that you can get at tube stations and the like rather spoil the idea by introducing a kink in the central line to accommodate the complexity of the interchange between Bank and Monument stations as well as generally buggering about with the predominantly  rectilinear arrangement of the previous design:

I quite often use this map when I’m giving popular talks about physics. I think it illustrates quite nicely some of the philosophical issues related to theoretical representations of nature. I think of theories or models as being like maps, i.e. as attempts to make a useful representation of some aspects of external reality. By useful, I mean that they give us things we can use to make tests. However, there is a persistent tendency for some scientists to confuse the theory and the reality it is supposed to describe, especially a tendency to assert there is a one-to-one relationship between all elements of reality and the corresponding elements in the theoretical picture. This confusion was stated most succinctly by the Polish scientist Alfred Korzybski in his memorable aphorism:

The map is not the territory.

I see this problem written particularly large with those physicists who persistently identify the landscape of string-theoretical possibilities with a multiverse of physically existing domains in which all these are realised. Of course, the Universe might be like that but it’s by no means clear to me that it has to be. I think we just don’t know what we’re doing well enough to know as much as we like to think we do.

A theory is also surrounded by a penumbra of non-testable elements, including those concepts that we use to translate the mathematical language of physics into everyday words. We shouldn’t forget that many equations of physics have survived for a long time, but their interpretation has changed radically over the years.

The inevitable gap that lies between theory and reality does not mean that physics is a useless waste of time; it just means that its scope is limited. The Tube map is not complete or accurate in all respects, but it’s excellent for what it was made for. Physics goes down the tubes when it loses sight of its key requirement, i.e. to be testable, and in order to be testable it has to be simple enough to calculate things to be compared with observations. In many cases that means a simplified model is perfectly adequate.

Another quote by George Box expands upon this point:

Remember that all models are wrong; the practical question is how wrong do they have to be to not be useful.

In any case, an attempt to make a grand unified theory of the London Underground system would no doubt produce a monstrous thing so unwieldy that it would be useless in practice. I think there’s a lesson there for string theorists too…

Many modern-day physicists are obsessed with the idea of a “Theory of Everything” (or TOE). Such a theory would entail the unification of all physical theories – all laws of Nature, if you like – into a single principle. An equally accurate description would then be available, in a single formula, of phenomena that are currently described by distinct theories with separate sets of parameters. Instead of textbooks on mechanics, quantum theory, gravity, electromagnetism, and so on, physics students would need just one book. But would such a theory somehow be physical reality, as some physicists assert? I don’t think so. In fact it’s by no means clear to me that it would even be useful.

### The Great Beyond - Nature blog

Co-discoverer of ozone hole dies

Joe Farman, one of three British scientists who discovered a ‘hole’ in the ozone layer, died on 11 May (see obituaries in the Guardian and the Telegraph).

It was exactly 28 years ago yesterday (on 16 May 1985) that Joe Farman, Brian Gardiner and Jonathan Shanklin published their finding in Nature. It prompted global action to ban chlorofluorocarbons, or CFCs, the man-made chemicals that were breaking down ozone high in the atmosphere. The ozone hole still appears above Antarctica every spring, but it is on the mend and scientists hope that it will be completely healed in the next century.

The paper is the subject of episode two of our new podcast series, The Nature PastCast.

In the podcast, Farman’s colleague Jonathan Shanklin recalls sifting through a backlog of ozone data from the British Antarctic Survey’s station at Halley Bay. At first, he remembers, Farman thought that the springtime dip in ozone was a one-off. Shanklin says he was the ‘little voice’ in the background that convinced Farman that the dip in ozone had happened every spring for several years, demonstrating a systematic decline.

Unfortunately, Farman himself was too unwell to be interviewed for our podcast, but his version of events can be heard in an interview published by the British Library, as part of their Oral Histories project.

### Matt Strassler - Of Particular Significance

A Few Items of Interest

I was sent or came across a few interesting links that relate to things covered on this blog and/or of general scientific interest.

It was announced yesterday that the European Physical Society 2013 High Energy Physics Prize was awarded to the collaborations of experimental physicists that operate the ATLAS and CMS experiments, which discovered a type of Higgs particle, with special mention to Michel Della Negra, Peter Jenni, and Tejinder Virdee for their pioneering roles in the development of ATLAS and CMS.  Jenni and Virdee are both at the LHCP conference in Barcelona, which I’m also attending, and it has been a great pleasure for all of us here to be able to congratulate them in person.

One thing that came up a couple of times regarding weather forecasting (for instance, in forecasting the path of Hurricane Sandy) is that the European weather forecasters are doing a much better job of predicting storms a week in advance than U.S. forecasters are.  And I was surprised to learn that one of the main reasons is simple: U.S. forecasters have less computing power than their European counterparts, which sounds (and is) ridiculous.  The new director of the U.S. National Weather Service, Louis Uccellini, has been successful in his goal of improving this situation, as reported here. [Thanks to two readers for pointing me to this article.]

One of the possible interpretations of the new class of high-energy neutrinos reported by IceCube (see yesterday’s post) is that they come from the slow decay of a small fraction of the universe’s dark matter particles, assuming those particles have a mass of a couple of million GeV/c². [That's much heavier than the types of dark matter particles that most people are currently looking for, in searches that I discussed in a recent article.]  I didn’t immediately mention this possibility (which is rather obvious to an expert) because I wanted a couple of days to think about it before generating a stampede of press articles.  But, not surprisingly, people who were paying more attention to what IceCube has been up to had recently written a paper on this subject. [Here's an older, related paper, but at much lower energy; maybe there are other similar papers that I don't know about?]  At the time these authors wrote this paper, only the two highest energy neutrinos — which have energies that, within the uncertainties of the measurements, might be equal (see Figure 2 of yesterday’s post) — were publicly known.  In their paper, they predicted that (just as any expert would guess) in addition to a spike of neutrinos, all at about 1.1 million GeV (roughly half the dark-matter particle's mass-energy, as expected for a two-body decay), one would also find a population of lower-energy neutrinos, similar to those new neutrinos that IceCube has just announced. So yes, among many possibilities, it appears that it is possible that the new neutrinos are from decaying dark matter.  If more data reveal that there really is a spike of neutrinos with energy around 1.1 million GeV, and the currently observed gap between the million-GeV neutrinos and the lower-energy ones barely fills in at all, then this will be extremely strong evidence in favor of this idea… though it will be another few years before the evidence could become convincing.  Conversely, if IceCube observes any neutrinos near but significantly above 1.1 million GeV, that would show there isn’t really a spike, disfavoring this particular version of the idea.

Regarding yesterday’s post, it was pointed out to me that when I wrote “The only previous example of neutrinos being used in astrophysics occurred with the discovery of neutrinos from the relatively nearby supernova, visible with the naked eye, that occurred in 1987,” I should also have noted that neutrinos were and are used to understand the interior of the sun (and vice versa).  And you could even perhaps say that atmospheric neutrinos have been used to understand cosmic rays (and vice versa.)

In sad news, in the “all-good-things-must-come-to-an-end” category, the Kepler spacecraft, which has brought us an unprecedented slew of discoveries of planets orbiting other stars, may have reached the end of the line (see for example here), at least as far as its main goals are concerned.  It’s been known for some time that its ability to orient itself precisely was in increasing peril, and it appears that it has now been lost.  Though this has occurred earlier than hoped, Kepler survived longer than its core mission was scheduled to do, and its pioneering achievements, in convincing scientists that small rocky planets not unlike our own are very common, will remain in the history books forever.  Simultaneous congratulations and condolences to the Kepler team, and good luck in getting as much as possible out of a more limited Kepler.

Filed under: Astronomy, LHC News, Particle Physics, Science and Modern Society Tagged: astronomy, cms, DarkMatter, Higgs, LHC, neutrinos, weather

### The Great Beyond - Nature blog

Italy may rein in rogue stem-cell therapy

Posted on behalf of Alison Abbott.

A controversial decree allowing severely ill patients to continue treatment with an unproven, and possibly unsafe, stem-cell therapy may be amended, if the Italian parliament’s Chamber of Deputies has its way.

Yesterday (16 May) the Chamber’s social affairs committee unanimously passed amendments to the decree which would allow the Brescia-based Stamina Foundation, which developed the therapy, to continue administering it. However, Stamina would be required to do so within regular clinical trials, under the oversight of regulatory agencies, and using cells manufactured according to Good Manufacturing Practice (GMP). A supervisory ‘observatory’ comprising experts and patient representatives would oversee clinical trial procedures.

The proposal is intended to defuse the tensions between, on one side, terminally ill patients and their families, who believe the Stamina treatment is their only hope, and on the opposite side,  scientists and regulators who believe it to be dangerous and almost certainly not efficacious.

Stamina claims to have treated more than 80 patients in the last six years, with diseases ranging from Parkinson’s disease to muscular dystrophy. Many of the patients have been young children. In the therapy, mesenchymal stem cells are extracted from the patients’ bone marrow, manipulated in the laboratory and re-infused into the patients.

According to the committee’s proposed amendments to the decree, the government would make €3 million available for the clinical trials over the next 18 months. The plenary Chamber is expected to vote in favour of the amendments on Monday.

But that won’t be the end of the story. The upper house, the Senate, will then have to approve the Chamber’s amendments, and in the continuing emotional heat, its final decision is hard to predict. Patient groups wearing tee-shirts with the slogan ‘Yes to Stamina. Yes to Life’ demonstrated against the amendments in Rome yesterday. Stamina’s charismatic president Davide Vannoni was among the demonstrators. Vannoni claims that the Chamber had been influenced by the interests of the pharmaceutical industry.

If no final political decision is made by 25 May, the decree will automatically expire.

### Axel Maas - Looking Inside the Standard Model

What could the Higgs be made of?
One of the topics I am working on is how the standard model of particle physics can be extended. The reason is that it is intrinsically, though not practically, flawed. Therefore, we know that there must be more. However, right now we have only very vague hints from experiments and astronomical observations about how we have to improve our theories. As a consequence, many possibilities are currently being explored. The one I am working on is called technicolor.

A few weeks ago, my master's student and I published a preprint. By the way, a preprint is a paper which is still in the process of being reviewed by the scientific community to check whether it is sound. Preprints play an important role in science, as they contain the most recent results. Anyway, in this preprint we worked on technicolor. I will not repeat too much about technicolor here; the background can be found in an earlier blog entry. The only important ingredient is that in a technicolor scenario one assumes that the Higgs particle is not an elementary particle. Instead, just like an atom, it is made from other particles. In analogy to the quarks, which build up the protons and other hadrons, these parts of the Higgs are called techniquarks. Of course, something has to hold them together. This must be a new, unknown force, called the techniforce. It is imagined to be again similar, in a very rough way, to the strong force. Consequently, the carriers of this force are called technigluons, in analogy to the gluons of the strong force.

In our research we wanted to understand the properties of these techniquarks. Since we do not yet know if technicolor really exists, we also cannot be sure what it would eventually look like. In fact, there are so many possible versions of technicolor that it is not even simple to enumerate them all, much less to do calculations for all of them simultaneously. But since we are not sure which is the right one anyway, we are not yet in a position where it makes sense to be overly precise. What we wanted to understand is how techniquarks work in principle. Therefore, out of the many possibilities we selected just one.

Now, as I said, techniquarks are imagined to be similar to quarks. But they cannot be exactly the same, because we know that the Higgs behaves very differently from, say, a proton or a pion. It is not possible to get this behaviour without making the techniquarks profoundly different from the quarks. One possibility is to make them something in between a gluon and a quark, which is called an adjoint quark. The term 'adjoint' refers to a mathematical property whose details are not so important here. So that is what we did: we assumed our techniquarks to be adjoint quarks.

The major difference is what happens if we make these techniquarks lighter and lighter. For the strong force, we know what happens: we cannot make the quarks arbitrarily light, because they gain mass from the strong force. This appears to be different for the theory we studied: there you can make them arbitrarily light. This has been suspected for a long time from indirect observations. What we did was, for the first time, to investigate the techniquarks directly. What we saw was that when they are rather heavy, there is a similar effect as for the strong force: the techniquarks gain mass from the force. But once they become light enough, this effect ceases. Thus, it should be possible to make them massless. This possibility is necessary to make a Higgs out of them.

Unfortunately, because we used computer simulations, we could not really go to massless techniquarks. That would be far too expensive in terms of computing time (and in fact part of the simulations were already provided by other people, for which we are very grateful). Thus, we could not prove that this is the case. But our results point strongly in this direction.

So is this a viable new theory? Well, we have shown that a necessary condition is fulfilled. But there is a big difference between necessary and sufficient. For a technicolor theory to be useful, it should not only have a Higgs made from techniquarks and no mass generation from the techniforce. It must also have further properties to be compatible with what we know from experiment. The major requirement concerns how strong the techniforce is over how large a distance. There was earlier indirect evidence that for this theory the techniforce does not remain strong enough over sufficiently large distances. Our calculations again provide a more direct way of determining this strength, and unfortunately it appears that we have to agree with these earlier calculations.

Is this the end of technicolor? Certainly not. As I said above, technicolor is foremost an idea. There are many ways to implement this idea, and we have just checked one. Is it then the end of this version? We have to agree with the earlier investigations that it appears so in this pure form. But in this purest form we have neglected a lot, like the rest of the standard model. There is still a significant chance that a more complete version could work. After all, the qualitative features are there; it is just that the numbers are not perfectly right. Or perhaps a minor alteration may already do the job. And this is something people continue to work on.

### Tommaso Dorigo - Scientificblogging

The Quote Of The Week - No New Physics Now Conceivable
"New Physics can appear at any moment but it is now conceivable that no new physics will show up at the LHC"

Guido Altarelli, LHC Nobel Symposium, May 15th 2013

It is funny reading the above quote if you are one who "conceived" that the LHC could find no new physics 7 years ago, as demonstrated by where I put my money...

### Christian P. Robert - xi'an's og

i-like[d the] workshop

Indeed, I liked the i-like workshop very much. Among the many interesting talks of the past two days (incl. Cristiano Varin’s ranking of Series B as the top influential stat. journal!), Matti Vihola’s and Nicolas Chopin’s had the strongest impact on me (to the point of scribbling in my notebook). In joint work with Christophe Andrieu, Matti focussed on evaluating the impact of replacing the target with an unbiased estimate in a Metropolis-Hastings algorithm. In particular, they found necessary and sufficient conditions for keeping geometric and uniform ergodicity. My question (asked by Iain Murray) was whether they had derived ways of selecting the number of terms in the unbiased estimator towards maximal efficiency. I also wonder if optimal reparameterisations can be found in this sense (since unbiased estimators remain unbiased after reparameterisation).
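
To make the idea concrete, here is a minimal sketch of a pseudo-marginal Metropolis-Hastings sampler for a toy target of my own (not the speakers' setup): the target N(0, 2) is written as a mixture over an auxiliary normal variable, the density is replaced by an unbiased Monte Carlo estimate, and the estimate attached to the current state is recycled rather than recomputed, which is what keeps the exact target invariant.

```python
import numpy as np

rng = np.random.default_rng(0)

def pi_hat(theta, n_terms=5):
    # Unbiased estimate of the target density N(theta; 0, 2), written as the
    # mixture pi(theta) = E_u[ N(theta; u, 1) ] with u ~ N(0, 1).
    u = rng.standard_normal(n_terms)
    return np.mean(np.exp(-0.5 * (theta - u) ** 2) / np.sqrt(2 * np.pi))

def pseudo_marginal_mh(n_iter=50_000, step=1.5):
    theta, est = 0.0, pi_hat(0.0)
    samples = np.empty(n_iter)
    for i in range(n_iter):
        proposal = theta + step * rng.standard_normal()
        proposal_est = pi_hat(proposal)
        # Accept using the noisy estimates; crucially, the estimate attached to
        # the current state is recycled, never recomputed.
        if rng.random() < proposal_est / est:
            theta, est = proposal, proposal_est
        samples[i] = theta
    return samples

draws = pseudo_marginal_mh()
print(draws.mean(), draws.var())   # should be close to 0 and 2
```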

Nicolas’ talk was about particle Gibbs sampling, a joint paper with Sumeet Singh recently arXived. I did not catch the whole detail of their method but/as I got intrigued by a property of Marc Beaumont’s algorithm (the very same algorithm used by Matti & Christophe). Indeed, the notion is that an unbiased estimator of the target distribution can be found in missing variable settings by picking an importance sampling distribution q on those variables. This representation leads to a pseudo-target Metropolis-Hastings algorithm. In the stationary regime, there exists a way to derive an “exact” simulation from the joint posterior on (parameter,latent). All the remaining/rejected latents are then distributed from the proposal q. What I do not see is how this impacts the next MCMC move since it implies generating a new sample of latent variables. I spoke with Nicolas about this over breakfast: the explanation is that this re-generated set of latent variables can be used in the denominator of the Metropolis-Hastings acceptance probability and is validated as a Gibbs step. (Incidentally, it may be seen as a regeneration event as well.)

Furthermore, I had a terrific run in the rising sun (at 5am) all the way to Kenilworth, where I saw a deer, pheasants and plenty of rabbits. (As well as this sculpture, which now appears to me as being a wee bit sexist…)

Filed under: Running, Statistics, Travel, University life Tagged: ABC, empirical likelihood, i-like, likelihood-free methods, Metropolis-Hastings algorithms, Padova, pseudo-target, simulation, University of Warwick

### Marco Frasca - The Gauge Connection

CMS harbors new physics beyond the Standard Model

LHCP 2013 (the First Large Hadron Collider Physics Conference) is under way these days, and the CMS data seem to point significantly toward new physics. Their measurements of the production modes for WW and ZZ agree with my recent computations (see here) and overall deviate slightly from Standard Model expectations, giving

$\frac{\sigma}{\sigma_{SM}}=0.80\pm 0.14$

Note that the Standard Model is still alive and kicking, but looking at the WW production mode you will read

$\frac{\sigma_{WW}}{\sigma_{WW\ SM}}=0.68\pm 0.20$

in close agreement with the results given in my paper, and improved with respect to Moriond, where it was $0.71\pm 0.21$. The reason could be that the Higgs model is a conformal one. Data from ZZ yield

$\frac{\sigma_{ZZ}}{\sigma_{ZZ\ SM}}=0.92\pm 0.28$

that is consistent with the result for the WW mode, though. I give here the full table from the talk.

For the sake of completeness, I also give here the same results from ATLAS at the same conference, which instead seem to go the other way round, obtaining overall $1.30\pm 0.20$, and this is already an interesting matter.

At CMS, new physics beyond the Standard Model is peeping out and, more interestingly, the Higgs model tends to be a conformal one. If this is true, supersymmetry is an inescapable consequence (see here). I would like to conclude by citing the papers of other people working on this model, papers that will be widely cited in the foreseeable future (see here and here).

Marco Frasca (2013). Revisiting the Higgs sector of the Standard Model. arXiv: 1303.3158v1

Marco Frasca (2010). Mass generation and supersymmetry. arXiv: 1007.5275v2

T. G. Steele & Zhi-Wei Wang (2013). Is Radiative Electroweak Symmetry Breaking Consistent with a 125 GeV Higgs Mass? Physical Review Letters 110, 151601. arXiv: 1209.5416v3

Krzysztof A. Meissner & Hermann Nicolai (2006). Conformal Symmetry and the Standard Model. Phys. Lett. B 648:312-317, 2007. arXiv: hep-th/0612165v4

Filed under: Particle Physics, Physics Tagged: ATLAS, CERN, CMS, Conformal Standard Model, Higgs particle, High-energy physics conferences

### CERN Bulletin

Interfon
www.interfon.fr — Visit our website for all the Interfon news. BEFORE YOUR HOLIDAYS… Take advantage of our special “Fuel oil and Poultry” offers. Fuel-oil week: from Monday 10 to Friday 14 June 2013, at the INTERFON offices in St-Genis from 1.30 p.m. to 5.30 p.m. and at CERN from 12.30 p.m. to 3.30 p.m. Come and place your fuel-oil orders and benefit from our special Interfon promotional rate. * * * * * Bresse poultry, produced on the farm. To be ordered as soon as possible and before Thursday 6 June. Prices per kilo: farm guinea fowl, 11.95 €, dressed, 1.5 kg to 2 kg; Bresse chicken (AOC), 11.90 €, dressed, 1.5 kg to 2 kg; Bresse chicken, 18.00 €, rolled and drawn, 2 kg to 3 kg; farm duckling, 11.20 €, dressed, 1.8 kg to over 2 kg; farm duck, 10.60 €, dressed, 2.8 kg to over 3.5 kg. Delivery at Interfon on Friday 14 June (between 1.30 p.m. and 3.30 p.m.). Information from Interfon: 04 50 42 28 93 – info@interfon.fr. Orders only by post, e-mail or at our St-Genis or CERN offices. – Information at the CERN office (Bldg 504) – Cooperative: tel. 73339 (every day from 12.30 p.m. to 3.30 p.m.), e-mail: interfon@cern.ch. Mutual insurance: tel. 73939 (every Thursday from 1.00 p.m. to 4.30 p.m.), e-mail: interfon@adrea-paysdelain.fr. – At the Technoparc (Monday to Friday from 1.30 p.m. to 5.30 p.m.): Cooperative: tel. 04 50 42 28 93; Mutual insurance: tel. 04 50 42 74 57.

### CERN Bulletin

Yoga Club
The Yoga Club's General Assembly will take place on 28 May at 12:00 on the mezzanine of Building 504, opposite the yoga rooms. The following items will be on the agenda: activity report, accounts, committee, revision of the statutes, any other business. We hope to see many of you there.

### CERN Bulletin

Cine-Club
Thursday 23 May 2013 at 20:00, CERN Council Chamber. Larks on a String (Skrivánci na Niti), directed by Jiří Menzel (Czechoslovakia, 1969). Original version Czech; English subtitles; 100 minutes. The film shows the absurdities of a system that has clear difficulties justifying itself. The story takes place in a scrapyard where a group of harmless intellectuals is held as forced labour for crimes against the regime. Beautifully set and deeply humane, its men and women, living and working separately, always manage to seize small pieces of freedom simply to live, to be happy, and to turn the authorities into objects of innocent derision. Shot in 1969, the film was only released in 1990. * * * * * Thursday 30 May 2013 at 20:00, CERN Council Chamber. Ucho (The Ear), directed by Karel Kachyńa (Czechoslovakia, 1970). Original version Czech; English subtitles; 94 minutes. Ucho (The Ear), shot in 1970 by Karel Kachyńa, was withheld by the communist authorities and only released in the late 1980s. The plot is mainly set during a single night in the house of a ministry official and his wife, after they return home from a meeting with other members of the Party at which a reorganisation of the overall hierarchy was discussed. A paranoid tension slowly invades the house as they discover they are being watched and that their home has been searched by the communist authorities. Suspicion rises throughout the film as the ministry official becomes more and more convinced of his own imminent arrest. http://cineclub.web.cern.ch/Cineclub/

### The n-Category Cafe

The Propositional Fracture Theorem

Suppose $X$ is a topological space and $U\subseteq X$ is an open subset, with closed complement $K=X\setminus U$. Then $U$ and $K$ are, of course, topological spaces in their own right, and we have $X=U\bigsqcup K$ as a set. What additional information beyond the topologies of $U$ and $K$ is necessary to enable us to recover the topology of $X$ on their disjoint union?

Recall that the subspace topologies of $U$ and $K$ say that for each open $V\subseteq X$, the intersections $V\cap U$ and $V\cap K$ are open in $U$ and $K$, respectively. Thus, if a subset of $X$ is to be open, it must yield open subsets of $U$ and $K$ when intersected with them. However, this condition is not in general sufficient for a subset of $X$ to be open — it does define a topology on $X$, but it’s the coproduct topology, which may not be the original one.

One way we could start is by asking what sort of structure relating $U$ and $K$ we can deduce from the fact that both are embedded in $X$. For instance, suppose $A\subseteq U$ is open. Then there is some open $V\subseteq X$ such that $V\cap U=A$. But we could also consider $V\cap K$, and ask whether this defines something interesting as a function of $A$.

Of course, it’s not clear that $V\cap K$ is a function of $A$ at all, since it depends on our choice of $V$ such that $V\cap U=A$. Is there a canonical choice of such $V$? Well, yes, there’s one obvious canonical choice: since $U$ is open in $X$, $A$ is also open as a subset of $X$, and we have $A\cap U=A$. However, $A\cap K=\varnothing$, so choosing $V=A$ wouldn’t be very interesting.

The choice $V=A$ is the smallest possible $V$ such that $V\cap U=A$. But there’s also a largest such $V$, namely the union of all such $V$. This set is open in $X$, of course, since open sets are closed under arbitrary unions, and since intersections distribute over arbitrary unions, its intersection with $U$ is still $A$.

Let’s call this set ${i}_{*}\left(A\right)$. In fact, it’s part of a triple of adjoint functors ${i}_{!}⊣{i}^{*}⊣{i}_{*}$ between the posets $O\left(U\right)$ and $O\left(X\right)$ of open sets in $U$ and $X$, where ${i}^{*}:O\left(X\right)\to O\left(U\right)$ is defined by ${i}^{*}\left(V\right)=V\cap U$, and ${i}_{!}:O\left(U\right)\to O\left(X\right)$ is defined by ${i}_{!}\left(A\right)=A$. Here $i$ denotes the continuous inclusion $U↪X$.
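
(A small aside of my own, not from the post: ${i}_{*}$ also has a closed-form description. Any open $V$ with $V\cap U=A$ must miss $U\setminus A$, and hence also its closure; conversely, $X\setminus \overline{U\setminus A}$ is open, contains $A$ (every point of $A$ has a neighbourhood inside $A$, hence away from $U\setminus A$), and meets $U$ only in $A$. Therefore

${i}_{*}\left(A\right)=X\setminus \overline{U\setminus A},$

which reproduces both computations of ${j}^{*}{i}_{*}\left(A\right)$ in the half-plane example below.)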

Now we can consider the intersection ${i}_{*}\left(A\right)\cap K$, which I’ll also denote ${j}^{*}{i}_{*}\left(A\right)$, where $j:K↪X$ is the inclusion. It turns out that this is interesting! Consider the following example, which is easy to visualize:

• $X={ℝ}^{2}$.
• $U=\left\{\left(x,y\right)\mid x<0\right\}$, the open left half-plane.
• $K=\left\{\left(x,y\right)\mid x\ge 0\right\}$, the closed right half-plane.

If an open subset $A\subseteq U$ “doesn’t approach the boundary” between $U$ and $K$, such as the open disc of radius $1$ centered at $\left(-2,0\right)$, then it’s fairly easy to see that ${i}_{*}\left(A\right)=A\cup \left\{\left(x,y\right)\mid x>0\right\}$, and therefore ${j}^{*}{i}_{*}\left(A\right)=\left\{\left(x,y\right)\mid x>0\right\}$ is the open right half-plane.

On the other hand, consider some open subset $A\subseteq U$ which does approach the boundary, such as

$A=\left\{\left(x,y\right)\mid {x}^{2}+{y}^{2}<1\phantom{\rule{thickmathspace}{0ex}}\text{and}\phantom{\rule{thickmathspace}{0ex}}x<0\right\}$

the intersection with $U$ of the open disc of radius $1$ centered at $\left(0,0\right)$. A little thought should convince you that in this case, ${i}_{*}\left(A\right)$ is the union of the open right half-plane with the whole open disc of radius $1$ centered at $\left(0,0\right)$. Therefore, ${j}^{*}{i}_{*}\left(A\right)$ is the open right half-plane together with the strip $\left\{\left(0,y\right)\mid -1<y<1\right\}$.

This example suggests that in general, ${j}^{*}{i}_{*}\left(A\right)$ measures how much of the “boundary” between $U$ and $K$ is “adjacent” to $A$. I leave it to some enterprising reader to try to make that precise. Here’s another nice exercise: what can you say about ${i}^{*}{j}_{*}\left(B\right)$ for an open subset $B\subseteq K$?

Let us however go back to our original question of recovering the topology of $X$. Suppose $A\subseteq U$ and $B\subseteq K$ are open such that $A\cup B$ is open in $X$; how does this latter fact manifest as a property of $A$ and $B$? Note first that $\left(A\cup B\right)\cap U=A$. Thus, since ${i}_{*}\left(A\right)$ is the largest $V$ such that $V\cap U=A$, we have $A\cup B\subseteq {i}_{*}\left(A\right)$, and therefore $B={j}^{*}\left(A\cup B\right)\subseteq {j}^{*}{i}_{*}\left(A\right)$. Let me say that again:

$B\subseteq {j}^{*}{i}_{*}\left(A\right).$

This is a relationship between $A$ and $B$ which is expressed purely in terms of the topological spaces $U$ and $K$ and the function ${j}^{*}{i}_{*}:O\left(U\right)\to O\left(K\right)$, which we have just shown is necessary for $A\cup B$ to be open in $X$.

In fact, it is also sufficient! For suppose this to be true. Since $B$ is open in $K$, there is some open $C\subseteq X$ such that $C\cap K=B$. Given such a $C$, the union $C\cup U$ also has this property, since $U\cap K=\varnothing$. Note that in fact $C\cup U=B\cup U$, and also $B\cup U={j}_{*}\left(B\right)$, the largest open subset of $X$ whose intersection with $K$ is $B$. (Since $K$, unlike $U$, is not open, there may not be a smallest such, but there is always a largest such.) Now I claim we have

$A\cup B={j}_{*}\left(B\right)\cap {i}_{*}\left(A\right)$

To show this, it suffices to show that the two sides become equal after intersecting with $U$ and with $K$. For the first, we have

$\left({j}_{*}\left(B\right)\cap {i}_{*}\left(A\right)\right)\cap U={j}_{*}\left(B\right)\cap \left({i}_{*}\left(A\right)\cap U\right)={j}_{*}\left(B\right)\cap A=A=\left(A\cup B\right)\cap U$

and for the second we have

$\left({j}_{*}\left(B\right)\cap {i}_{*}\left(A\right)\right)\cap K=\left({j}_{*}\left(B\right)\cap K\right)\cap {i}_{*}\left(A\right)=B\cap {i}_{*}\left(A\right)=B=\left(A\cup B\right)\cap K$

using the assumption at the step $B\cap {i}_{*}\left(A\right)=B$.

In conclusion, the topology of $X$ is entirely determined by

• the induced topology of an open subspace $U\subseteq X$,
• the induced topology on its closed complement $K=X\setminus U$, and
• the induced function ${j}^{*}{i}_{*}:O\left(U\right)\to O\left(K\right)$.

Specifically, the open subsets of $X$ are those of the form $A\cup B$ — or equivalently, by the above argument, ${i}_{*}\left(A\right)\cap {j}_{*}\left(B\right)$ — where $A\subseteq U$ is open in $U$, $B\subseteq K$ is open in $K$, and $B\subseteq {j}^{*}{i}_{*}\left(A\right)$.

An obvious question to ask now is, suppose given two arbitrary topological spaces $U$ and $K$ and a function $f:O\left(U\right)\to O\left(K\right)$; what conditions on $f$ ensure that we can define a topology on $X≔U\bigsqcup K$ in this way, which restricts to the given topologies on $U$ and $K$ and induces $f$ as ${j}^{*}{i}_{*}$? We may start by asking what properties ${j}^{*}{i}_{*}$ has. Well, it preserves inclusion of open sets (i.e. $A\subseteq A\prime ⇒{j}^{*}{i}_{*}\left(A\right)\subseteq {j}^{*}{i}_{*}\left(A\prime \right)$) and also finite intersections (${j}^{*}{i}_{*}\left(A\cap A\prime \right)={j}^{*}{i}_{*}\left(A\right)\cap {j}^{*}{i}_{*}\left(A\prime \right)$), including the empty intersection (${j}^{*}{i}_{*}\left(U\right)=K$). In other words, it is a finite-limit-preserving functor between posets. Perhaps surprisingly, it turns out that this is also sufficient: any finite-limit-preserving $f:O\left(U\right)\to O\left(K\right)$ allows us to glue $U$ and $K$ in this way; I’ll leave that as an exercise too.

Okay, that was some fun point-set topology. Now let’s categorify it. Open subsets of $X$ are the same as 0-sheaves on it, i.e. sheaves of truth values, or of subsingleton sets, and the poset $O\left(X\right)$ is the (0,1)-topos of 0-sheaves on $X$. So a certain sort of person immediately asks, what about $n$-sheaves for $n>0$?

In other words, suppose we have $X$, $U$, and $K$ as above; what additional data on the toposes $\mathrm{Sh}\left(U\right)$ and $\mathrm{Sh}\left(K\right)$ of sheaves (of sets, or groupoids, or homotopy types, etc.) allows us to recover the topos $\mathrm{Sh}\left(X\right)$? As in the posetal case, we have adjunctions ${i}_{!}⊣{i}^{*}⊣{i}_{*}$ and ${j}^{*}⊣{j}_{*}$ relating these toposes, and we may consider the composite ${j}^{*}{i}_{*}:\mathrm{Sh}\left(U\right)\to \mathrm{Sh}\left(K\right)$.

The corresponding theorem is then that $\mathrm{Sh}\left(X\right)$ is equivalent to the comma category of ${\mathrm{Id}}_{\mathrm{Sh}\left(K\right)}$ over ${j}^{*}{i}_{*}$, i.e. the category of triples $\left(A,B,\varphi \right)$ where $A\in \mathrm{Sh}\left(U\right)$, $B\in \mathrm{Sh}\left(K\right)$, and $\varphi :B\to {j}^{*}{i}_{*}\left(A\right)$. This is true for 1-sheaves, $n$-sheaves, $\infty$-sheaves, etc. Moreover, the condition on a functor $f:\mathrm{Sh}\left(U\right)\to \mathrm{Sh}\left(K\right)$ ensuring that its comma category is a topos is again precisely that it preserves finite limits. Finally, this all works for arbitrary toposes, not just sheaves on topological spaces. I mentioned in my last post some applications of gluing for non-sheaf toposes (namely, syntactic categories).

One new-looking thing does happen at dimension 1, though, relating to what exactly the equivalence

$\mathrm{Sh}\left(X\right)\simeq \left({\mathrm{Id}}_{\mathrm{Sh}\left(K\right)}↓{j}^{*}{i}_{*}\right)$

looks like. The left-to-right direction is easy: we send $C\in \mathrm{Sh}\left(X\right)$ to $\left({i}^{*}C,{j}^{*}C,\varphi \right)$ where $\varphi :{j}^{*}C\to {j}^{*}{i}_{*}{i}^{*}C$ is ${j}^{*}$ applied to the unit of the adjunction ${i}^{*}⊣{i}_{*}$. But in the other direction, suppose given $\left(A,B,\varphi \right)$; how can we reconstruct an object of $\mathrm{Sh}\left(X\right)$?

In the case of open subsets, we obtained the corresponding object (an open subset of $X$) as $A\cup B$, but now we no longer have an ambient “set of points” in which to take such a union. However, we also had the equivalent characterization of the open subset of $X$ as ${i}_{*}\left(A\right)\cap {j}_{*}\left(B\right)$, and in the categorified case we do have objects ${i}_{*}\left(A\right)$ and ${j}_{*}\left(B\right)$ of $\mathrm{Sh}\left(X\right)$. We might initially try their cartesian product, but this is obviously wrong because it doesn’t incorporate the additional datum $\varphi$. It turns out that the right generalization is actually the pullback of ${j}_{*}\left(\varphi \right)$ and the unit of the adjunction ${j}^{*}⊣{j}_{*}$ at ${i}_{*}\left(A\right)$:

$\begin{array}{ccc}C& \to & {j}_{*}\left(B\right)\\ ↓& & {↓}^{{j}_{*}\left(\varphi \right)}\\ {i}_{*}\left(A\right)& \to & {j}_{*}{j}^{*}{i}_{*}\left(A\right)\end{array}$

In particular, any object $C\in \mathrm{Sh}\left(X\right)$ can be recovered from ${i}^{*}C$ and ${j}^{*}C$ by this pullback:

$\begin{array}{ccc}C& \to & {j}_{*}{j}^{*}C\\ ↓& & ↓\\ {i}_{*}{i}^{*}C& \to & {j}_{*}{j}^{*}{i}_{*}{i}^{*}C\end{array}$

Now let’s shift perspective a bit, and ask what all this looks like in the internal language of the topos $\mathrm{Sh}\left(X\right)$. Inside $\mathrm{Sh}\left(X\right)$, the subtoposes $\mathrm{Sh}\left(U\right)$ and $\mathrm{Sh}\left(K\right)$ are visible through the left-exact idempotent monads ${i}_{*}{i}^{*}$ and ${j}_{*}{j}^{*}$, whose corresponding reflective subcategories are equivalent to $\mathrm{Sh}\left(U\right)$ and $\mathrm{Sh}\left(K\right)$ respectively. In the internal type theory of $\mathrm{Sh}\left(X\right)$, ${i}_{*}{i}^{*}$ and ${j}_{*}{j}^{*}$ are modalities, which I will denote ${I}_{U}$ and ${J}_{U}$ respectively. Thus, inside $\mathrm{Sh}\left(X\right)$ we can talk about “sheaves on $U$” and “sheaves on $K$” by talking about ${I}_{U}$-modal and ${J}_{U}$-modal types (or sets).

Moreover, these particular modalities are actually definable in the internal language of $\mathrm{Sh}\left(X\right)$. Open subsets $U\subseteq X$ can be identified with subterminal objects of $\mathrm{Sh}\left(X\right)$, a.k.a. h-propositions or “truth values” in the internal logic. Thus, $U$ is such a proposition. Now ${I}_{U}$ is definable in terms of $U$ by

${I}_{U}\left(C\right)=\left(U\to C\right)$

I’m using type-theorists’ notation here, so $U\to C$ is the exponential ${C}^{U}$ in $\mathrm{Sh}\left(X\right)$. The other modality ${J}_{U}$ is also definable internally, though a bit less simply: it’s the following pushout:

$\begin{array}{ccc}U×C& \to & C\\ ↓& & ↓\\ U& \to & {J}_{U}\left(C\right)\end{array}.$

In homotopy-theoretic language, ${J}_{U}\left(C\right)$ is the join of $C$ and $U$, written $U*C$. And if we identify $\mathrm{Sh}\left(U\right)$ and $\mathrm{Sh}\left(K\right)$ with their images under ${i}_{*}$ and ${j}_{*}$, then the functor ${j}^{*}{i}_{*}:\mathrm{Sh}\left(U\right)\to \mathrm{Sh}\left(K\right)$ is just the modality ${J}_{U}$ applied to ${I}_{U}$-modal types.

Finally, the fact that $\mathrm{Sh}\left(X\right)$ is the gluing of $\mathrm{Sh}\left(U\right)$ with $\mathrm{Sh}\left(K\right)$ means internally that any type $C$ can be recovered from ${I}_{U}\left(C\right)$, ${J}_{U}\left(C\right)$, and the induced map ${J}_{U}\left(C\right)\to {J}_{U}\left({I}_{U}\left(C\right)\right)$ as a pullback:

$\begin{array}{ccc}C& \to & {J}_{U}\left(C\right)\\ ↓& & ↓\\ {I}_{U}\left(C\right)& \to & {J}_{U}\left({I}_{U}\left(C\right)\right)\end{array}$

Now recall that internally, $U$ is a proposition: something which might be true or false. Logically, ${I}_{U}\left(C\right)=\left(U\to C\right)$ has a clear meaning: its elements are ways to construct an element of $C$ under the assumption that $U$ is true.

The logical meaning of ${J}_{U}$ is somewhat murkier, but there is one case in which it is crystal clear. Suppose $U$ is decidable, i.e. that it is true internally that “$U$ or not $U$”. If the law of excluded middle holds, then all propositions are decidable — but of course, internally to a topos, the LEM may fail to hold in general. If $U$ is decidable, then we have $U+¬U=1$, where $¬U=\left(U\to 0\right)$ is its internal complement. It’s a nice exercise to show that under this assumption we have ${J}_{U}\left(C\right)=\left(¬U\to C\right)$.

In other words, if $U$ is decidable, then the elements of ${J}_{U}\left(C\right)$ are ways to construct an element of $C$ under the assumption that $U$ is false. In the decidable case, we also have ${J}_{U}\left({I}_{U}\left(C\right)\right)=1$, so that $C={I}_{U}\left(C\right)×{J}_{U}\left(C\right)$ — and this is just the usual way to construct an element of $C$ by case analysis, doing one thing if $U$ is true and another if it is false.

This suggests that we might regard internal gluing as a “generalized sort of case analysis” which applies even to non-decidable propositions. Instead of ordinary case analysis, where we have to do two things:

• assuming $U$, construct an element of $C$; and
• assuming not $U$, construct an element of $C$

in the non-decidable case we have to do three things:

• assuming $U$, construct an element of $C$;
• construct an element of the join $U*C$; and
• check that the two constructions agree in $U*\left(U\to C\right)$.

I have no idea whether this sort of generalized case analysis is useful for anything. I kind of suspect it isn’t, since otherwise people would have discovered it, and be using it, and I would have heard about it. But you never know, maybe it has some application. In any case, I find it a neat way to think about gluing.

Let me end with a tantalizing remark (at least, tantalizing to me). People who calculate things in algebraic topology like to work by “localizing” or “completing” their topological spaces at primes, since it makes lots of things simpler. Then they have to try to put this “prime-by-prime” information back together into information about the original space. One important class of tools for this “putting back together” is called fracture theorems. A simple fracture theorem says that if $X$ is a $p$-local space (meaning that all primes other than $p$ are inverted) and some technical conditions hold, then there is a pullback square:

$\begin{array}{ccc}X& \to & {X}_{p}^{\wedge }\\ ↓& & ↓\\ {X}_{ℚ}& \to & \left({X}_{p}^{\wedge }{\right)}_{ℚ}\end{array}$

where $\left(-{\right)}_{p}^{\wedge }$ denotes $p$-completion and $\left(-{\right)}_{ℚ}$ denotes “rationalization” (inverting all primes). A similar theorem applies to any space $X$ (with technical conditions), yielding a pullback square

$\begin{array}{ccc}X& \to & \prod _{p}{X}_{\left(p\right)}\\ ↓& & ↓\\ {X}_{ℚ}& \to & \left(\prod _{p}{X}_{\left(p\right)}{\right)}_{ℚ}\end{array}$

where $\left(-{\right)}_{\left(p\right)}$ denotes localization at $p$.

Clearly, there is a formal resemblance to the pullback square involved in the gluing theorem. At this point I feel like I should be saying something about $\mathrm{Spec}\left(ℤ\right)$. Unfortunately, I don’t know what to say! Maybe some passing expert will enlighten us.

### Lubos Motl - string vacua and pheno

String theory = Bayesian inference?
The following paper by Jonathan Heckman of Harvard is either wrong, or trivial, or revolutionary:
Statistical Inference and String Theory
I don't understand it so far but Jonathan claims that one may derive the equations of general relativity – and, in fact, the equations of string theory – from something as general as Bayesian inference by a collective of agents.

It sounds really bizarre because Bayesian inference seems to be a totally generic framework that may be applied anywhere and that says nothing further about "what the theories should look like", while general relativity and string theory are completely rigid, specific, well-defined theories. How could they be equivalent?

Jonathan considers a collective of agents who are ordered along a $$d$$-dimensional grid. Each of them tries to reconstruct the probability distribution for the events that they observe experimentally. Collectively, these distributions define an embedding of one manifold in another, and Jonathan rather quickly states that various conditional probabilities we know from Bayesian inference may be written as Feynman path integrals with actions that include $$\sqrt{\det G}$$, $$\sqrt{\det h}$$, and similar things!

Again, I don't understand it so far but needless to say, a proof that string theory is the same thing as rational thinking – and not just a subset of rational thinking – would be extraordinarily important. ;-) I will keep on reading it.

## May 16, 2013

### astrobites - astro-ph reader's digest

Using General Relativity to Measure Properties of Binary Pulsars

Title: A Shapiro delay detection in the binary system hosting the millisecond pulsar PSRJ1910-5959A
Authors: A. Corongiu, M. Burgay, A. Possenti et al.
First Author’s Institution: INAF (National Institute for Astrophysics in Italy)

Some History
Shapiro time delays are one of the four tests of general relativity possible in the solar system. Because mass curves spacetime, light traveling close to a massive object must take a longer path to reach a target than if spacetime were flat, as this video and animation show. Irwin Shapiro was the first to test this phenomenon by bouncing radar signals off Venus and Mercury in the 1960s. The time delay for these signals was only about 200 microseconds.

This paper measures a Shapiro delay in a binary pulsar system called PSRJ1910-5959A. This pulsar has been studied previously, but the results here include more data, which allows for a more refined analysis. (See here and here for previous Astrobites posts on pulsars.) This pulsar has a spin period (the time it takes the pulsar to rotate once about its axis) of 3.27 ms. The companion star to the pulsar is a helium white dwarf, as determined by independent spectroscopic observations with the ESO Very Large Telescope and the Hubble Space Telescope. The white dwarf orbits the pulsar with an orbital period of 0.84 days. Though this pulsar appears on the sky to be part of the globular cluster NGC 6752, it is a matter of debate whether this is actually true or just a chance alignment. If the pulsar is part of the globular cluster, this represents the first time a Shapiro delay has been detected for a pulsar in a globular cluster, and it offers important insights into the history of the cluster.

As the white dwarf passes between our line of sight and the pulsar, there is a slight delay in the pulses from the pulsar. Since pulsars pulse so regularly, any irregularity is a sign that something interesting is happening. This delay is on the order of microseconds; it took observations spanning 10 years to detect it. Finding a Shapiro delay is exciting because it allows for very tight constraints on the mass of the companion star and the pulsar, as well as the inclination of the system.
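
The size of the effect is easy to estimate with the standard low-eccentricity Shapiro-delay formula, $\Delta t(\Phi) = -2\,(G m_c/c^3)\,\ln(1 - \sin i \sin\Phi)$, where $\Phi$ is the orbital phase measured from the ascending node. The sketch below (mine, not the paper's full DD fit) plugs in numbers like those quoted later in this post.

```python
import numpy as np

T_SUN = 4.925490947e-6        # G * M_sun / c**3, in seconds
m_c = 0.180                   # companion (white dwarf) mass, in solar masses
incl = np.radians(88.0)       # orbital inclination

phase = np.linspace(0.0, 1.0, 2001)   # orbital phase from the ascending node
delay = -2.0 * T_SUN * m_c * np.log(1.0 - np.sin(incl) * np.sin(2 * np.pi * phase))

print(f"peak extra delay ~ {1e6 * delay.max():.1f} microseconds")
# roughly 13 microseconds at superior conjunction; most of this is absorbed into
# the fitted orbit, leaving only the higher harmonics (a few microseconds) to detect
```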

How They Did It
To detect this delay, the research team used the 64m Parkes Radio Telescope located in Australia. For over 10 years, they regularly monitored this pulsar to detect times of arrival for the pulses. To accurately time a pulsar, astronomers fold the data on itself at the pulse period to increase the signal to noise. This yielded the team ∼1000 usable pulse timings. Check out this post for more details on the methods radio astronomers use to measure pulsar timing.
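
For concreteness, here is a minimal sketch of the folding step with hypothetical inputs (time stamps and intensities; this is not the authors' actual pipeline, which works with full pulse profiles and a timing model):

```python
import numpy as np

def fold(times, intensities, period, n_bins=128):
    """Fold a time series at a known pulse period: reduce each time stamp modulo
    the period, bin in pulse phase, and average, so the pulses stack up and the
    signal-to-noise ratio grows with the amount of data."""
    phase = np.mod(times, period) / period               # pulse phase in [0, 1)
    bins = np.minimum((phase * n_bins).astype(int), n_bins - 1)
    profile = np.bincount(bins, weights=intensities, minlength=n_bins)
    counts = np.bincount(bins, minlength=n_bins)
    return profile / np.maximum(counts, 1)               # mean intensity per phase bin
```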

The research team used a model called the DD binary model to precisely measure the expected time of arrival for each pulse and the residuals for each detected pulse, the amount each pulse varies from the best fit. The DD binary model includes two parameters called the range and the shape that are related to the companion mass (the white dwarf) and the orbital inclination of the system. Check out Table 1 in the paper to see all the parameters that were measured or derived for this fit, and specifically note how amazing it is that pulsar periods can be measured to thirteen places past the decimal point!

To detect the Shapiro delay, a fit to the residuals is then determined, shown in the figure to the left. The team first find the best fit to all the parameters of the model, then set the companion mass to 0 and the orbital inclination to 90 degrees; the remaining residuals can be seen in the top left of the figure. Binning and averaging the results reveals an obvious harmonic in these data (bottom left). By fitting again and removing the parameters related to the Shapiro delay, they form the plots on the right side of the figure. Binning again brings out another harmonic, called the third harmonic, seen in the bottom right. Placing the binary companion in an elliptical orbit can explain the first harmonic, but the third harmonic can only be due to a Shapiro delay present in the data. The solid line in the figure shows the theoretical prediction of the harmonics, which matches the data well.

Once the Shapiro delay was determined, the team used these results to determine the inclination of the system and the mass of the white dwarf. They found a companion mass of 0.180 ± 0.018 M☉ and an inclination of at least 88 degrees. Recall that an inclination of 90 degrees is defined as a perfectly edge-on orbit. The mass of the pulsar can then be determined, yielding 1.33 ± 0.11 M☉. It is interesting to compare these results to those presented in previous papers that used photometric and spectroscopic data to determine the inclination and companion mass. The results are consistent, which gives credence to both methods as ways of determining these parameters.

Other Thoughts
The proper motion of this pulsar must be measured to determine once and for all if it is part of the globular cluster. If it is, this system will prove very useful in understanding mass-radius relationships for helium white dwarfs. It is difficult to determine the mass and the radius of white dwarfs using optical observations. Doing so requires white dwarf spectral models to estimate the surface gravity and effective temperature, then infer the mass and radius. Observations that do not rely on these models are needed so we can understand the interaction of these fundamental properties better.

### John Baez - Azimuth

Quantum Techniques for Chemical Reaction Networks

The summer before last, I invited Brendan Fong to Singapore to work with me on my new ‘network theory’ project. He quickly came up with a nice new proof of a result about mathematical chemistry. We blogged about it, and I added it to my book, but then he became a grad student at Oxford and got distracted by other kinds of networks—namely, Bayesian networks.

So, we’ve just now finally written up this result as a self-contained paper:

• John Baez and Brendan Fong, Quantum techniques for studying equilibria in chemical reaction networks.

Check it out and let us know if you spot mistakes or stuff that’s not clear!

The idea, in brief, is to use math from quantum field theory to give a somewhat new proof of the Anderson–Craciun–Kurtz theorem.

This remarkable result says that in many cases, we can start with an equilibrium solution of the ‘rate equation’, which describes the behavior of chemical reactions in a deterministic way in the limit of a large number of molecules, and get an equilibrium solution of the ‘master equation’, which describes chemical reactions probabilistically for any number of molecules.

The trick, in our approach, is to start with a chemical reaction network, which is something like this:

and use it to write down a Hamiltonian describing the time evolution of the probability that you have various numbers of each kind of molecule: A, B, C, D, E, … Using ideas from quantum mechanics, we can write this Hamiltonian in terms of annihilation and creation operators—even though our problem involves probability theory, not quantum mechanics! Then we can write down the equilibrium solution as a ‘coherent state’. In quantum mechanics, that’s a quantum state that approximates a classical one as well as possible.
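To give a feel for the construction, here is a minimal single-species illustration of my own (not an example worked in the paper). Suppose a species $X$ is created at rate $\alpha$ ($\varnothing \to X$) and destroyed at rate $\beta$ ($X \to \varnothing$). Encoding the probability of having $n$ molecules in a formal power series $\Psi = \sum_n \psi_n z^n$, with annihilation and creation operators $a = \partial_z$ and $a^\dagger = z$, the master equation is $\frac{d}{dt}\Psi = H\Psi$ with

$$H = \alpha\,(a^\dagger - 1) + \beta\,(a - a^\dagger a).$$

The rate equation $\dot{x} = \alpha - \beta x$ has the equilibrium $c = \alpha/\beta$, and the corresponding coherent state $\Psi_c = e^{c(z-1)}$ (a Poisson distribution with mean $c$) satisfies $a\Psi_c = c\,\Psi_c$, so

$$H\Psi_c = (z - 1)(\alpha - \beta c)\,\Psi_c = 0.$$

The classical equilibrium thus hands us an equilibrium of the master equation, which is the pattern the Anderson–Craciun–Kurtz theorem establishes in general for complex balanced equilibria.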

All this is part of a larger plan to take tricks from quantum mechanics and apply them to ‘stochastic mechanics’, simply by working with real numbers representing probabilities instead of complex numbers representing amplitudes!

I should add that Brendan’s work on Bayesian networks is also very cool, and I plan to talk about it here and even work it into the grand network theory project I have in mind. But this may take quite a long time, so for now you should read his paper:

• Brendan Fong, Causal theories: a categorical perspective on Bayesian networks.

### Emily Lakdawalla - The Planetary Society Blog

Connecting scientist mentors with students who have the desire to learn
Caleph Wilson provides examples and guidance to scientists wishing to mentor students in science, technology, engineering, and math outreach programs.

### Emily Lakdawalla - The Planetary Society Blog

Brief update with good news on Kiera Wilmot
Two weeks ago I wrote about Kiera Wilmot, a teen girl who was expelled from her school and charged with two felonies for unsupervised messing around with a chemical reaction on school grounds. Yesterday the Orlando Sentinel reported that no charges are being filed against her, which removes the greatest threat to her future.

### The Great Beyond - Nature blog

US Senate approves Moniz for energy post and advances EPA nominee

US President Barack Obama’s science team gained a new member on 16 May as the Senate confirmed physicist Ernest Moniz as head of the Department of Energy. Lawmakers also voted to advance the nomination of Gina McCarthy, Obama’s choice to lead the Environmental Protection Agency (EPA).

The unanimous vote to approve Moniz, director of the Energy Initiative at the Massachusetts Institute of Technology in Cambridge, came after Senator Lindsey Graham (Republican, South Carolina) withdrew his objection to the nomination. Graham had blocked the full Senate from voting on Moniz for nearly a month, citing a White House proposal to cut US$200 million in funding for a plutonium-processing plant in South Carolina. Moniz, who now takes the helm of a sprawling agency with an annual budget of roughly $27 billion, is no stranger to Washington DC. He served as an associate director of the White House Office of Science and Technology Policy under president Bill Clinton, and for the past four years he has served on the President’s Council of Advisors on Science and Technology.

Known for his strong support of natural gas and nuclear power, Moniz replaces fellow physicist Steven Chu, who left the energy department in April for a post at Stanford University in Palo Alto, California.

Meanwhile, the Senate Committee on Environment and Public Works approved McCarthy’s nomination by a 10–8 vote, clearing the way for a confirmation vote by the full Senate.

Unlike Moniz, who has received broad support from both major political parties, McCarthy — who currently heads the EPA’s air-quality office — has faced strong opposition from Republicans. They boycotted a scheduled committee vote on her nomination last week, effectively blocking the process. The highest ranking Republican on the Committee on Environment and Public Works, David Vitter (Louisiana), said that he and his colleagues attended Thursday’s vote because the EPA had agreed to address Republican concerns about the agency’s policies on information access and transparency.

But McCarthy’s path to consideration by the full Senate is not yet clear: another Republican senator, Roy Blunt (Missouri), is blocking a final vote on McCarthy’s nomination until the Obama administration provides more details about a plan to install new pumping stations along the Mississippi River in southern Missouri.

### The Great Beyond - Nature blog

Scientists join journal editors to fight impact-factor abuse

If enough eminent people stand together to condemn a controversial practice, will that make it stop?

That’s what more than 150 scientists and 75 science organizations are hoping for today, with a joint statement called the San Francisco Declaration on Research Assessment (DORA). It deplores the way some metrics — especially the notorious Journal Impact Factor (JIF) — are misused as quick and dirty assessments of scientists’ performance and the quality of their research papers.

“There is a pressing need to improve the ways in which the output of scientific research is evaluated,” DORA says.

Scientists routinely rant that funding agencies and institutions judge them by the impact factor of the journal they publish in — rather than by the work they actually do. The metric was introduced in 1963 to help libraries judge which journals to buy (it measures the number of citations the average paper in a journal has received over the past two years). But it bears little relation to the citations any one article is likely to receive, because only a few articles in a journal receive most of the citations. Focus on the JIF has changed scientists’ incentives, leading them to be rewarded for getting into high-impact publications rather than for doing good science.
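For concreteness, a journal's impact factor for a given year is computed roughly as follows (the standard definition, paraphrased here rather than quoted from DORA):

$$\mathrm{JIF}_{2012} \;=\; \frac{\text{citations received in 2012 by items the journal published in 2010–2011}}{\text{number of citable items the journal published in 2010–2011}}.$$

A handful of highly cited papers can dominate the numerator, which is exactly why this average says little about any individual article.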

“We, the scientific community, are to blame — we created this mess, this perception that if you don’t publish in Cell, Nature or Science, you won’t get a job,” says Stefano Bertuzzi, executive director of the American Society for Cell Biology (ASCB), who coordinated DORA after talks at the ASCB’s annual meeting last year. “The time is right for the scientific community to take control of this issue,” he says. Science and eLife also ran editorials on the subject today.

It has all been said before, of course. Research assessment “rests too heavily on the inflated status of the impact factor”, a Nature editorial noted in 2005; or as structural biologist Stephen Curry of Imperial College London put it in a recent blog post: “I am sick of impact factors and so is science”.

Even the company that creates the impact factor, Thomson Reuters, has issued advice that it does not measure the quality of an individual article in a journal, but rather correlates to the journal’s reputation in its field. (In response to DORA, Thomson Reuters notes that it’s the abuse of the JIF that is the problem, not the metric itself.)

But Bertuzzi says: “The goal is to show that the community is tired of this. Hopefully this will be a cultural change.” It’s notable that those signing DORA are almost all from US or European institutions, even though the ASCB has a website where anyone can sign the declaration.

(Nature Publishing Group, which publishes this blog, has not signed DORA: Nature’s editor-in-chief, Philip Campbell, said that the group’s journals had published many editorials critical of excesses in the use of JIFs, “but the draft statement contained many specific elements, some of which were too sweeping for me or my colleagues to sign up to”.)

DORA makes 18 recommendations to funders, institutions, researchers, publishers and suppliers of metrics. Broadly, these involve phasing out journal-level metrics in favour of article-level ones, being transparent and straightforward about metric assessments and judging by scientific content rather than publication metrics where possible.

The report does include a few contentious ideas: one, for example, suggests that organizations that supply metrics should “provide the data under a licence that allows unrestricted reuse, and provide computational access to the data”.

Thomson Reuters sells its Journal of Citation Reports (JCR) as a paid subscription and doesn’t allow unrestricted reuse of data, although the company notes in response that many individual researchers use the data with the firm’s permission to analyse JCR metrics. “It would be optimal to have a system which the scientific community can use,” says Bertuzzi cautiously when asked about this.

And Bertuzzi acknowledges that journals have different levels of prestige, meaning an element of stereotypical judgement based on where you publish would arise even if the JIF were not misused. But scientists should be able to consider which journal suits the community they want to reach, rather than thinking “let’s start from the top [impact-factor] journal and work our way down,” he says. “The best of all possible outcomes would be a cultural change where papers are evaluated for their own scientific merit.”

### Symmetrybreaking - Fermilab/SLAC

Moniz confirmed as Energy Secretary

The US Senate has unanimously confirmed MIT physics professor Ernest Moniz as the next Secretary of Energy.

Ernest Moniz, an MIT physics professor with extensive experience with particle accelerators and national energy policies, has been confirmed in a unanimous vote by the US Senate as the next Secretary of Energy.

The Department of Energy is the single largest supporter of particle physics, and of basic research in the physical sciences, in the United States.

### Tommaso Dorigo - Scientificblogging

Higgs Decays To B-Quarks From CMS
Finally the decay of Higgs bosons to b-quark pairs is emerging from LHC data, too.

### Matt Strassler - Of Particular Significance

Possible Important Discovery at IceCube

IceCube, the big high-energy neutrino experiment cleverly embedded into the ice at the South Pole, announced a very interesting result yesterday, following on an already interesting result from a few weeks ago, one that I failed to cover properly. They have seen the highest-energy neutrinos ever observed, ones that, unlike previously observed high-energy neutrinos, appear not to be generated by cosmic rays hitting the top of the atmosphere. Instead, they apparently come from new sources far out in space. And as such, it tentatively appears that they’ve opened up, as long anticipated, a new era in neutrino astronomy, in which high-energy neutrinos will be used to understand astrophysical phenomena!

[The only previous example of neutrinos being used in astrophysics occurred with the discovery of neutrinos from the relatively nearby supernova, visible with the naked eye, that occurred in 1987. But those neutrinos had energies millions of times smaller than the ones discussed here.  And there was hope that IceCube might see neutrinos specifically from gamma-ray bursts, including the one that occurred just two weeks ago; but that appears not to have happened.]

I don’t understand certain details well enough yet to give you a careful explanation — that will probably come next week — but here’s an early description (and expert readers are strongly encouraged to correct any errors.)

At present, there are various sources, one known and others suspected, of high-energy neutrinos (and anti-neutrinos) coming from the sky, as illustrated in Figure 1, taken from the IceCube talk that announced the result.

1. When cosmic rays (mostly high-energy protons and some atomic nuclei created in natural particle accelerators in outer space) hit atoms in the atmosphere, they produce showers of hadrons, some of which are pions and kaons. Some of these in turn decay to muons (and anti-muons) and neutrinos (and anti-neutrinos). These “atmospheric neutrinos” take a wide range of energies, and (just like the cosmic rays that make them) become increasingly rare the higher-energy you go, the number falling like 1/(energy)^3.7. They should be detectable by IceCube out to energies of a million GeV or so (the black curve in Figure 1), and in the early days of IceCube were already detected out to about 300,000 GeV (the blue dots in Figure 1).
2. The cosmic rays give a second source of neutrinos that may be observable around a hundred thousand to a million GeV, from the production of charm quarks, which can create a small number of neutrinos that fall off more slowly with energy than do the neutrinos from other hadrons. One prediction for how many “prompt atmospheric” or “charm atmospheric” neutrinos should be present is the red curve in Figure 1.

Fig. 1: The number of neutrinos (and anti-neutrinos), multiplied by their energy-squared (to make the plot easier to read but harder to interpret), per unit angular area on the sky, versus the energy of those neutrinos. Older data from IceCube is the blue dots. Predictions for four different sources of neutrinos (see text) are given by the four curves. Note the green line for astrophysical neutrinos could in fact be lower than shown. 1 TeV = 1000 GeV.  Plot taken from the IceCube talk.

3. Neutrinos are also produced when the very highest-energy cosmic rays, above a certain energy, collide with photons from the cosmic microwave background (mainly through the process proton + photon –> “Delta” [an excited version of the proton] –> neutron + pion, followed by pion –> anti-muon + neutrino, and then anti-muon –> anti-electron + neutrino + anti-neutrino). These are called “GZK neutrinos” or “cosmogenic neutrinos”. Since the number and energy of the highest-energy cosmic rays are roughly measured, the number and energy of these GZK neutrinos can be roughly predicted.
4. High-energy “astrophysical neutrinos” produced directly inside extremely energetic astrophysical objects, perhaps including the objects that make gamma-ray bursts. Since little is known about what objects are out there and how they work, the only clear thing that can be said about these neutrinos is that there can’t be too many of them (or we’d see more high-energy cosmic rays than we do). It is expected that the number of neutrinos from such sources will decrease as 1/(energy)^2; since the plot in Figure 1 shows not the number of neutrinos but the number of neutrinos times their energy-squared, the (very rough) prediction from astrophysical sources is a flat green line in Figure 1 (see the short numerical sketch after this list). We don’t know how many of these neutrinos to expect, so the location of that line is uncertain: it cannot be higher than shown, but it could well be lower.
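To see why plotting the flux multiplied by energy-squared (as in Figure 1) makes the different sources easy to compare, here is a toy numerical comparison of my own, with made-up normalizations:

```python
import numpy as np

# Toy comparison (arbitrary normalizations): an atmospheric-like spectrum
# falling as E^-3.7 versus an astrophysical-like spectrum falling as E^-2.
E = np.logspace(3, 7, 5)                  # neutrino energy in GeV
atmospheric = 1.0 * (E / 1e3) ** -3.7     # steeply falling flux (arbitrary units)
astrophysical = 1e-4 * (E / 1e3) ** -2.0  # much rarer at low energy

# Multiplying by E^2 flattens the E^-2 component into a horizontal line,
# while the atmospheric component keeps plunging; above some energy the
# astrophysical neutrinos inevitably dominate.
for e, atm, ast in zip(E, atmospheric * E**2, astrophysical * E**2):
    print(f"E = {e:10.0f} GeV   E^2*atmospheric = {atm:10.3e}   E^2*astrophysical = {ast:10.3e}")
```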

Recently, in their data from 2010-2012, IceCube reported, in a pre-publication paper that appeared a few weeks ago, that they observed two neutrinos with energies of about one million GeV. [For some reason I don't know, they amusingly decided to call these neutrinos Bert and Ernie.] These are unusually energetic for atmospheric neutrinos, yet not energetic enough to be GZK neutrinos. This makes it likely (but not certain) that they are from new astronomical sources! But with just two events, it’s hard to say anything else about them.  Until yesterday.

Yesterday, IceCube reported that, by using a technique that reduces the number of atmospheric neutrinos in their data, they were able to look for neutrinos from other sources at somewhat lower energies. They expected something like 10 events (more precisely 10.6 +4.5/−3.9, or, including the charm atmospheric neutrinos in some model [??], 12.1 ± 3.4) — about 5 from atmospheric neutrinos and 6 from muons from cosmic rays that give fake signals of neutrinos. But, as shown in Figure 2, they observed 28, including the two I mentioned in the previous paragraph. (To be clear, this means that 10 to 20 of them are probably neither atmospheric neutrinos nor fakes.) This is strong evidence (4.3 standard deviations) that IceCube is observing neutrinos that are not atmospheric… but they aren’t GZK neutrinos either.
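As a very rough cross-check of that significance (my own back-of-the-envelope estimate, not the collaboration's likelihood analysis, which also accounts for the uncertainty on the background), one can ask how unlikely 28 or more events would be for a Poisson background of the quoted size:

```python
from scipy.stats import poisson, norm

# Naive Poisson estimate: probability of seeing 28 or more events given the
# quoted expected backgrounds, converted to a one-sided Gaussian "sigma".
observed = 28
for label, mu in [("atmospheric + fakes", 10.6), ("including charm", 12.1)]:
    p_value = poisson.sf(observed - 1, mu)   # P(N >= 28 | mean = mu)
    sigma = norm.isf(p_value)                # one-sided Gaussian equivalent
    print(f"{label:>22s}: mu = {mu:4.1f}  p ~ {p_value:.1e}  ~{sigma:.1f} sigma")

# These naive numbers land in the same ballpark as the quoted 4.3 standard deviations.
```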

Fig. 2: The 28 observed neutrino candidates, as a function of their energy in TeV (1 TeV = 1000 GeV) and their angle relative to the horizon. Some of these are probably atmospheric neutrinos and processes that give fake neutrinos, but many are apparently real neutrinos from a new source. Note the two highest energy neutrinos (“Bert” and “Ernie”) are out at 1000 TeV = 1 million GeV, while the next highest-energy neutrino is at 300 TeV; I suspect the gap is probably just a statistical fluke that will go away when more data is collected.

What are these things? Are they astrophysical neutrinos, from some new, unknown class of sources? Well, it’s hard to say with so few of these neutrinos observed so far. On the one hand,

• They have some features expected from astrophysical neutrinos… they are consistent with coming uniformly across the sky (though with so few neutrinos it’s hard to tell), and their numbers appear to decrease with energy more slowly than atmospheric neutrinos do across the range of 20,000 – 1,200,000 GeV.
• There’s no sign that these neutrinos are associated with other particles simultaneously coming out of the sky, as would be expected for overhead cosmic rays that make atmospheric neutrinos.
• And, unlike the atmospheric neutrinos which are more often muon-neutrinos and muon-antineutrinos, these neutrinos seem to be more evenly distributed among the three types of neutrinos and antineutrinos.

But on the other hand, there are some possible challenges for this interpretation.

• If their numbers really decrease as 1/(energy)^2, as naively expected for most astrophysical sources, then IceCube should also have seen some additional neutrinos (something like five to ten of them) well above 1,000,000 GeV.
• Moreover, these neutrinos (which should have traveled straight across space from their source) don’t point back toward any known object (such as an active galaxy or a recent gamma-ray burst) so we don’t have any way to know what type of object may be producing them.

In short, these neutrinos appear to be from a new, unidentified, and perhaps unexpected type of source!

We must remain somewhat cautious about any new result that comes from a single experiment and involves so few neutrinos.  But if IceCube’s result continues to hold up with more data and is confirmed by other similar neutrino experiments, or if in future this class of neutrinos can be linked with specific astrophysical objects, I suspect it will be seen as a major discovery — one that opens up the era of neutrino astronomy, and whose implications can today only be guessed at.

Filed under: Astronomy, Particle Physics Tagged: astronomy, neutrinos

### ZapperZ - Physics and Physicists

You Can Teach Yourself To Think Like A Scientist - Part 3
{You Can Teach Yourself To Think Like A Scientist - Part 2}

This entry deals with two separate issues, but both are related to the same 'event'.

In Part 2, I described the technique of going back to the central, generalized principle. People often state the reasons for their actions or decisions as a matter of abiding by some general principle. Identifying that general principle is crucial, because it often clarifies the boundary of the argument, and one can also use it as a counter-argument if the principle is not applied consistently.

In this part, I will attempt to show a specific example, and application, of this technique. Furthermore, I will also use the example to change the subject a bit (thus, the two separate issues) and presumptuously tell you how you should elect your political representatives. Yes, I know how pompous that sounds.

Let's start with the first part, which is applying the technique of investigating the generalized principle. During the height of the last US Presidential election, Senator Marco Rubio of Florida was, at some point, considered as a potential vice presidential candidate for the Republican party. He wasn't chosen, of course, but he is still in the US Senate, so who he is and what he stands for are still relevant. During this period of intense political activity, GQ magazine interviewed Senator Rubio. One of the questions asked caught my attention:

GQ: How old do you think the Earth is?

Marco Rubio: I'm not a scientist, man. I can tell you what recorded history says, I can tell you what the Bible says, but I think that's a dispute amongst theologians and I think it has nothing to do with the gross domestic product or economic growth of the United States. I think the age of the universe has zero to do with how our economy is going to grow. I'm not a scientist. I don't think I'm qualified to answer a question like that. At the end of the day, I think there are multiple theories out there on how the universe was created and I think this is a country where people should have the opportunity to teach them all. I think parents should be able to teach their kids what their faith says, what science says. Whether the Earth was created in 7 days, or 7 actual eras, I'm not sure we'll ever be able to answer that. It's one of the great mysteries.

OK, so before I apply the "look for the general principle" method, let's get this very clear. If he is referring to Science, there are NO multiple theories on the age of the universe, and there is no issue at all on the age of the Earth. While there may be some uncertainty in the EXACT age (as is the case whenever we produce numbers for a quantity such as this), we certainly are NOT confusing 6,000 years with 4.5 billion years! We do not make errors of that magnitude, and there's nothing to suggest that we are off by that much. It is not a great mystery.

So now, let's get back to applying the general principle argument. Reading his response, what kind of "general principle" is he abiding by? I can see at least a couple: (i) he lives by the principle that if he isn't qualified in something, then he has no answers to questions in that area, nor does he hold a strong enough opinion about it to answer such questions; and (ii) if an issue isn't related to our "economic growth", he isn't interested in it or does not think it is important enough to deserve an answer.

OK so far? Did you see anything else that we can extract as his overriding general principle?

So now, as we did in Part 2, let's adopt these two principles, and see what consequences they lead us to.

1. No qualification or expertise in an area, so don't have any answers, or won't answer, or don't have any strong opinion.

Now, this is strange. Senator Rubio has a law degree (like many politicians in the US). So his area of expertise is actually rather narrow. Does that mean that he only has an opinion in the area of law and nothing else? Does that mean that he won't answer questions about other issues, or can't make a decision on other issues? After all, he decides on stuff related to the US economy all the time. Is he claiming that he is an expert on various economic theories, ideas, principles, etc.? When he votes on the various bills and legislation, he obviously has opinions on those to arrive at his decisions. Is he then an expert in those areas?

Of course, things don't work that way. Politicians have staffers who are supposed to do the dirty work and research things. At some point, they also have people who advise them on issues. I'll deal with this in more detail later on. However, in this part, I'm pointing out the absurdity of not answering the question simply because he claims he is not an expert (not a scientist) on that question. Yet other questions in which, presumably, he has no academic expertise do get answered. This is another example of selective application of a general principle.

2. Not interested in issues unrelated to the "economic growth".

Applying this principle, we would expect Senator Rubio to abstain from voting on issues such as gay marriage. After all, what possible significant "economic growth" impact can that have? So has he disqualified himself from dealing with such issues throughout his political career?

The inconsistent application of the general principle is very common, especially in politics. People justify their actions by appealing to some general principle that they live by. When you understand what that principle is and state it in its direct form, you can then apply it, and see how, in many instances, they ignore that principle. As I had mentioned in Part 2, this means that there is often a more overriding principle that they are not stating, or trying to hide.

So now comes the related but separate issue. I mentioned in #1 that politicians have to decide on a lot of issues, and practically all of them are outside their area of expertise. This is where it matters how they decide whose opinions to listen to. Sen. Rubio may not be a scientist, but does he listen to the consensus of scientists regarding the age of the earth? He appears to know about the biblical age of the universe, so why didn't he say "I'm not a theologian. I'm not qualified to answer that question"? He didn't say that. Instead, he qualified that he's not a scientist. Does that mean that he will accept the opinions of scientists, even if they contradict his biblical understanding? After all, he is implying that to be able to answer such a question, one needs to be a scientist.

There is also a puzzling effect here if one examines this closely. There are things we expect almost everyone to know, not because they are "experts" in such-and-such a field, but because, as a citizen of the world and of a particular country, there is a certain level of knowledge that everyone is expected to have. What if I asked Sen. Rubio to point out on a map the location of Washington DC, or Afghanistan? Is he going to say he can't answer because he's not an expert in geography? There are just things that we expect people to know. Sen. Rubio may not be a scientist, and he may not know the exact scientific consensus on the age of the earth, but he should be AWARE of the orders of magnitude involved, and of the huge discrepancy between that and his biblical understanding. Maybe he was afraid that the interviewer would ask him how he deals with such a discrepancy, so he chose not to answer the question. Is this better than answering that he knows both the scientific and the biblical age of the earth, and is aware of the discrepancy? Personally, I prefer the latter. Simply refusing to answer the question by claiming that he's not an expert makes him appear ignorant of something a knowledgeable person should know. Do we want an ignorant person to be our political representative? I'd rather have someone who has the knowledge, and who is aware that there are discrepancies between what he "accepts" as part of his beliefs and what is accepted by experts in certain areas. It is like being an alcoholic: you have to be aware of the problem FIRST before you seek help. If you deny there is a problem, you won't get better. Ignorance is not bliss.

So how am I telling you how to elect your political representatives? First of all, I will immediately tell you that my suggestion will never work and will never take hold. Very few people will agree with this methodology because most people will NOT vote this way.

Most of us choose the political candidates we vote for based on their stand on various issues. Maybe there are one or two issues that we consider extremely important, and so we tend to prefer candidates who happen to hold the same opinion as us on those issues. We may overlook other, less important issues on which those candidates may or may not share our opinions. But what it boils down to is that we choose candidates based on their agreement with what we believe in or what we feel strongly about. In other words, we want someone who holds our opinion on certain matters.

I consider this to be a very poor way of electing a political official. When someone is elected to a political office, he/she is faced with many different scenarios, variations, events, etc. that often change over time. Markets crash, wars happen, disasters occur. What looked good during a political campaign may not look good now, especially in the climate of politics where you are dealing with other politicians, and with the progression of time and other events, even outside one's country or immediate area. Rigidly holding on to certain issues often does not work, and what ends up happening is that most politicians have to compromise to varying degrees to try to get the job done. This is why we then accuse them of "lying" to us, because they had to renege on their promises to do certain things. We tend to hold them to the items they promised, rather than hold them to doing their jobs, which is to take care of the country in the best way they know how.

So I propose that we elect politicians not based on what they believe or based on the compatibility of their opinions, but rather on their ABILITY TO THINK!

Now, think about it for a second. It is a revolutionary concept! :)

I want someone who has a rational and sensible way of thinking things through. I want someone who knows that when he/she doesn't know something, he/she will find reputable sources to learn about it. I want someone who has the analytical ability to know that he/she is using some general principle, and to be aware when he/she isn't being consistent with that principle. I want someone who has the analytical ability to analyze a problem, who knows where to seek knowledge and information, and who can then find a sensible solution. Nowhere in there is there any requirement that this person agrees with my opinion on this or that.

This elected person will be faced with a mountain of issues, and often things come up very unexpectedly. Many things occur that cannot be predicted. I want someone who has the ability to evaluate all of these, to analyze them systematically, to seek proper advice and sources, and then to arrive at a decision. I do not want someone who is stuck and rigid with a certain ideology while the rest of Rome is burning down around him/her. The inability to think through and rationalize a problem systematically means that the decisions that come out of this person may easily be flawed.

This is why I'd rather Sen. Rubio had said that he knows the scientific age of the universe, and is aware of the discrepancy between his Christian beliefs and the scientific facts. It would have shown that he is a man of knowledge, and that he is not ignorant. It would show that he is aware of the issues, and that this is something he hasn't reconciled yet. I'd rather have someone like that, who has obviously thought about things, than someone who ignores things but STILL has no qualms about making decisions based on things he/she doesn't know much about.

But of course, this will never happen.

:)

Zz.

Edit 5/16/2013: If we apply his own principle, it appears that Sen. Marco Rubio must be an expert in biometric scans, because he didn't hesitate to give his opinion on the matter:

Sen. Marco Rubio, a Gang of Eight member who voted for the amendment, expressed his disappointment after the senators rejected the proposal. “Immigration reform must include the best exit system possible because persons who overstay their authorized stay are a big reason we now have so many illegal immigrants,” his statement read. “Senator Rubio will fight to add biometrics to the exit system when the bill is amended on the Senate floor. Having an exit system that utilizes biometric information will help make sure that future visitors to the United States leave when they are supposed to.”

### ZapperZ - Physics and Physicists

From CERN To Goldman Sachs

This news article is describing the case of a CERN physicist being hired by Goldman Sachs, thus changing his career from high energy physics (presumably) to quantitative finance.

Ryan Buckingham, a particle physicist with a PhD from Oxford University, spent three and a half years at CERN before joining Goldman Sachs in London as an associate in the credit and mortgage structuring team earlier this month. He declined to speak to us and Goldman didn’t return our request for comment, but it seems that the path from CERN to investment banking is a well trodden one.

“CERN is the place to find top PhDs in physical sciences and computing,” said Dominic Connor, head of quantitative finance recruitment firm P&D Quant Recruitment. “Working at CERN is one step up from having any old PhD. There a lot of people who have doctoral degrees, but you know that if someone has worked at CERN they will be very good indeed.”

Buckingham isn’t the only CERN alumnus working in finance. Alexey Afonin, a vice president in strats and modelling at Morgan Stanley, used to work there too. So did Anne Richards, the chief investment officer at Aberdeen Asset Management. So did Nikolaos Prezas, a quantitative researcher at J.P. Morgan, and plenty of others. Most people seem to work at CERN early in their careers, and then move into finance.

Which is why I am puzzled that this latest "acquisition" by the financial world is making the news. Especially here in the US, where funding for high energy physics is so crappy, a lot of PhDs in this field have to look for employment elsewhere. Most of the people who work at CERN are not guaranteed long-term employment; postdocs, for example, don't get to stay for as long as they want. And with their statistical and computational analysis skills, it is not a surprise that the field of quantitative analysis would swallow these people up.

Zz.

### arXiv blog

Terahertz Image Reveals Goya's Hidden Signature in Old Master Painting

Darkened varnish obscures Goya’s signature in a 1771 masterpiece, according to a new analysis using terahertz waves

### Peter Coles - In the Dark

Proletarian Democracy Eurovision Song Contest Preview (Part 1)

The Eurovision Song Contest, cultural Marxism's flagship spectacle, is a highlight in every communist's calendar, or should be. We proudly present part 1 of the official Proletarian Democracy preview of all the entries. The following score system applies.

1: Austria - Natalia Kelly - Shine

When hurt is all you’re feeling, your heart is slowly bleeding
The only memories to hold on to…

As we approach the evening of interminable tedium that is the Eurovision Song Contest, it's refreshing to stumble across a blog post that reveals the competition's true political and cultural significance...

### Clifford V. Johnson - Asymptotia

Steinn has a nice post about the sudden ending of the Kepler mission, due to a crucial component failure. As he notes:
"Kepler has discovered almost 3,000 planetary candidates, of which about 100 have been confirmed through a variety of techniques, and, statistically, most of the rest are likely to be real planets. Kepler has not quite found earth like planets in the habitable zone, yet. It is heartbreakingly close to doing so."
Sad to see, especially at a time when science is being hurt so badly by continued [...]

## May 15, 2013

### The Great Beyond - Nature blog

Laser images hint at archaeological discoveries

CANCUN, Mexico — By bombarding a patch of the Honduran rainforest with laser pulses, archaeologists have discovered structures that could be a part of a lost city — or two.

In spring 2012, scientists from the National Center for Airborne Laser Mapping (NCALM), based at the University of Houston, loaded a plane with a state-of-the-art lidar system and took it down to Honduras. Lidar bounces billions of laser pulses off of the forest and measures the time they take to return. Though most of the pulses reflect off vegetation, some small fraction reaches the ground. Researchers can thus build up a map of the surface by mathematically stripping away the canopy of tree leaves (shown at right).
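To make the geometry concrete, here is a toy sketch of the basic idea (entirely illustrative numbers and a made-up gridding step, not NCALM's actual processing chain): each return time converts to a range, and keeping the farthest return per map cell is a crude way of "stripping away the canopy".

```python
import numpy as np

# Convert pulse round-trip times to ranges, then keep the last (farthest)
# return in each map cell as a rough estimate of the bare ground.
C = 299_792_458.0                      # speed of light [m/s]

# Hypothetical returns: (x, y, round_trip_time_in_seconds) for one laser spot
returns = np.array([
    [10.0, 20.0, 6.67e-6],             # canopy hit (closer, shorter round trip)
    [10.0, 20.0, 6.80e-6],             # ground hit (farther, longer round trip)
])

ranges = C * returns[:, 2] / 2.0       # one-way distance to each reflector [m]

cells = np.floor(returns[:, :2]).astype(int)   # 1 m grid cells
ground = {}
for cell, r in zip(map(tuple, cells), ranges):
    ground[cell] = max(ground.get(cell, 0.0), r)   # keep the last return
print(ground)
```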

Lidar has been used to calculate biomass in the Amazon and to hunt for extra structures at Stonehenge. In the dense forests of Central America, though, lidar “is like rewriting history,” says Christopher Fisher, an archaeologist at Colorado State University in Fort Collins. “We have just huge black holes on the map about which we know very little.”

The NCALM survey flew right over one of those black holes. Directed by Los Angeles filmmaker Steve Elkins, who is making a documentary about the project, the lidar plane visited four targets in the Mosquitia rainforest. All were possible locations of a long-sought ruin known as the Ciudad Blanca or White City.

“The White City is quite the legend in Honduras,” says NCALM scientist Juan Carlos Fernandez Diaz, a native Honduran. Explorers have sought this “lost city” for decades, although many archaeologists believe it may be a myth or perhaps an amalgam of other Mesoamerican cities.

In May 2012, the NCALM team presented its findings to the Honduran government. On 15 May 2013, at the American Geophysical Union Meeting of the Americas in Cancun, Elkins showed previously unreleased images from the survey. They include regularly spaced mounds and other linear features that make up at least two Mesoamerican cities, says Fisher. (Shown, at right, are features at the site known as T3; the rectangular shape in the lower centre is approximately 50 metres long.)

“We’re trying to identify the densest amount of features so we can go there and look at them,” says Stephen Leisz, a geographer at Colorado State. Elkins is planning to helicopter into the area in November to target places to send archaeologists on the ground.

Lidar surveys are expensive: The team, funded by filmmaker Bill Benenson, has already spent close to half a million dollars on the Honduran project. And there’s no guarantee what the team might find when they do visit; they don’t even know the age of the potential structures they have spotted. The group is keeping the location a secret.

But Fisher, at least, has reason to hope that lidar surveys might become a common tool for archaeologists. He worked for years at the Mesoamerican city of Angamuco, in central Mexico. He eventually coughed up $38,000 for a lidar survey of 9 square kilometres, and it revealed more than 20,000 architectural features in the city’s urban core. They include a pyramid that Fisher had missed by just 10 metres on a previous ground survey, when he walked right past the jungle-covered structure.

Images: UTL Scientific, LLC

### ZapperZ - Physics and Physicists

Neil deGrasse Tyson Prefers Star Trek Over Star Wars

Hey, you can't win 'em all. Famous astrophysicist Neil deGrasse Tyson, in an interview, indicates that he prefers "Star Trek" over "Star Wars" because, in his own words, Star Wars "... made no attempt to portray real physics. At all...."

Don't shoot me, I'm only the messenger. You can read and hear the rest of the interview at the link above.

Zz.

### astrobites - astro-ph reader's digest

Mysterious Gas Clouds between M31 and M33

Title: Discrete clouds of neutral gas between the galaxies M31 and M33
Nature, May 9, 2013
Authors: Spencer A. Wolfe, D. J. Pisano, Felix J. Lockman, Stacy S. McGaugh & Edward J. Shaya
First author's institution: Department of Physics, West Virginia University

Figure 1 – Artist's conception of the region between M31 and M33, with an image of the new high-resolution observations of the clouds between the two galaxies (inside box).

Astronomers recently found seven clouds of neutral hydrogen gas (HI – "H" and the Roman numeral one) spread out between the galaxies M31 and M33. Could these clouds have condensed around dark matter-rich filaments, or are they leftover gas strewn across intergalactic space by a galaxy interaction that occurred billions of years ago? Wolfe et al. use new high-resolution radio observations from the Green Bank Telescope (GBT) to sort out the origin of these mysterious clouds.

The presence of neutral hydrogen gas in the region between M31 and M33 was confirmed last year with the GBT by Lockman et al. (2012). The velocity of the gas is similar to the systemic velocities of M31 and M33, confirming that it is not Milky Way gas. But the sensitivity of these initial observations was not very high; longer integration times were needed to get sensitive, high-resolution images of the gas. The high resolution determines whether the gas is diffuse or clumpy, which is important for determining its origin – intergalactic filament or debris from a tidal interaction between the two galaxies.

Intergalactic filaments between galaxies can serve as a bridge to funnel gas into galaxies. This has been proposed as a mechanism to fuel further star formation in spiral galaxies for a few more billion years from the gas in the intergalactic medium. However, the gas seen in the space between the galaxies could instead have come from a tidal interaction: when M31 and M33 came much closer together a few billion years ago, the gravitational force of the two galaxies could have stretched gaseous material between them into a tidal tail.

Figure 2 – Illustration of the spin-flip transition that gives rise to the 21 cm line.

The HI observations of the clouds were made using the 21 cm line of neutral hydrogen. This line arises from the spin-alignment transition in the hydrogen ground state.
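As a quick sanity check on those numbers (my own arithmetic, not from the paper): the hyperfine splitting of the hydrogen ground state is about $5.9 \times 10^{-6}$ eV, which indeed corresponds to a photon of roughly 21 cm wavelength, or about 1420 MHz:

$$\lambda = \frac{hc}{\Delta E} \approx \frac{1239.8\ \mathrm{eV\,nm}}{5.9\times10^{-6}\ \mathrm{eV}} \approx 2.1\times10^{8}\ \mathrm{nm} \approx 21\ \mathrm{cm}, \qquad \nu = \frac{c}{\lambda} \approx 1420\ \mathrm{MHz}.$$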
As illustrated in Figure 2, when the spin alignment of the proton and the electron flips from parallel to anti-parallel, the atom emits a photon corresponding to the small change in energy states – one with a wavelength of 21 cm. The 21 cm line is extremely useful for mapping hydrogen gas, since the symmetric H2 molecule does not emit strongly in the radio.

Wolfe et al. observed the region of HI gas again with the GBT, this time with higher sensitivity and higher resolution. They found that the HI gas forms seven distinct dense clumps (see Figure 3). About 50% of the HI in the region is in the clouds. The clouds are about the size of dwarf galaxies; however, there are no stellar overdensities in the region, so they are not thought to be dwarf galaxies.

Figure 3 – Map of the 21 cm emission detected by the GBT between M31 and M33. Six of the seven clouds are visible in this image (labeled numerically); the seventh is visible when the data are smoothed to a lower resolution. The directions to M31 and M33 are marked by the arrows.

Figure 4 – This position-velocity plot shows the angular distance from M31 (x-axis) vs. the velocity (y-axis) for M31, M33 (blue squares), high velocity clouds (red circles), and the new clouds detected with the GBT (black plus signs). The new clouds occupy a distinct region in position-velocity space compared to other Local Group objects.

Several factors make these clouds look distinctly different from the high velocity clouds (HVCs) that surround M31 and M33. First, they are much further away from either galaxy than any of the HVCs: the plot of position vs. velocity (Figure 4) shows the HVCs clustered around their host galaxies along the position axis, while the gas clouds occupy a space distinctly their own. The velocities of the clouds also distinguish them from HVCs. The new clouds have velocities similar to M31 and M33, while the HVCs tend to have a much larger spread in velocity.

The authors identify several possible origins of the clouds between M31 and M33.

1) The clouds are primordial, gas-rich objects, like dwarf galaxies.
• We already discussed several reasons above why the clouds don't look like dwarf galaxies or HVCs. This possibility also does not explain why the clouds seem to lie along a connecting line.

2) The gas has accreted onto local overdensities of dark matter.
• If the gas came from a tidal interaction between the two galaxies, its velocity would be too high to be accreted.

3) The clouds could be tidal dwarf galaxies – a type of irregular dwarf galaxy that forms in tidal tails after a tidal galaxy interaction.
• This would explain the position and velocity of the clouds, making some of our previous arguments against them being dwarf galaxies invalid. However, it does not explain the lack of stars, or why the clouds have low internal velocity dispersions compared to other tidal dwarf galaxies.

4) The clouds are transient objects that condensed from an intergalactic filament.
• This explains the location of the clouds and the lack of stars in the region. This scenario has been shown to be possible in simulations by Fernandez et al. (2012).

The authors prefer this last scenario. If there is a galactic filament connecting the two galaxies, it can funnel gas into the galaxies and fuel star formation for a few more billion years.

### Peter Coles - In the Dark

How to make a knotted vortex ring

Not long ago I posted a short item about the physics of vortex rings.
More recently I stumbled across this video that shows how University of Chicago physicists have succeeded in creating a vortex knot—a feat akin to tying a smoke ring into a knot. Linked and knotted vortex loops have existed in theory for more than a century, but creating them in the laboratory had previously eluded scientists. I stole that bit shamelessly from the blurb on YouTube, by the way. I'm not sure whether knotting a vortex tube has any practical applications, but then I don't really care much about that because it's fun!

### Matt Strassler - Of Particular Significance

Not As Painless As They’d Have You Believe

I’m still seeing articles in the news media (here’s one) that say that the majority of Americans think the recent sequester in the US federal budget isn’t affecting them. These articles implicitly suggest that maybe the sequester’s across-the-board cuts aren’t really doing any serious damage.

Well, talk to scientists, and to research universities and government laboratories, if you want to hear about damage. I haven’t yet got the stomach to write about the gut-wrenching destruction I’m hearing about across my own field of particle physics — essential grants being cut by a quarter, a third, or altogether; researchers being forced to lay off long-standing scientific staff whose expertise, of international importance, is irreplaceable; the very best postdoctoral researchers considering leaving the field because hard-hit universities across the country won’t be hiring many faculty anytime soon… There’s so much happening simultaneously that I’m not sure how I can get my head around it all, much less convey it to you.

But meanwhile, I would like to point you to a strong and strongly-worded article by Eric Klemetti, a well-known blogger and professor who writes at WIRED about volcanoes. Please read what he wrote, and consider passing it on to those you know. Everyone needs to understand that the damage that’s being done now across the U.S. scientific landscape, following a period of neglect that extends back many years before the recession, will last a generation or more if it’s not addressed.

These deep, broad and sudden cuts are a short-sighted way of saving money. Not only do they waste a lot of money already spent; the long-term cost of the permanent loss of expertise, and of future science and technology, is likely to exceed what we’ll save. It’s not a good approach to reducing a budget. So tell your representatives in Congress, and anyone who will listen: scientific research isn’t excess fat to be chopped off crudely with a cleaver; it’s fuel for the nation’s future, and it needs wiser management than it’s receiving.

Filed under: Science and Modern Society Tagged: press, ScienceAndSociety

### Tommaso Dorigo - Scientificblogging

The Plot Of The Week - Pick Your Favourite μ

Supersymmetry, the extension of the Standard Model of particle physics that was once sold as an almost certain discovery that the LHC experiments would bump into upon starting to collect proton-proton collisions, is not in a very healthy situation these days.

read more

### arXiv blog

First Quantum Memory That Records The Shape of a Single Photon Unveiled in China

The world’s first quantum memory that stores the shape and structure of single photons has been built in a Chinese lab.

### Peter Coles - In the Dark

Lines Composed upon the Relegation of Wigan Athletic from the Premiership

So farewell, then,
Wigan Athletic.
You weren’t Athletic
enough,
Apparently.
Keith’s mum says
Wigan is not
In the Midlands.
But she’s wrong.
Obviously.

by Peter Coles (aged nearly 50).

### Lubos Motl - string vacua and pheno

Richard Dawid: String Theory and the Scientific Method

Richard Dawid is a philosopher of science who was trained as a high-energy theoretical physicist, and his new book, which you may pre-order – it will be released at the end of June – isn't another addition to the rants by endless rows of populist crackpots, jerks, and imbeciles who try to criticize string theory without a glimpse of a rational justification (those extraordinarily stupid and dishonest books peaked about 7 years ago). Instead, it is a philosopher's attempt to identify and localize, name, summarize, articulate, and present the reasons why string theory could have become the definition of the status quo in state-of-the-art theoretical physics, despite the fact that the most natural conditions under which string theory has something "new and direct" to say seem to lie far from the currently doable experiments. For this reason and others, the book was endorsed by big shots such as John Schwarz and David Gross.

The expensive yet short, 210-page book is divided into 7 chapters. The first one is an extended introduction to string theory (technical; sociology of non-experts talking about string theory; three contextual arguments in favor of ST); the second one is on the general conceptual framework of physical theories; the next one is on underdetermination applied to string theory (including some Bayesian reasoning); then dynamics in high-energy physics; underdetermination in physics and beyond; whether or not one may claim that ST is a final theory; and changes proposed for "scientific realism".

In the segments about sociology, the author describes both the near-certainty of the practitioners about string theory's validity and the cynicism of many of the non-experts. The more stupid and ignorant you are, the more cynical about string theory – the unifying pillar of 21st century physics – you may become. The cynics appear in various adjacent, next-to-adjacent, and unrelated disciplines – despite the fact that, as Dawid points out, string theory has helped to transform the way people think and talk about pretty much all of theoretical physics and all of high-energy physics.

The three reasons behind the near-certainty about the theory's validity are:

• the non-existence of alternatives
• the surprising emergence of coherent explanations within string theory
• extrapolation of the previous successes in high-energy physics: the Standard Model was also conceived for largely theoretical reasons, had no alternatives, led to a nontrivial, surprisingly consistent unification of our descriptions of many things, and therefore had to be right

Concerning the first argument, it is the actual explanation of why the top bright theoretical physicists focus such a high percentage of their intellectual skills on string theory. They simply divide their mental powers among all promising ideas, with the weight given by the degree to which they are promising. Because one may approximately say that there aren't any other promising "big ideas" outside string theory, people can't work on them.

It is easy to misunderstand – and deliberately obfuscate – these facts. There exist "trademarks" that are marketed as competitors of string theory. But nothing really works over there. There exist no signs that these theories are on the right track.
The people associated with these directions know that, but some of them try to mislead the laymen about these facts. Some of them simply want to help themselves personally; others may be less egotistical but want a "greater diversity of ideas" than what the available evidence suggests as the right degree of diversity. And yet another group is just incompetent.

Concerning the second argument, it is a theoretical argument but a very powerful one. If string theory were a wrong theory of Nature, one would have no explanation why it has taught us about so many mechanisms that unify previously different concepts in physics and that retain their complete consistency, despite all kinds of diseases that would have surely killed a generic wrong theory many times over. The deep association between string theory and the laws that everything in the Universe obeys seems to be the only explanation of this coherence and unifying power, the ability to produce unexpected links, relationships, and transitions while avoiding any inconsistency.

Of course, one could argue that string theory is this coherent, powerful, and "willing to teach us" for a different reason: it could be just a coherent mathematical structure that doesn't form the skeleton of the foundations of physics. If you wish, it could be the Devil who is constantly tempting us rather than God. But such an alternative theory would apparently predict that there would already be a demonstrable incompatibility between the highly constraining principles of string theory and some of the numerous (understatement!) insights we have already learned about the physical Universe. There aren't any inconsistencies of this kind, either: at least as a first sketch, string theory agrees with all the general features (types of fields and interactions etc.) we know from particle physics and cosmology. There's a lot of evidence that string theory is both very deep and very physical.

The last argument is probably a good way to describe the actual reason why I disagree with the suggestions elsewhere in the book that one needs to redefine the scientific method or do similar things. It seems obvious to me that the reasons that make string theorists near-certain that string theory is the right description of Nature have been used by physicists for at least 50 years and, in some respects, much longer than that. Around 1974, string theory was identified as a candidate theory of quantum gravity – the only consistent one in $d=4$ or higher so far. This already implies that the characteristic effects in which it shows its muscles in their full glory can't be directly measured in experiments (already Max Planck was able to calculate that the Planck length was $10^{-35}$ meters or so). I knew this was almost certainly the case when I was 10 years old or so. This inaccessibility to direct experiments is a defining feature of any theory of quantum gravity. Despite this knowledge, I wasn't repelled by string theory. If we can't "touch" something, it doesn't mean that we can't scientifically study it. Atoms became a part of science well before people "saw" them (because of the mixing ratios in chemistry and many other reasons). Physics of the 20th century brought us many more examples like that. Physics is really working like that most of the time today! When I was 10, I didn't know that almost 30 years later, a new kind of Inquisition would hysterically try to prevent people from applying the scientific method to energy scales that can't be directly tested.
String theory is really using the same kind of thinking about possible deeper levels of explanation that was employed – and turned out to be successful – in the advances associated with quantum field theory. Any criticism of these argumentative patterns seems totally unjustifiable to me: it's really the only way to think about these matters scientifically. The only plausible alternative is not to think about unification in physics and the fundamental scale at all. I just think that mankind would become a horde of uncultured barbarian apes if it decided it doesn't want to think about these issues – if it wanted to prevent a fraction of its intellectual resources from thinking about these fundamental questions.

Some people love revolutions and permanent revolutions. Am I among them? It depends on what you mean by a revolution. I surely oppose any attempt to replace rational arguments in science by irrational ones (e.g. ad hominem ones or slogans that have nothing to do with the actual technical research); or to "ban" any kind of argument that is obviously rational. Every solid enough argument and line of reasoning or inference, however indirect, should be used when we are forming our opinions about scientific questions. When this is done correctly in the case of fundamental physics, we reach a near-certainty that string theory is a valid (and probably the final) theory of Nature. This is possible despite the experimental inaccessibility of the Planck scale because direct experiments are very far from being the only tool by which we have been learning the truth about physics in the 20th and 21st centuries.

If you prefer the cheaper books by crackpots, you may buy the books by Faggott (early August) and Unzicker (late July) instead.

### ZapperZ - Physics and Physicists

The Future Of Fermilab

A video of a briefing to the community on the future of Fermilab. Here is the synopsis accompanying the video:

On Thursday, May 9, 2013, Fermilab invited elected officials and leaders from local communities to hear Director Pier Oddone lay out his vision of the laboratory's future. The presentation was held in Wilson Hall, and included both short-term (NOvA, Muon g-2) and long-term (LBNE, Project X) experiments, as well as an overall look at the direction of the laboratory's impact on Chicagoland. For further information on these projects see www.fnal.gov, http://www-nova.fnal.gov, http://muon-g2.fnal.gov, darkenergysurvey.org, http://lbne.fnal.gov, http://projectx.fnal.gov

It is interesting that Pier Oddone is presenting HIS vision of the lab's future, considering that he is leaving Fermilab! :) Still, with the dismal funding of high energy physics in the US, the future of Fermilab is really uncertain at this point. Many of the long-term projects being presented do not have a certain funding picture yet.

Zz.

## May 14, 2013

### The n-Category Cafe

Bounded Gaps Between Primes

Guest post by Emily Riehl

Whether we grow up to become category theorists or applied mathematicians, one thing that I suspect unites us all is that we were once enchanted by prime numbers. It comes as no surprise then that a seminar given yesterday afternoon at Harvard by Yitang Zhang of the University of New Hampshire, reporting on his new paper “Bounded gaps between primes”, attracted a diverse audience. I don’t believe the paper is publicly available yet, but word on the street is that the referees at the Annals say it all checks out. What follows is a summary of his presentation.
Any errors should be ascribed to the ignorance of the transcriber (a category theorist, not an analytic number theorist) rather than to the author or his talk, which was lovely.

### Prime gaps

Let us write $p_1, p_2, \dots$ for the primes in increasing order. We know of course that this list is countably infinite. A prime gap is an integer $p_{n+1} - p_n$. The Prime Number Theorem tells us that $p_{n+1} - p_n$ is approximately $\log(p_n)$ on average as $n$ approaches infinity. The twin primes conjecture, on the other hand, asserts that

$\liminf_{n\to\infty} (p_{n+1} - p_n) = 2,$

i.e., that there are infinitely many pairs of twin primes, for which the prime gap is just two. A generalization, attributed to Alphonse de Polignac, states that for any positive even integer, there are infinitely many prime gaps of that size. This conjecture has been neither proven nor disproven in any case. These conjectures are related to the Hardy-Littlewood conjecture about the distribution of prime constellations.

### The strategy

The basic question is whether there exists some constant $C$ so that $p_{n+1} - p_n < C$ infinitely often. Now, for the first time, we know that the answer is yes…when $C = 7 \times 10^7$.

Here is the basic proof strategy, supposedly familiar in analytic number theory. A subset $H = \{h_1, \dots, h_k\}$ of distinct natural numbers is admissible if for all primes $p$ the number of distinct residue classes modulo $p$ occupied by these numbers is less than $p$. (For instance, taking $p = 2$, we see that the gaps between the $h_j$ must all be even.) If this condition were not satisfied, then it would not be possible for each element in a collection $\{n + h_1, \dots, n + h_k\}$ to be prime. Conversely, the Hardy-Littlewood conjecture contains the statement that for every admissible $H$, there are infinitely many $n$ so that every element of the set $\{n + h_1, \dots, n + h_k\}$ is prime.

Let $\theta(n)$ denote the function that is $\log(n)$ when $n$ is prime and 0 otherwise. Fixing a large integer $x$, let us write $n \sim x$ to mean $x \le n < 2x$. Suppose we have a positive real-valued function $f$—to be specified later—and consider two sums:

$S_1 = \sum_{n \sim x} f(n)$

$S_2 = \sum_{n \sim x} \left( \sum_{j=1}^{k} \theta(n + h_j) \right) f(n)$

Then if $S_2 > (\log 3x)\, S_1$ for some function $f$, it follows that $\sum_{j=1}^{k} \theta(n + h_j) > \log 3x$ for some $n \sim x$ (for any $x$ sufficiently large). Since each term satisfies $\theta(n + h_j) < \log 3x$ (as $n + h_j < 3x$ for large $x$), at least two terms in this sum must be non-zero, i.e., there are two indices $i$ and $j$ so that $n + h_i$ and $n + h_j$ are both prime. In this way we can identify bounded prime gaps.

### Some details

The trick is to find an appropriate function $f$. Previous work of Daniel Goldston, János Pintz, and Cem Yildirim suggests defining $f(n) = \lambda(n)^2$, where

$\lambda(n) = \sum_{d \mid P(n),\ d < D} \mu(d) \left( \log \tfrac{D}{d} \right)^{k+\ell},$

with $P(n) = (n + h_1) \cdots (n + h_k)$, $\mu$ the Möbius function, $\ell > 0$, and $D$ a power of $x$. Now think of the sum $S_2 - (\log 3x)\, S_1$ as a main term plus an error term. Taking $D = x^{\vartheta}$ with $\vartheta < \frac{1}{4}$, the main term is negative, which won’t do. When $\vartheta = \frac{1}{4} + \omega$ the main term is okay, but the question remains how to bound the error term.
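To make the admissibility condition concrete, here is a minimal Python sketch (my own illustration, not code from the talk or the paper; the helper names `primes_up_to` and `is_admissible` are invented for this example). It uses the fact that for a $k$-element set only primes $p \le k$ can have every residue class occupied, since $k$ numbers hit at most $k$ classes.

```python
def primes_up_to(n):
    """Return all primes <= n via a simple sieve of Eratosthenes."""
    if n < 2:
        return []
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            for multiple in range(p * p, n + 1, p):
                sieve[multiple] = False
    return [p for p, is_prime in enumerate(sieve) if is_prime]


def is_admissible(H):
    """H = {h_1, ..., h_k} is admissible if, for every prime p, the h_j
    leave at least one residue class mod p unoccupied.  Only primes
    p <= k can possibly be fully occupied by k numbers, so testing
    those suffices."""
    k = len(H)
    for p in primes_up_to(k):
        if len({h % p for h in H}) == p:  # every residue class mod p is hit
            return False
    return True


print(is_admissible([0, 2]))     # True:  the twin-prime pattern {n, n+2}
print(is_admissible([0, 1]))     # False: n and n+1 occupy both classes mod 2
print(is_admissible([0, 2, 6]))  # True:  an admissible triple
print(is_admissible([0, 2, 4]))  # False: occupies every residue class mod 3
```

The examples reproduce the parenthetical remark above: taking $p = 2$ forces all gaps in an admissible set to be even, and $\{0, 2, 4\}$ fails because it occupies every residue class modulo 3.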
### Zhang’s work

Zhang’s idea is related to work of Enrico Bombieri, John Friedlander, and Henryk Iwaniec. Let $\vartheta = \frac{1}{4} + \omega$ where $\omega = \frac{1}{1168}$ (which is “small but bigger than $\epsilon$”). Then define $\lambda(n)$ using the same formula as before but with an additional condition on the index $d$, namely that $d$ divides the product of the primes less than $x^{\omega}$. In other words, we only sum over square-free $d$ with small prime factors.

The point is that when $d$ is not too small (say $d > x^{1/3}$) then $d$ has lots of factors. If $d = p_1 \cdots p_b$ and $R < d$, there is some $a$ so that $r = p_1 \cdots p_a < R$ and $p_1 \cdots p_{a+1} > R$. This gives a factorization $d = rq$ with $R/x^{\omega} < r < R$, which we can use to break the sum over $d$ into two sums (over $r$ and over $q$), which are then handled using techniques whose names I didn’t recognize.

### On the size of the bound

You might be wondering where the number 70 million comes from. This is related to the $k$ in the admissible set. (My notes say $k = 3.5 \times 10^6$ but maybe it should be $k = 3.5 \times 10^7$.) The point is that $k$ needs to be large enough so that the change brought about by the extra condition that $d$ is square-free with small prime factors is negligible. But Zhang believes that his techniques have not yet been optimized and that smaller bounds will soon be possible.

### Marco Frasca - The Gauge Connection

Bad practices

Today, I made a serious mistake: I sent a rejected paper back to the same journal. This is the kind of journal that has several Editors who can handle papers, so one could improperly think that a rejected paper sent to a different Editor could in the end go through. The Editor who received my paper did not even consider that it might have been an error and reported it as bad practice, alerting the Editor-in-Chief of the journal. I have never engaged in this practice. The reason is that I currently have about 70 papers published in peer-reviewed journals, and so I have the greatest respect for the work of the people who made this achievement of mine possible. Worse, I have written more than one hundred papers, a part of which remains unpublished for one reason or another, and I generally have difficulty keeping track of them all. Indeed, it is quite common practice to send a rejected paper to another journal. The paper I sent out was written about three years ago and I had forgotten about it. These days, I am revisiting my computations on scalar field theory, both classical and quantum, and turned back to this article. Wrongly, I thought I had not sent it to this journal before, and that was that. The American Physical Society obviated this problem by producing a database, available to authors, with their full submission history. In other cases this is practically impossible to trace, and when the number of papers is overwhelming, an error can occur. So, my apologies for this, offered publicly.

Filed under: Mathematical Physics, Scientific Publishing

### Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

A day in the life

There is a day-in-the-life profile of me in today’s Irish Times, Ireland’s newspaper of record. I’m very pleased with it; I like the title – Labs, lectures and luring young people into science – and the accompanying photo, in which it looks like I’m about to burst into song!
This is a weekly series where an academic describes their working week, so I give a day-to-day description of the challenge of balancing teaching and research at my college, Waterford Institute of Technology.

Is this guy about to start singing?

There is quite a lot of discussion in Ireland at the moment concerning the role of institutes of technology vs that of universities. I quite like the two-tier system – the institutes function like polytechnics and tend to be smaller and offer more practical programmes than the universities. However, WIT is something of an anomaly; because it is the only third-level college in a largeish city and surrounding area, it has been functioning rather like a university for many years (i.e. it has a very broad range of programmes, quite high entry points and is reasonably research-active). The college is currently being considered for technological university status, but many others oppose the idea of an upgrade – there are fears of a domino effect amongst the other institutes, giving Ireland far too many universities. It’s hard to know the best solution, but I’m not complaining – I like the broad teaching portfolio of the IoTs, and there is a lot to be said for a college where you do research if you want to, not because you have to!

Update: I had originally said that the institutes cater for a ‘slightly lower level of student’. Oops! This is simply not true in the case of WIT, given the entry points for many of the courses I teach; apologies Jamie and Susie. Again, I think the points are a reflection of the fact that WIT has been functioning rather like a university simply because of where it is.

### Symmetrybreaking - Fermilab/SLAC

The cherry pie collider

What’s the next step in particle colliders? Symmetry takes a trip into the kitchen pantry to find out.

Already celebrated for bringing the world news of the Higgs boson, the Large Hadron Collider is only beginning its long journey of discoveries. Yet scientists are already planning the next big machine, the International Linear Collider, to study the LHC’s discoveries in more detail. So what’s the difference between the LHC and the proposed ILC? Why do we need both?

### ZapperZ - Physics and Physicists

“Einstein’s Planet” Discovered

There’s nothing special about this planet, other than the claim that it was discovered using Einstein’s relativity concepts. However, there appears to be a slight error in this news report:

The researchers capitalized on subtle effects predicted by Albert Einstein’s special theory of relativity to find the planet. The first is called the “beaming” effect, and occurs when light from the parent star brightens as its planet tugs it a nudge closer to Earth, and dims as the planet pulls it away. Relativistic effects cause light particles, called photons, to pile up and become focused in the direction of the star’s motion. “This is the first time that this aspect of Einstein’s theory of relativity has been used to discover a planet,” research team member Tsevi Mazeh of Tel Aviv University in Israel said in a statement. Additionally, gravitational tides from the orbiting planet caused its star to stretch slightly into a football shape, causing it to appear brighter when its wider side faces us, revealing more surface area. Finally, the planet itself reflects a small amount of starlight, which also contributed to its discovery.

Not to be nit-picky (well, I guess I am!), but this sounds like it is relevant to the GENERAL theory of relativity, rather than just the special theory of relativity.
I guess I will have to wait for the paper to appear (unless the preprint is floating around already) to confirm this.

Zz.

### Peter Coles - In the Dark

Hall and Knight (or `z + b + x = y + b + z’)

This poem will be a bit of a puzzle to younger readers, so I’ll just explain that the Messrs Hall & Knight mentioned in the poem were the authors of a famous algebra textbook, “Elementary Algebra for Schools”, which first went into publication in the 19th century (1885, I think) and is still in print over a century later. It’s a classic book, fully meriting a celebration in verse, even if it’s a bit tongue-in-cheek!

When he was young his cousins used to say of Mr Knight:
‘This boy will write an algebra – or looks as if he might.’
And sure enough, when Mr Knight had grown to be a man,
He purchased pen and paper and an inkpot, and began.
But he very soon discovered that he couldn’t write at all,
And his heart was filled with yearnings for a certain Mr Hall;
Till, after many years of doubt, he sent his friend a card:
‘Have tried to write an Algebra, but find it very hard.’

Now Mr Hall himself had tried to write a book for schools,
But suffered from a handicap: he didn’t know the rules.
So when he heard from Mr Knight and understood his gist,
He answered him by telegram: ‘Delighted to assist.’
So Mr Hall and Mr Knight they took a house together,
And they worked away at algebra in any kind of weather,
Determined not to give up until they had evolved
A problem so constructed that it never could be solved.

‘How hard it is’, said Mr Knight, ‘to hide the fact from youth
That x and y are equal: it is such an obvious truth!’
‘It is’, said Mr Hall, ‘but if we gave a b to each,
We’d put the problem well beyond our little victims’ reach.
‘Or are you anxious, Mr Knight, lest any boy should see
The utter superfluity of this repeated b?’
‘I scarcely fear it’, he replied, and scratched his grizzled head,
‘But perhaps it would be safer if to b we added z.’

‘A brilliant stroke!’, said Hall, and added z to either side;
Then looked at his accomplice with a flush of happy pride.
And Knight, he winked at Hall (a very pardonable lapse),
And they printed off the Algebra and sold it to the chaps.

by E. V. Rieu (1887-1972)

### arXiv blog

Game Theory and the Treatment of Cancer

Thinking about cancer as an ecosystem is giving biologists access to a new armoury of mathematical tools for tackling it, such as evolutionary game theory.

## May 13, 2013

### Clifford V. Johnson - Asymptotia

Final

Well, I’ve got to say goodbye to another excellent group of students from my undergraduate electromagnetism class. We had the final today (starting at 8:00am - ack!), and given the lack of rioting, tears, and throwing of rotten fruit during the exam itself, I assume that it was not too bad an exam to sit. Of course, the real measure of what they thought will be how they did in the actual answering of questions, and I’ve not looked to see how that has turned out yet. Again, I feel a bit sad since it was a good group of students and it was fun to teach them this material. While it is certainly good to move on to other things (I’ve too many projects I want to work on, as usual), I will miss the twice-weekly classes with them. Highlights this year include (in no particular order): (1) The thing I love to do when we are studying dipole radiation - taking the class outside (surprising them somewhat) to look up at the blue sky and connect why it is blue to the computation we just did, including understanding the pattern of the blueness [...]
### Sean Carroll - Preposterous Universe

Templeton Redux

Not much more to say about the Templeton Foundation, but in the interest of open discussion it seems fair to point to a couple of alternative viewpoints. My original post was republished at Slate, where there are over 3300 comments thus far, so apparently people like to talk about this stuff?

For a more pro-Templeton point of view, here’s Jason Wright, explaining why he didn’t think it was wrong to take money from JTF. While he is a self-described atheist, he thinks that “questions like the ultimate origin of the Universe and Natural Law may be beyond scientific inquiry,” and is correspondingly in favor of dialogue between science and religion. To be as clear as possible, I have no objections at all to dialogue between scientists and religious believers, having participated in such and planning on continuing to do so. I just want to eliminate any possibility that my own contribution to such a dialogue will favor any position other than “religion is incorrect.” (Obviously that depends on one’s definition of “religion,” so if you want to indulge in a boring discussion of what the proper definition should be — be my guest.)

From an anti-Templeton perspective, here’s Jerry Coyne, who doesn’t accept that it’s okay to draw a line between JTF itself and distinct organizations that take money from them. (Jerry’s post is perfectly reasonable, even if I disagree with it — but a short trip down to the comment section will give you a peek into the minds of the more fervently committed.) That’s fine — I admit from the start that this is a complicated issue, and people will draw the line in different places. But let’s admit that it is a complicated issue, and not pretend that there are any straightforward and easy answers.

One thing that seems to bother some people is that I agreed to be on the Board of Advisors for Nautilus, a new science magazine that takes funding from Templeton. It’s instructive to have a look at the Board of Advisors for the World Science Festival, another organization that takes funding from Templeton. It’s a long and distinguished list, and here are some of the names included: Richard Dawkins, Daniel Dennett, Lawrence Krauss, Steven Pinker, Steven Weinberg. Are these folks insufficiently sincere in their atheistic worldview? Alternatively, would the world be a better place if they all resigned? I would argue not, for the simple reason that the WSF does enormous good for the world, and is an organization well worth supporting, even if I don’t agree with all of their decisions.

Refusing to have anything to do with an organization that takes money from a foundation we don’t like is easier said than done. What about, say, the University of Chicago? Here they’re taking $3.7 million from Templeton for something called Expanding Spiritual Knowledge Through Science: Chicago Multidisciplinary Research Network. And here’s $5.6 million from Templeton for a program labeled New Frontiers in Astronomy and Cosmology, celebrating “a unique opportunity to honor the extraordinary vision of Sir John Templeton.” And here’s $2.2 million for a program on Understanding Human Nature to Harness Human Potential. Not to mention that the UofC has quite a prominent Divinity School (home of the best coffee shop on campus) and Seminary. (They also denied me tenure, which doubtless set the cause of reason and rationality back centuries.)

There’s no question that the University of Chicago has done much more to promote the cause of religion in the world than Nautilus has — which has been, to date, precisely nothing. One could say, with some justification, that some parts of the UofC have promoted religion, while other parts have not, and it’s okay to be involved with those other parts. But we begin to see how fuzzy the line is. Big grants like those above generally put a fraction of their funds toward “overhead,” which goes into general upkeep of the institution as a whole. Can we really be sure that, as we walk across the lawn, the groundskeeping was not partially paid for by the pernicious Templeton Foundation?

But that doesn’t mean that self-respecting atheists employed by the UofC should instantly resign. I’m sure you could play the same game with most big universities. The world would not be improved by having thousands of atheist professors abandon their posts out of principle.

It’s much more sensible to be a consequentialist rather than a deontologist when it comes to these ethical questions. I’m not going to stay away from Nautilus, or the World Science Festival, or the Foundational Questions Institute, out of some fruit-of-the-poisonous-tree doctrine according to which they have become forever tainted by accepting money from Templeton. Rather, I’m going to try to judge whether these organizations provide a net good for the world; I will complain when I think they are making a mistake; and if I think they’ve gone too far in a direction I don’t personally like, I will disengage. That’s the best I think I can do, according to my own conscience. Others will doubtless feel differently.

### Matt Strassler - Of Particular Significance

Opening of LHCP Conference

Greetings from Barcelona, where the LHCP 2013 conference is underway. I wanted to mention a couple of the opening remarks made by CERN’s Sergio Bertolucci and Mirko Pojer, both of whom spoke about the near-term and medium-term future of the Large Hadron Collider [LHC].

It’s worth taking a moment to review what happened in the LHC’s first run. The LHC was originally intended to run, during its first few years, at around 14 TeV of energy in each proton-proton collision, and at a moderate collision rate. But shortly after beams were turned on, and before there were any collisions, there occurred the famous accident of September 19, 2008. The ensuing investigation of the cause revealed flaws in the connections between the superconducting magnets, as well as in the system that protects the machine against the effect of a magnet losing its superconductivity (called a “quench”; quenches are expected to happen occasionally, but they have to be controlled). To keep the machine safe from further problems, it was decided to run the machine at 7 TeV per collision, and to make up (in part) for the lower energy by running at a higher collision rate. Then:

• Late 2009: beams were restarted, with collisions reaching 2.36 TeV.
• 2010: a small number of collisions and a few new experimental results were obtained at 7 TeV per collision
• 2011: a large number of collisions (corresponding to nearly 100,000 Higgs particles per experiment [i.e. in ATLAS and CMS]) were obtained at 7 TeV per collision
• 2012: an even larger number of collisions (corresponding to over 400,000 Higgs particles per experiment) were obtained at 8 TeV per collision.

All in all, this “Run 1” of the LHC is widely viewed as enormously successful. For one thing, it showed that (excepting only the flawed but fixable magnet connections) the LHC is an excellent machine and works beautifully.  A high collision rate was indeed achieved, and this, combined with the quality of the experimental detectors and the cleverness of the experimental physicists, was sufficient for discovery of and initial study of what is now referred to as a “Standard Model-like Higgs particle”, as well as for ruling out a wide range of variants of certain speculative ideas [here are a couple of examples.]
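As a rough cross-check of the Higgs counts quoted in the list above, the expected number of Higgs bosons produced is just the production cross section multiplied by the integrated luminosity. The sketch below is only an order-of-magnitude illustration, not something from this post: the cross sections (roughly 17 pb at 7 TeV, 22 pb at 8 TeV) and the recorded luminosities (roughly 5 fb⁻¹ in 2011, 20 fb⁻¹ in 2012) are my own ballpark assumptions.

```python
# Expected Higgs yield per experiment: N = sigma * integrated luminosity.
# The inputs below are rough, assumed ballpark values, not figures from the post:
# total Higgs production cross section ~17 pb at 7 TeV and ~22 pb at 8 TeV;
# recorded luminosity ~5 fb^-1 in 2011 and ~20 fb^-1 in 2012.

PB_INV_PER_FB_INV = 1000.0  # 1 fb^-1 corresponds to 1000 pb^-1


def expected_higgs(sigma_pb, lumi_fb_inv):
    """Number of Higgs bosons produced, for a cross section in pb
    and an integrated luminosity in fb^-1."""
    return sigma_pb * lumi_fb_inv * PB_INV_PER_FB_INV


print(f"2011 (7 TeV): ~{expected_higgs(17.0, 5.0):,.0f} Higgs bosons")   # ~85,000
print(f"2012 (8 TeV): ~{expected_higgs(22.0, 20.0):,.0f} Higgs bosons")  # ~440,000
```

With these assumed inputs one lands near 85,000 produced Higgs bosons for 2011 and about 440,000 for 2012, consistent with the “nearly 100,000” and “over 400,000” figures in the list above.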

Currently, the LHC is shut down for repairs and upgrades, in preparation for Run 2, which will begin in 2015. The machine has been warmed up to room temperature (normally its magnets have to be kept at 1.9 Kelvin, i.e. 1.9 degrees Celsius above absolute zero), and, among many adjustments, all of those potentially problematic connections between magnets are being improved, to make it safer for the machine to run at higher energy per collision.

So here’s the update — I hesitate to call this “news”, since none of this is very surprising to those who’ve been following events in detail. The plan, according to Bertolucci and to Pojer, includes the following:

• When Run 2 starts in 2015, the energy per collision will probably be 13 TeV, with the possibility of increasing this toward the design energy of 14 TeV later in Run 2. This was more or less expected, given what was learned about the LHC’s superconducting magnets a few years ago: some of these crucial magnets may quench too often when operating under 14 TeV conditions, making the accelerator too inefficient at that energy.
• A big question that is still not decided (and may not be decided until direct experience is gained in 2015) is whether it is better to run with collisions every 50 nanoseconds [billionths of a second], as in 2011-2012, or every 25 nanoseconds, as was the original design for the LHC.  The latter is better for the operation of the experimental detectors and the analysis of the data, but poses more challenges for operating the LHC, and may cause the proton beams to be less stable. Studies on this question may be ongoing  throughout a good part of 2015.
• Run 2 is currently planned for 2015-2017, but as Pojer reminded us, 2015 will involve starting up the machine at a new energy and collision rate, and so a lot of time in 2015 will be spent on making the machine work properly and efficiently. Somewhat as in 2010, which was a year of pilot running before large amounts of data were obtained in 2011-2012, it is likely that 2015 will also be a year of relatively low data rate. Most of the data in the next run will appear in 2016-2017.  The bottom line is that although there will be new data in 2015, one should remember not to expect overly much news in that first year.

Of course the precise dates and plans may shift.  Life being what it is, it would not be surprising if some of the challenges are a bit worse than expected; this could delay the start of Run 2 by a few months, or require a slightly lower energy at the start. Nor would it be surprising if Run 2 extends into 2018.  But if Run 1 (and the experience at other accelerators) is any guide, then even though some things won’t go as well as hoped, others will go better than expected.

Filed under: Higgs, LHC News Tagged: atlas, cms, Higgs, LHC

### Quantum Diaries

A little chronicle of a teacher at CERN (III)

On the occasion of the opening of the 2013 call for applications from “Sciences à l’Ecole” to host French teachers at CERN for a week, we are publishing over the coming days the humour-filled daily journal of Jocelyn Etienne, who followed this programme last year, in November.

Cloud chamber: the particle hunt begins!
Tuesday, 6 November 2012

Today, we build a cloud chamber, just as the Sun finally decides to show itself! It was the Scotsman Wilson who invented the technique in 1911 (before receiving the Nobel Prize in 1927) in order to detect the trajectories of particles. For us, some dry ice, a little isopropanol and a bit of tinkering, and we can see muons produced by cosmic rays leaving a trace of their passage. Wow! (Video view of a muon in the cloud chamber)

We may be in one of the largest fundamental research centres in the world, but nothing beats a blackboard and a piece of chalk (the latter apparently hard to find around here).

Today’s lectures:

David Rousseau (IN2P3 / LAL-Orsay) confirms the almost-maybe-certain discovery of the Higgs boson; in any case, if it isn’t the Higgs, it is still something. He works on the ATLAS detector, so he must know what he is talking about. There are detectors on the LHC, such as ATLAS and CMS, each one a monster of technology and expertise, and both independently confirm the detection of the Higgs (that’s how one says it).

Julien Lesgourgues (Ecole Polytechnique Fédérale de Lausanne) talks to us about the curvature of space, which is in fact flat, unless it’s the other way around, but I arrive a quarter of an hour late…

Sylvie Rosier-Lees of CNRS/IN2P3 at the Annecy laboratory works on the AMS space detector (the Alpha Magnetic Spectrometer, ed.), attached to the ISS. AMS deals with cosmic particles, and some of them come from very far away! (here: the latest AMS news, ed.)

On the right, this person seemed to be coding a program for graphical data processing, but he kept switching over to his facebook account… tsk tsk tsk… For the connoisseurs, his laptop runs Xubuntu.

Finally, Corinne Berat of CNRS/IN2P3 at the Grenoble laboratory keeps her feet more firmly on the ground. Her toy is in Argentina and detects cosmic rays (again) that reach the ground after splashing the atmosphere with a multitude of particles (showers…). The Pierre Auger Observatory covers something like 3000 km² and feasts on high-energy particles that come perhaps from collisions of galaxies or from supernovae.

After the evening meal, I go to a talk that is part of “The 4th International Conference on Particle and Fundamental Physics in Space”. Today it is William H. Gerstenmaier of NASA, who presents, in English, the research carried out on the ISS. The final video (a film compiling the most beautiful views of the Earth taken from the station) is absolutely sublime.

To be continued…

Jocelyn Etienne teaches at the Lycée Feuillade in the town of Lunel.

To apply for the next session of the placement at CERN, it’s this way.

### arXiv blog

The Algorithm That Automatically Detects Polyps in Images from Camera Pills

Analyzing the footage from camera pills is a time-consuming task for medical professionals. Now computer scientists are attempting to automate the process.

### Symmetrybreaking - Fermilab/SLAC

The top 40 physics hits of 2012

The Higgs boson is a popular subject among the most-cited physics papers of 2012, but a particle simulation manual takes the top spot.

Think of it as a particle physics version of pop radio's “top 40” countdown: INSPIRE, a database of particle-physics publications, has released its annual list of most-cited articles.

Topping the charts in 2012 are articles about the Higgs boson, which made up about 20 percent of the list.

## May 12, 2013

### Geraint Lewis - Cosmic Horizons

Nature doesn't care how smart you are
Random Monday Morning Thought:

Becoming a science professor sorta snuck up on me. Not the getting of the title, as that happened at a distinct point in time (namely the first of January 2009); rather, the 'separation' from being a student and then a postdoctoral researcher grows somewhat slowly. A colleague of mine recently expressed surprise when he discovered his students were somewhat daunted when speaking with him (this is partly due to the perennial fear of "looking stupid" that students have), and I'm pretty sure my fellow faculty member does not feel that different from the students he talks to.

The important point, I think, is that students should realise that you don't get smarter with age; in fact, it's probably the opposite. What you do gain is experience. When a professor speaks from authority, it is not necessarily that they are "smart", but they have gathered significant experience over the years. But it's important to realise that there is a limit to experience, and just because a particular professor makes a pronouncement, it doesn't necessarily mean it's correct. Over at Letters to Nature, Luke Barnes has a nice article on appealing to authority.

Anyway, I just wanted to add to this a marvellous quote

In high school, my two idols were Einstein and Feynman. While Einstein felt that QM must be wrong, Feynman felt it was the ultimate truth of the universe. This discrepancy bothered me, and I wasn't sure who to believe. So, about six weeks into Physics X, I screwed up my courage and asked Feynman about the "dice" and Einstein.  "Dr. Feynman", I asked, "Einstein was one of the greatest geniuses of physics, and certainly a lot smarter than me. He knew more physics than I ever hope to. But, he didn't believe in quantum mechanics--so why should I?"
Feynman paused -- which surprised all of us -- and smiled. He looked at me and said, in that wonderful Far Rockaway accent, "Nature doesn't care how smart you are. You can still be wrong." He went on to explain some background on Einstein's view of physics, and why he might feel that way.

(from here).

"Nature doesn't care how smart you are"; I think that's an important lesson that all of us should remember.

### Clifford V. Johnson - Asymptotia

Happy Mother’s Day
Here's a rose for Mother's Day (in the USA). It is from my garden, and I took the photo last week to make a card to send to my Mother and my Sister. Happy Mother's Day to all everywhere! -cvj (Look under "flowers" category for roses from past Mother's Days.)

### Jon Butterworth - Life and Physics

Is there any such thing as “nothing”?

That’s a question I got on twitter just now after the Feynman gig from @elainepixie.

I said (broken down into 140 character chunks):

One definition of “nothing” is “vacuum”, by which physicists mean “lowest energy state”. That exists. But in quantum mechanics it’s not really empty. It is permeated by quantum fields (e.g. the photon field) and it fluctuates. Particles pop in and out of brief existence, even in the lowest energy state of space-time. And in fact the field of the Higgs boson doesn’t even fluctuate around the value of zero, but around 246 GeV. So I guess really to have “nothing” means no physical laws, no time, no space. In that sense not sure what it means to say such a thing “exists”. You can speak about it, but surely it’s the opposite of “existence”?

Hmm. Anyone got a better answer?

Filed under: Particle Physics, Philosophy of Science, Physics, Science Tagged: Higgs

## May 11, 2013

### Tommaso Dorigo - Scientificblogging

CDF Memories, Circa 1992
In 1992 the top quark had not been discovered yet, and it did not make much sense for the CDF collaboration to have a full meeting devoted solely to it; rather, analyses targeting the search for the top quark were presented at a meeting which dealt with both bottom and top quarks. Back then this was called the "Heavy Flavour meeting".