Particle Physics Planet


May 23, 2015

John Baez - Azimuth

Network Theory in Turin

Here are the slides of the talk I’m giving on Monday to kick off the Categorical Foundations of Network Theory workshop in Turin:

Network theory.

This is a long talk, starting with the reasons I care about this subject, and working into the details of one particular project: the categorical foundations of networks as applied to electrical engineering and control theory. There are lots of links in blue; click on them for more details!


by John Baez at May 23, 2015 02:34 AM

May 22, 2015

Christian P. Robert - xi'an's og

a senseless taxi-ride

This morning, on my way to the airport (and to Montpellier for a seminar), Rock, my favourite taxi-driver, told me of a strange ride he endured the night before, so strange that he had not yet fully got over it! As it happened, he had picked up an elderly lady with two large bags in the vicinity after a radio-call and driven her to a sort of catholic hostel in down-town Paris, near La Santé jail, a pastoral place housing visiting nuns and priests. However, when they arrived there, she asked the taxi to wait before leaving, quite appropriately as she had apparently failed to book the place. She then asked my friend to take her to another specific address, a hotel located nearby at Denfert-Rochereau. While Rock was waiting with the meter running, the passenger literally checked in by visiting the hotel room and deciding she did not like it, so she gave my taxi yet another hotel address near Saint-Honoré, where she repeated the same process, namely visited the hotel room with the same outcome that she did not like the place. My friend was by then getting worried about the meaning of this processionary trip all over Paris, all the more because the lady did not have a particularly coherent discourse. And could not stop talking. The passenger then made him stop for food and drink, and, while getting back in the taxi, ordered him to drive her back to her starting place. After two and a half hours, they thus came back to where they had started, with a total bill of 113 euros. The lady then handed a 100 euro bill to the taxi-driver, declaring she did not have any further money and that he should have brought her home directly from the first place they had stopped… In my friend’s experience, this was the weirdest passenger he had ever carried, and he thought the true point of the ride was to escape solitude and loneliness for one evening, even if it meant chatting nonsense the whole time.


Filed under: pictures, Travel Tagged: Montpellier, Paris, story, taxi, taxi-driver

by xi'an at May 22, 2015 10:15 PM

Symmetrybreaking - Fermilab/SLAC

LHC restart timeline

Physics is just around the corner for the LHC. Follow this timeline through the most exciting moments of the past few months.

May 22, 2015 10:11 PM

Emily Lakdawalla - The Planetary Society Blog

Tons of fun with the latest Ceres image releases from Dawn
Fantastic new images of Ceres continue to spill out of the Dawn mission, and armchair scientists all over the world are zooming into them, exploring them, and trying to solve the puzzles that they contain.

May 22, 2015 09:59 PM

Tommaso Dorigo - Scientificblogging

Highest Energy Collisions ? Not In My Book
Yesterday I posed a question - Are the first collisions recorded by the LHC running at 13 TeV the highest-energy ever produced by mankind with subatomic particles ? It was a tricky one, as usual, meant to make you think about the matter.

I received several tentative answers in the comments thread, and answered them there. I paste the text here as it may be of interest to some of you and I hope it does not get overlooked.

---

Dear all, 

read more

by Tommaso Dorigo at May 22, 2015 07:57 PM

The n-Category Cafe

PROPs for Linear Systems

PROPs were developed in topology, along with operads, to describe spaces with lots of operations on them. But now some of us are using them to think about ‘signal-flow diagrams’ in control theory—an important branch of engineering. I talked about that here on the n-Café a while ago, but it’s time for an update.

Eric Drexler likes to say: engineering is dual to science, because science tries to understand what the world does, while engineering is about getting the world to do what you want. I think we need a slightly less ‘coercive’, more ‘cooperative’ approach to the world in order to develop ‘ecotechnology’, but it’s still a useful distinction.

For example, classical mechanics is the study of what things do when they follow Newton’s laws. Control theory is the study of what you can get them to do.

Say you have an upside-down pendulum on a cart. Classical mechanics says what it will do. But control theory says: if you watch the pendulum and use what you see to move the cart back and forth correctly, you can make sure the pendulum doesn’t fall over!

Control theorists do their work with the help of ‘signal-flow diagrams’. For example, here is the signal-flow diagram for an inverted pendulum on a cart:

When I take a look at a diagram like this, I say to myself: that’s a string diagram for a morphism in a monoidal category! And it’s true. Jason Erbele wrote a paper explaining this. Independently, Bonchi, Sobociński and Zanasi did some closely related work:

• John Baez and Jason Erbele, Categories in control.

• Filippo Bonchi, Paweł Sobociński and Fabio Zanasi, Interacting Hopf algebras.

• Filippo Bonchi, Paweł Sobociński and Fabio Zanasi, A categorical semantics of signal flow graphs.

Next week I’ll explain some of the ideas at the Turin meeting on the categorical foundations of network theory. But I also want to talk about this new paper that Simon Wadsley of Cambridge University wrote with my student Nick Woods:

• Simon Wadsley and Nick Woods, PROPs for linear systems.

This makes the picture neater and more general!

You see, Jason and I used signal flow diagrams to give a new description of the category of finite-dimensional vector spaces and linear maps. This category plays a big role in the control theory of linear systems. Bonchi, Sobociński and Zanasi gave a closely related description of an equivalent category, \(\mathrm{Mat}(k)\), where:

• objects are natural numbers, and

• a morphism \(f : m \to n\) is an \(n \times m\) matrix with entries in the field \(k\),

and composition is given by matrix multiplication.
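
To make this concrete, here is a minimal sketch in Python with NumPy (my illustration, not part of the original post): a morphism \(f : m \to n\) is stored as an \(n \times m\) array, and composing morphisms is just multiplying matrices.

```python
import numpy as np

# In Mat(k), a morphism f: m -> n is an n x m matrix with entries in k.
# Here k is the real numbers, purely for illustration.
f = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])   # f: 3 -> 2, stored as a 2 x 3 matrix

g = np.array([[1.0, -1.0]])       # g: 2 -> 1, stored as a 1 x 2 matrix

# The composite g o f : 3 -> 1 is the matrix product of g and f.
gf = g @ f                        # a 1 x 3 matrix
print(gf)                         # prints the 1 x 3 matrix [1, 1, -3]
```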

But Wadsley and Woods generalized all this work to cover \(\mathrm{Mat}(R)\) whenever \(R\) is a commutative rig. A rig is a ‘ring without negatives’—like the natural numbers. We can multiply matrices valued in any rig, and this includes some very useful examples… as I’ll explain later.

Wadsley and Woods proved:

Theorem. Whenever \(R\) is a commutative rig, \(\mathrm{Mat}(R)\) is the PROP for bicommutative bimonoids over \(R\).

This result is quick to state, but it takes a bit of explaining! So, let me start by bringing in some definitions.

Bicommutative bimonoids

We will work in any symmetric monoidal category, and draw morphisms as string diagrams.

A commutative monoid is an object equipped with a multiplication:

and a unit:

obeying these laws:

For example, suppose \(\mathrm{FinVect}_k\) is the symmetric monoidal category of finite-dimensional vector spaces over a field \(k\), with direct sum as its tensor product. Then any object \(V \in \mathrm{FinVect}_k\) is a commutative monoid where the multiplication is addition:

\[ (x,y) \mapsto x + y \]

and the unit is zero: that is, the unique map from the zero-dimensional vector space to \(V\).

Turning all this upside down, a cocommutative comonoid has a comultiplication:

and a counit:

obeying these laws:

For example, consider our vector space \(V \in \mathrm{FinVect}_k\) again. It’s a cocommutative comonoid where the comultiplication is duplication:

\[ x \mapsto (x,x) \]

and the counit is deletion: that is, the unique map from \(V\) to the zero-dimensional vector space.

Given an object that’s both a commutative monoid and a cocommutative comonoid, we say it’s a bicommutative bimonoid if these extra axioms hold:

You can check that these are true for our running example of a finite-dimensional vector space \(V\). The most exciting one is the top one, which says that adding two vectors and then duplicating the result is the same as duplicating each one, then adding them appropriately.

Our example has some other properties, too! Each element \(c \in k\) defines a morphism from \(V\) to itself, namely scalar multiplication by \(c\):

\[ x \mapsto c x \]

We draw this as follows:

These morphisms are compatible with the ones so far:

Moreover, all the ‘rig operations’ in \(k\)—that is, addition, multiplication, 0 and 1, but not subtraction or division—can be recovered from what we have so far:

We summarize this by saying our vector space \(V\) is a bicommutative bimonoid ‘over \(k\)’.

More generally, suppose we have a bicommutative bimonoid \(A\) in a symmetric monoidal category. Let \(\mathrm{End}(A)\) be the set of bicommutative bimonoid homomorphisms from \(A\) to itself. This is actually a rig: there’s a way to add these homomorphisms, and also a way to ‘multiply’ them (namely, compose them).
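
Spelled out in formulas (my paraphrase, using the usual bimonoid notation rather than diagrams from the original post): writing \(\mu, \eta, \Delta, \epsilon\) for the multiplication, unit, comultiplication and counit of \(A\), the rig structure on \(\mathrm{End}(A)\) is

\[ f + g = \mu \circ (f \otimes g) \circ \Delta, \qquad 0 = \eta \circ \epsilon, \qquad f \cdot g = f \circ g, \qquad 1 = \mathrm{id}_A. \]

In the running example \(V \in \mathrm{FinVect}_k\), adding scalar multiplication by \(c\) to scalar multiplication by \(c'\) this way gives \(x \mapsto (x,x) \mapsto (c x, c' x) \mapsto (c + c') x\), which is scalar multiplication by \(c + c'\), as one would hope.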

Suppose \(R\) is any commutative rig. Then we say \(A\) is a bicommutative bimonoid over \(R\) if it’s equipped with a rig homomorphism

\[ \Phi : R \to \mathrm{End}(A) \]

This is a way of summarizing the diagrams I just showed you! You see, each \(c \in R\) gives a morphism from \(A\) to itself, which we write as

The fact that this is a bicommutative bimonoid endomorphism says precisely this:

And the fact that \(\Phi\) is a rig homomorphism says precisely this:

So sometimes the right word is worth a dozen pictures!

What Jason and I showed is that for any field \(k\), the category \(\mathrm{FinVect}_k\) is the free symmetric monoidal category on a bicommutative bimonoid over \(k\). This means that the above rules, which are rules for manipulating signal flow diagrams, completely characterize the world of linear algebra!

Bonchi, Sobociński and Zanasi used ‘PROPs’ to prove a similar result where the field is replaced by a sufficiently nice commutative ring. And Wadsley and Woods used PROPs to generalize even further to the case of an arbitrary commutative rig!

But what are PROPs?

PROPs

A PROP is a particularly tractable sort of symmetric monoidal category: a strict symmetric monoidal category where the objects are natural numbers and the tensor product of objects is given by ordinary addition. The symmetric monoidal category \(\mathrm{FinVect}_k\) is equivalent to the PROP \(\mathrm{Mat}(k)\), where a morphism \(f : m \to n\) is an \(n \times m\) matrix with entries in \(k\), composition of morphisms is given by matrix multiplication, and the tensor product of morphisms is the direct sum of matrices.
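
The tensor product of morphisms can be sketched in the same style as before (again my illustration, not from the post): the direct sum of matrices just places the two blocks on the diagonal.

```python
import numpy as np

def direct_sum(f, g):
    """Direct sum of matrices: the tensor product of morphisms in Mat(k)."""
    n1, m1 = f.shape
    n2, m2 = g.shape
    h = np.zeros((n1 + n2, m1 + m2))
    h[:n1, :m1] = f       # f in the top-left block
    h[n1:, m1:] = g       # g in the bottom-right block
    return h

f = np.array([[1.0, 2.0]])      # f: 2 -> 1
g = np.array([[3.0],
              [4.0]])           # g: 1 -> 2
print(direct_sum(f, g))         # f (+) g : 2 + 1 -> 1 + 2, a 3 x 3 block-diagonal matrix
```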

We can define a similar PROP \(\mathrm{Mat}(R)\) whenever \(R\) is a commutative rig, and Wadsley and Woods gave an elegant description of the ‘algebras’ of \(\mathrm{Mat}(R)\). Suppose \(C\) is a PROP and \(D\) is a strict symmetric monoidal category. Then the category of algebras of \(C\) in \(D\) is the category of strict symmetric monoidal functors \(F : C \to D\) and natural transformations between these.

If for every choice of \(D\) the category of algebras of \(C\) in \(D\) is equivalent to the category of algebraic structures of some kind in \(D\), we say \(C\) is the PROP for structures of that kind. This explains the theorem Wadsley and Woods proved:

Theorem. Whenever \(R\) is a commutative rig, \(\mathrm{Mat}(R)\) is the PROP for bicommutative bimonoids over \(R\).

The fact that an algebra of \(\mathrm{Mat}(R)\) is a bicommutative bimonoid is equivalent to all this stuff:

The fact that \(\Phi(c)\) is a bimonoid homomorphism for all \(c \in R\) is equivalent to this stuff:

And the fact that \(\Phi\) is a rig homomorphism is equivalent to this stuff:

This is a great result because it includes some nice new examples.

First, the commutative rig of natural numbers gives a PROP \(\mathrm{Mat}\). This is equivalent to the symmetric monoidal category \(\mathrm{FinSpan}\), where morphisms are isomorphism classes of spans of finite sets, with disjoint union as the tensor product. Steve Lack had already shown that \(\mathrm{FinSpan}\) is the PROP for bicommutative bimonoids. But this also follows from the result of Wadsley and Woods, since every bicommutative bimonoid \(V\) is automatically equipped with a unique rig homomorphism

\[ \Phi : \mathbb{N} \to \mathrm{End}(V) \]

Second, the commutative rig of booleans

\[ \mathbb{B} = \{F,T\} \]

with ‘or’ as addition and ‘and’ as multiplication gives a PROP \(\mathrm{Mat}(\mathbb{B})\). This is equivalent to the symmetric monoidal category \(\mathrm{FinRel}\) where morphisms are relations between finite sets, with disjoint union as the tensor product. Samuel Mimram had already shown that this is the PROP for special bicommutative bimonoids, meaning those where comultiplication followed by multiplication is the identity:

But again, this follows from the general result of Wadsley and Woods!
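
As a concrete check on the boolean case (my sketch, not from the original post), multiplying matrices over \(\mathbb{B}\), with ‘or’ as addition and ‘and’ as multiplication, is exactly composition of relations:

```python
import numpy as np

# A morphism in Mat(B) is a boolean matrix: entry [i, j] = True means
# "the i-th output element is related to the j-th input element".
R = np.array([[True, False, True],
              [False, True, False]])   # a relation from a 3-element set to a 2-element set

S = np.array([[True, True],
              [False, False]])         # a relation from a 2-element set to a 2-element set

# Composition over the boolean rig: 'or' plays the role of +, 'and' of x.
# (S o R)[i, k] = OR over j of ( S[i, j] AND R[j, k] )
SR = np.zeros((S.shape[0], R.shape[1]), dtype=bool)
for i in range(S.shape[0]):
    for k in range(R.shape[1]):
        SR[i, k] = any(S[i, j] and R[j, k] for j in range(S.shape[1]))

print(SR)   # the composite relation, again a boolean matrix
```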

Finally, taking the commutative ring of integers \(\mathbb{Z}\), Wadsley and Woods showed that \(\mathrm{Mat}(\mathbb{Z})\) is the PROP for bicommutative Hopf monoids. The key here is that scalar multiplication by \(-1\) obeys the axioms for an antipode—the extra morphism that makes a bimonoid into a Hopf monoid. Here are those axioms:

More generally, whenever \(R\) is a commutative ring, the presence of \(-1 \in R\) guarantees that a bimonoid over \(R\) is automatically a Hopf monoid over \(R\). So, when \(R\) is a commutative ring, Wadsley and Woods’ result implies that \(\mathrm{Mat}(R)\) is the PROP for Hopf monoids over \(R\).
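
For the running example \(V \in \mathrm{FinVect}_k\) this can be checked by hand (my worked line, not in the original post): comultiply, apply \(-1\) to one copy, then multiply,

\[ x \mapsto (x, x) \mapsto (-x, x) \mapsto -x + x = 0, \]

which is indeed the composite of the counit and the unit, the map sending everything to \(0\).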

Earlier, in their paper on ‘interacting Hopf algebras’, Bonchi, Sobociński and Zanasi had given an elegant and very different proof that \(\mathrm{Mat}(R)\) is the PROP for Hopf monoids over \(R\) whenever \(R\) is a principal ideal domain. The advantage of their argument is that they build up the PROP for Hopf monoids over \(R\) from smaller pieces, using some ideas developed by Steve Lack. But the new argument by Wadsley and Woods has its own charm.

In short, we’re getting the diagrammatics of linear algebra worked out very nicely, providing a solid mathematical foundation for signal flow diagrams in control theory!

by john (baez@math.ucr.edu) at May 22, 2015 07:56 PM

Lubos Motl - string vacua and pheno

Far-reaching features of physical theories are not sociological questions
Backreaction has reviewed a book on the "philosophy of string theory" written by a trained physicist and philosopher Richard Dawid who may appear as a guest blogger here at some point.

Many of the statements sound reasonable – perhaps because they have a kind of a boringly neutral flavor. But somewhere in the middle, a reader must be shocked by this sentence – whose content is then repeated many times:
Look at the arguments [in favor of string theory] that he raises: The No Alternatives Argument and the Unexpected Explanatory Coherence are explicitly sociological.
Oh, really?




These two properties – or, if you want to be a skeptic, claimed properties – of string theory are self-evidently (claimed) intrinsic mathematical properties of string theory. String theory seems to have no mathematically possible alternatives; and its ideas fit together much more seamlessly than what you would expect for a generic man-made theory of this complexity a priori.




If you're not familiar with the last four decades of theoretical physics, you may have doubts whether string theory actually has these properties. But why would you think that these very questions are sociological in character?

If real-world humans want to answer such questions, they have to rely on the findings that have been made by themselves or other humans (unless some kittens turn out to be really clever), and only those that have been done by now. But the same self-evident limitations apply to every other question in science. We only know things about Nature that followed from the experience of humans, and only those in the past (and present), not those in the future. Does it mean that we should declare all questions that scientists are interested in to be "sociological questions"?

Postmodern and feminist philosophers (mocked by Alan Sokal's hoax) surely want to believe such things. All of science is just a manifestation of sociology. But can the rest of us agree that these postmodern opinions are pure šit? And if we can, can we please recognize that statements about string theory don't "conceptually" differ from other propositions in science and mathematics, so they are obviously non-sociological, too?

Alternatives of string theory – non-stringy consistent theories of quantum gravity in \(d\geq 4\) – either exist or they don't exist. What does it have to do with the society? Ideas in string theory either fit together, are unified, and point to universal mechanisms, or they don't. What is the role of the society here?

If you study what Sabine Hossenfelder actually means by the claim that these propositions are sociological, you will see an answer: She wants these questions to be studied as sociological questions because that's where she has something to offer. What she has to offer are lame and insulting conspiracy theories. String theory can't have any good properties because some string theorists are well-funded, or something like that.

This kind of assertion may impress the low quality human material that reads her blog but it won't influence a rational person. A rational person knows that whether a theory is funded has nothing to do with its particular mathematical properties. And if someone uses the argument about funding – in one way or another – as an argument to establish a proposition about a mathematical property of the theory, he or she is simply not playing the game of science. He or she – in this case Sabine Hossenfelder – is working on cheap propaganda.

Cheap propaganda may use various strategies. Global warming alarmists claim that the huge funding they are getting – really stealing – from the taxpayers' wallets proves that their alarming predictions are justified. They are attempting to intimidate everyone else. Sabine Hossenfelder uses the opposite strategy. Those who occasionally get a $3 million prize must be wrong – because that's what the jealous readers of Backreaction want to be true. Neither of these – opposite – kinds of propaganda has any scientific merit, however.

Needless to say, she is not the only one who would love to "establish" certain answers by sociological observations. It just can't be done. It can't be done by the supporters of a theory and it can't be done by its foes, either. To settle technical questions – even far-reaching, seemingly "philosophical" questions – about a theory, you simply need to study the theory technically, whether you like it or not. Hossenfelder doesn't have the capacity to do so in the case of string theory but that doesn't mean that she may meaningfully replace her non-existing expertise by something she knows how to do, namely by sociological conspiracy theories.

There is no rigorous yet universal proof but there are lots of non-rigorous arguments as well as context-dependent proofs that seem to imply that string theory is the only game in town. Also, thousands of papers about string theory are full of "unexpectedly coherent explanatory surprises" that physicists were "forced" to learn about when they investigated many issues.

I understand that you don't have to believe me that it's the case if you're actually unfamiliar with these "surprises". But you should still be able to understand that their existence is not a sociological question. And if they exist, those who know that they exist aren't affected and can't be affected by "sociological arguments" that would try to "deduce" something else. You should also be able to understand that those who have not mastered string theory can't actually deduce the answer to the question from any solid starting point. In the better case, they believe that string theory fails to have those important virtues. In the worse case, they force themselves to believe that string theory doesn't have these virtues because they are motivated to spread this opinion and they usually start with themselves.

At any rate, their opinion is nothing else than faith or noise – or something worse than that. There is nothing of scientific value to back it.

Now, while the review is basically a positive one, Backreaction ultimately denies all these arguments, anyway. Hossenfelder doesn't understand that the "only game in town" and "surprising explanatory coherence" are actually arguments that do affect a researcher's confidence that the theory is on the right track. And be sure that they do.

If string theory is the only game in town, well, then it obviously doesn't make sense to try to play any other games simply because there aren't any.

If string theory boasts this "surprising explanatory coherence", it means that the probability of its being right is (much) higher than it would be otherwise. Why?

Take dualities. They say that two theories constructed from significantly different starting points and looking different when studied too sloppily are actually exactly equivalent when you incorporate all conceivable corrections and compare the full lists of objects and phenomena. What does it imply for the probability that such a theory is correct?

A priori, \(A_i\) and \(B_j\) were thought to be different, mutually exclusive hypotheses. If you prove that \(A_i\equiv B_j\), they are no longer mutually exclusive. You should add up their prior probabilities. Both of them will be assigned the sum. The duality allowed you to cover a larger (twice as large) territory on the "landscape of candidate theories".
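
A toy version of the bookkeeping (my numbers, purely illustrative): with \(N\) candidate theories and a uniform prior, each starts with probability \(1/N\); once the duality \(A_i \equiv B_j\) is proven, the single theory they both describe carries

\[ P(A_i) + P(B_j) = \frac{1}{N} + \frac{1}{N} = \frac{2}{N}, \]

twice the weight of a candidate with no known dual description.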

You may view this quasi-Bayesian argument as an explanation of why important theories in physics almost always admit numerous "pictures" or "descriptions". They allow you to begin from various starting points. Quantum mechanics may be formulated in the Schrödinger picture, in the Heisenberg picture, or using Feynman path integrals. And there are lots of representations or bases of the Hilbert space you may pick, too. It didn't have to be like that. But important theories simply tend to have this property and while it seems impossible to calculate the probabilities accurately, the argument above explains why it's sensible to expect that important theories have many dual descriptions.

Return to the year 100 AD and ask which city is the largest in the world. There may be many candidates. Some candidate towns sit on several roads. There is one candidate to which all roads lead. I am sure you understand where I am going: Rome was obviously the most important city in the world and the fact that all roads led to Rome was a legitimate argument to think that Rome was more likely to be the winner. The roads play the same role as the dualities and unexpected mathematical relationships discovered during the research of string theory. The analogy is in no way exact but it is good enough.

There is another, refreshingly different way to understand why the dualities and mathematical relationships make string theory more likely. They reduce the number of independent assumptions, axioms, concepts, and building blocks of the theory. In this way, the theory becomes more natural and less contrived. If you apply Occam's razor correctly, this reduction of the number of the independent building blocks, concepts, axioms, and assumptions occurs for string theory and makes its alternatives look contrived in comparison.

For example, strings may move but because they're extended, they may also wind around a circle in the spacetime. T-duality allows you to exactly interchange these two quantum numbers. They're fundamentally "the same kind of information" which means that you shouldn't double count it. The theory is actually much simpler, fundamentally speaking, than a theory in which "an object may move as well as wind" because these two verbs are just two different interpretations of the same thing.

In quantum field theory, solitons are objects such as magnetic monopoles that, in the weak coupling limit, may be identified with a classical solution of the field theory. If the theory has an S-duality – which may be the case for both string theory and quantum field theory – such a soliton may be interchanged with the fundamental string (or electric charge). Again, they're fundamentally the same thing in two limiting descriptions or interpretations. If you count how many independent building blocks (as described by Occam's razor) a theory has, and if you do so in some fundamentally robust way, a theory with an S-duality will have fewer independent building blocks or concepts than a generic theory without any S-duality where the elementary electric excitations and the classical field-theoretical solutions would be completely unrelated! Not only are all particle species made of the same string in weakly coupled string theory; the objects that seem heavier or more extended than a vibrating string are secretly "equivalent" to a vibrating string, too.

Similar remarks apply to all dualities and similar relationships in string theory, including S-duality, T-duality, U-duality, mirror symmetry, equivalence of Gepner models (conglomerates of minimal models) and particular Calabi-Yau shapes, string-string duality, IIA-M and HE-M duality, the existence of matrix models, AdS/CFT correspondence, conceptually different but agreeing calculations of the black hole entropy, ER-EPR correspondence, and others. All these insights are roads leading to Rome, arguments that the city at the end of several roads is actually the same one and it is therefore more interesting.



None of these properties of string theory prove that it's the right theory of quantum gravity. But they do make it meaningful for a rational theoretical physicist to spend much more time with the structure than with alternative structures that don't have these properties. People simply spend more time in a city with many roads and on roads close to this city. The reasons are completely natural and rationally justified. These reasons have something to do with a separation of the prior probabilities – and of the researchers' time.

I understand that a vast majority of people, even physicists with a general PhD, can't understand these matters because their genuine understanding depends on specialized expertise. But I am just so incredibly tired of all those low quality people who try to "reduce" all these important physics questions to sociological memes and ad hominem attacks. You just can't do that, you shouldn't do that, and it's always the people who "reduce" the discourse in this lame sociological direction who suck.

Sabine Hossenfelder is one of the people who badly suck.

by Luboš Motl (noreply@blogger.com) at May 22, 2015 05:56 PM

Peter Coles - In the Dark

The Diary of One who Disappeared

At the end of a very busy day before I go home and vegetate, I only just have time for a quick post about the concert I attended last night in St George’s Church, Kemptown. It was a convenient venue for me as it is just at the end of my street; my polling station for the recent elections was there too.

Anyway, the title of the concert is taken from the song cycle of the same name composed by Leoš Janáček. It’s a sequence of 21 poems about a young man who falls for a seductive gypsy girl and ends up running away from home to be with her, and to care for the baby son she turns out, at the end of the cycle, to have borne. There’s also a very tempestuous piano interlude, labelled Intermezzo Erotico in the programme, which (presumably) depicts the circumstances in which the baby was conceived. This work was performed by mezzo-soprano Anna Huntley and tenor Robert Murray, accompanied by James Baillieu at the piano (who also played at the recital I attended last week). Three female voices also took part in a few of these songs; they were hidden away in the gallery so it was quite a surprise when they joined in.

Despite being a big fan of Janáček I had never heard this music before, and I found it absolutely wonderful. It involves many abrupt and unexpected changes of mood, with some simple folk-like melodies juxtaposed with much more disturbed and fragmented musical language. At the end, when the young man reveals that he has a son, the tenor reaches up for two stunning top Cs which took me completely by surprise and sent cold shivers down my spine. I must get a recording of this work. As soon as it had finished I wanted to listen to it all over again.

The Diary of One who Disappeared formed the second half of the concert. The first was also very varied and interesting. We began with the two principal singers taking turns at performing a selection of six from a well-known set of 49 Deutsche Volkslieder by Johannes Brahms. Then Robert Murray – who looks somewhat disconcertingly like Shane Warne – performed the Seven Sonnets of Michelangelo by Benjamin Britten (his Opus 22). These were the first pieces Britten composed specifically for the voice of his partner Peter Pears and were written way back in 1940. They’re all poems about love in its various forms and I think they’re wonderful, especially Sonnet XXX:

Veggio co’ bei vostri occhi un dolce lume,
Che co’ miei ciechi già veder non posso;
Porto co’ vostri piedi un pondo addosso,
Che de’ mie zoppi non è già costume.
Volo con le vostr’ale senza piume;
Col vostr’ingegno al ciel sempre son mosso;
Dal vostr’arbitrio son pallido e rosso,
Freddo al sol, caldo alle più fredde brume.
Nel voler vostro è sol la voglia mia,
I mie’ pensier nel vostro cor si fanno,
Nel vostro fiato son le mie parole.
Come luna da sè sol par ch’io sia;
Chè gli occhi nostri in ciel veder non sanno
Se non quel tanto che n’accende il sole.

It’s a fine poem in itself but Britten’s setting of it is both beautiful and imaginative. I’m guessing that it’s extremely difficult to sing because the vocal line is very complex and has some very challenging intervals. You can almost imagine it being part of a bel canto opera…

The first half of the concert closed with the Seven Gypsy Songs (Opus 55) by Antonín Dvořák, by far the most famous of which is Songs My Mother Taught Me.

It was a very fine recital with some lovely music, beautifully sung. In fact the singing was so nice that a blackbird outside the church decided to join in during the first half. It was a nicely balanced programme tied together by two recurrent themes: Gypsies and love (and sometimes both at the same time). I particularly enjoyed the blend of familiar and unfamiliar. For example, although I know the Sonnets by Britten I’ve only ever heard the classic Britten-Pears version, so it was interesting to hear them performed by a very different singer.


by telescoper at May 22, 2015 05:25 PM

The n-Category Cafe

How to Acknowledge Your Funder

A comment today by Stefan Forcey points out ways in which US citizens can try to place legal limits on the surveillance powers of the National Security Agency, which we were discussing in the context of its links with the American Mathematical Society. If you want to act, time is of the essence!

But Stefan also tells us how he resolved a dilemma. Back here, he asked Café patrons what he should do about the fact that the NSA was offering him a grant (for non-classified work). Take their money and contribute to the normalization of the NSA’s presence within the math community, or refuse it and cause less mathematics to be created?

What he decided was to accept the funding and — in this paper at least — include a kind of protesting acknowledgement, citing his previous article for the Notices of the AMS.

I admire Stefan for openly discussing his dilemma, and I think there’s a lot to be said for how he’s handled it.

by leinster (tom.leinster@ed.ac.uk) at May 22, 2015 01:45 PM

astrobites - astro-ph reader's digest

Cosmic Reionization of Hydrogen and Helium

In a time long long ago…

The story we’re hearing today requires us to go back to the beginning of the Universe and briefly explore its rich yet mysterious history. The Big Bang marks the creation of the Universe 13.8 billion years ago. Seconds after the Big Bang, fundamental particles came into existence. They smashed together to form protons and neutrons, which collided to form the nuclei of hydrogen, helium, and lithium. Electrons, at this time, were whizzing by at velocities too high for them to be captured by the surrounding atomic nuclei. The Universe was ionized and there were no stable atoms.

The Universe coming of age: Recombination and Reionization

Some 300,000 or so years later, the Universe had cooled down a little bit. The electrons weren’t moving as fast as before and could be captured by atomic nuclei to form neutral atoms. This ushered in the era of recombination and propelled the Universe toward a neutral state. Structure formation happened next, and some of the first structures to form are thought to have been quasars (actively accreting supermassive black holes), massive galaxies, and the first generation of stars (population III stars). The intense radiation from incipient quasars and stars started to ionize the neutral hydrogen in their surroundings, heralding the second milestone of the Universe, known as the epoch of reionization (EoR). Recent cosmological studies suggest that the reionization epoch began no later than redshift (z) ~ 10.6, corresponding to ~350 Myr after the Big Bang.

To probe when reionization ended, we can look at the spectra of high-redshift quasars and compare them with those of low-redshift quasars. Figure 1 shows this comparison. The spectrum of a quasar at z ~ 6 shows almost zero flux at wavelengths shorter than the quasar’s redshifted Lyman-alpha line. This feature is known as the Gunn-Peterson trough and is caused by the absorption of the quasar light by neutral hydrogen as the light travels through space. Low-redshift quasars do not show this feature because the hydrogen along the path of the quasar light is already ionized: the light does not get absorbed and can travel unobstructed to our view. The difference between the spectra of low- and high-redshift quasars suggests that the Universe approached the end of reionization around z ~ 6, corresponding to ~1 Gyr after the Big Bang. (This astrobite provides a good review of reionization and its relation to quasar spectra.)


Fig 1 – The top panel shows a synthetic quasar spectrum at z = 0, compared with the bottom panel showing the spectrum of the currently known highest-redshift quasar, ULAS J112001.48+064124.3 (hereafter ULAS J1120+0641), at z ~ 7.1. While the Lyman-alpha line of the top spectrum is located at its rest-frame wavelength of 1216 Å, it is strongly redshifted in the spectrum of ULAS J1120+0641 (note the scale of the wavelengths). Compared to the low-redshift spectrum, there is a rapid drop in flux blueward of the Lyman-alpha line for ULAS J1120+0641, signifying the Gunn-Peterson trough. [Top figure from P. J. Francis et al. 1991 and bottom figure from Mortlock et al. 2011]
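
To connect the redshifts quoted here to cosmic time, here is a small sketch (mine, not from the astrobite) using astropy's built-in Planck cosmology; the exact ages depend on the assumed cosmological parameters, so they may differ somewhat from the rounded figures quoted in the text.

```python
from astropy.cosmology import Planck13
import astropy.units as u

# Age of the Universe at a few of the redshifts mentioned above:
# z ~ 10.6 (earliest onset of reionization), z ~ 7.1 (ULAS J1120+0641),
# and z ~ 6 (approximate end of hydrogen reionization).
for z in (10.6, 7.1, 6.0):
    age = Planck13.age(z).to(u.Myr)   # cosmic age at redshift z
    print(f"z = {z:4.1f}  ->  t = {age.value:6.0f} Myr after the Big Bang")
```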

 

Problems with Reionization, and a Mini Solution

The topic of today’s paper concerns possible ionizing sources during the epoch of reionization, which also happens to be one of the actively researched questions in astronomy. Quasars and stars in galaxies are the most probable ionizing sources, since they are the emitters of the Universe’s most intense radiation (see this astrobite for how galaxies might ionize the early Universe). This intense radiation falls in the UV and X-ray regimes and can ionize neutral hydrogen (and potentially also neutral helium, which requires twice as much ionizing energy). But there are problems with this picture.

First of all, the ionizing radiation from high-redshift galaxies is found to be insufficient to maintain the Universe’s immense bath of hydrogen in an ionized state. To make up for this, the fraction of ionizing photons that escape the galaxies (and contribute to reionization) — known as the escape fraction — has to be higher than what we see observationally. Second of all, we believe that the contribution of quasars to the ionizing radiation becomes less important at higher and higher redshifts and is negligible at z >~ 6. So we have a conundrum here. If we can’t solve the problem of reionization with quasars and galaxies, we need other ionizing sources. The paper today investigates one particular ionizing source: mini-quasars.

What are mini-quasars? Before that, what do I mean when I say quasars? Quasars in the normal sense of the word usually refer to the central accreting engines of supermassive black holes (~10^9 Msun), where powerful radiation escapes in the form of a jet. A mini-quasar is the dwarf version of a quasar. More quantitatively, it is the central engine of an intermediate-mass black hole (IMBH) with a mass of ~10^2 – 10^5 Msun. Previous studies hinted at the role of mini-quasars in the reionization of hydrogen; the authors of this paper went the extra mile and studied the combined impact of mini-quasars and stars not only on the reionization of hydrogen, but also on the reionization of helium. Looking into the reionization of helium allows us to investigate the properties of mini-quasars. Much like solving a set of simultaneous equations, getting the correct answer to the problem of hydrogen reionization requires that we also simultaneously constrain the reionization of helium.

The authors calculated the number of ionizing photons from mini-quasars and stars analytically. They considered only the most optimistic case for mini-quasars, where all ionizing photons contribute to reionization, i.e. the escape fraction f_esc,BH = 1. Since the escape fraction of ionizing photons from stars is still poorly constrained, three escape fractions f_esc are considered. Figure 2 shows the relative contributions of mini-quasars and stars in churning out hydrogen-ionizing photons as a function of redshift for the different escape fractions from stars. As long as f_esc is small enough, mini-quasars are able to produce more hydrogen-ionizing photons than stars.


Fig 2 – Ratio of the number of ionizing photons produced by mini-quasars relative to stars (y-axis) as a function of redshift (x-axis). Three escape fractions of ionizing photons f_esc from stars are considered. [Figure 2 of the paper]

Figure 3 shows the contributions of mini-quasars, and of mini-quasars plus (normal) quasars, toward the reionization of hydrogen and helium. Mini-quasars alone are found to contribute non-negligibly (~20%) toward hydrogen reionization at z ~ 6, while the contribution from quasars starts to become more important at low redshifts. The combined contribution from mini-quasars and quasars is observationally consistent with when helium reionization ended. Figure 4 shows the combined contribution of mini-quasars and stars to hydrogen and helium reionization. The escape fraction of ionizing photons from stars significantly affects both, i.e. it influences whether hydrogen and helium reionization end earlier or later than current theory predicts.


Fig 3 – Volume of space filled by ionized hydrogen and helium, Qi(z), as a function of redshift z. The different colored lines signify the contributions of mini-quasars (IMBH) and quasars (SMBH) to hydrogen and helium reionizations. [Figure 3 of the paper]


Fig 4 – Volume of space filled by ionized hydrogen and helium, Qi(z), as a function of redshift z. The two panels refer to the different assumptions on the mini-quasar spectrum, where the plot on the bottom is the more favorable of the two. The different lines refer to the different escape fractions of ionizing photons from stars that contribute to hydrogen and helium reionizations. [Figure 4 of the paper]

The authors point out a couple of caveats in their paper. Although they demonstrate that the contribution from mini-quasars is not negligible, this holds only for the most optimistic case, where all photons from the mini-quasars contribute to reionization. The authors also did not address the important issue of feedback from accretion onto IMBHs, which regulates BH growth and consequently determines how common mini-quasars are. The escape fraction from stars also needs to be better constrained in order to place a tighter limit on the joint contribution of mini-quasars and stars to reionization. Improved measurements of helium reionization would also help in constraining the properties of mini-quasars. Phew… sounds like we still have a lot of work to do. This paper presents some interesting results, but we are definitely still treading on muddy ground, and the business of cosmic reionization remains as tricky as ever.

 

by Suk Sien Tie at May 22, 2015 12:24 PM

Christian P. Robert - xi'an's og

postdoc in the Alps

Post-doctoral Position in Spatial/Computational Statistics (Grenoble, France)

A post-doctoral position is available in Grenoble, France, to work on computational methods for spatial point process models. The candidate will work with Simon Barthelmé (GIPSA-lab, CNRS) and Jean-François Coeurjolly (Univ. Grenoble Alpes, Laboratory Jean Kuntzmann) on extending point process methodology to deal with large datasets involving multiple sources of variation. We will focus on eye movement data, a new and exciting application area for spatial statistics. The work will take place in the context of an interdisciplinary project on eye movement modelling involving psychologists, statisticians and applied mathematicians from three different institutes in Grenoble.

The ideal candidate has a background in spatial or computational statistics or machine learning. Knowledge of R (and in particular the package spatstat) and previous experience with point process models is a definite plus.

The duration of the contract is 12+6 months, starting 01.10.2015 at the earliest. Salary is according to standard CNRS scale (roughly EUR 2k/month).

Grenoble is the largest city in the French Alps, with a very strong science and technology cluster. It is a pleasant place to live, in an exceptional mountain environment.


Filed under: Kids, Mountains, Statistics, Travel, University life Tagged: Alps, CNRS, computational statistics, Grenoble, IMAG, Mount Lady Macdonald, mountains, point processes, postdoctoral position, spatial statistics

by xi'an at May 22, 2015 12:18 PM

arXiv blog

Computational Aesthetics Algorithm Spots Beauty That Humans Overlook

Beautiful images are not always popular ones, which is where the CrowdBeauty algorithm can help, say computer scientists.

One of the depressing truths about social media is that the popularity of an image is not necessarily an indication of its quality. It’s easy to find hugely popular content of dubious quality. But it’s much harder to find unpopular content of high quality.

May 22, 2015 05:00 AM

May 21, 2015

Christian P. Robert - xi'an's og

Bruce Lindsay (March 7, 1947 — May 5, 2015)

When early registering for Seattle (JSM 2015) today, I discovered on the ASA webpage the very sad news that Bruce Lindsay had passed away on May 5.  While Bruce was not a very close friend, we had met and interacted enough times for me to feel quite strongly about his most untimely death. Bruce was indeed “Mister mixtures” in many ways and I have always admired the unusual and innovative ways he had found for analysing mixtures. Including algebraic ones through the rank of associated matrices. Which is why I first met him—besides a few words at the 1989 Gertrude Cox (first) scholarship race in Washington DC—at the workshop I organised with Gilles Celeux and Mike West in Aussois, French Alps, in 1995. After this meeting, we met twice in Edinburgh at ICMS workshops on mixtures, organised with Mike Titterington. I remember sitting next to Bruce at one workshop dinner (at Blonde) and him talking about his childhood in Oregon and his father being a journalist and how this induced him to become an academic. He also contributed a chapter on estimating the number of components [of a mixture] to the Wiley book we edited out of this workshop. Obviously, his work extended beyond mixtures to a general neo-Fisherian theory of likelihood inference. (Bruce was certainly not a Bayesian!) Last time, I met him, it was in Italia, at a likelihood workshop in Venezia, October 2012, mixing Bayesian nonparametrics, intractable likelihoods, and pseudo-likelihoods. He gave a survey talk about composite likelihood, telling me about his extended stay in Italy (Padua?) around that time… So, Bruce, I hope you are now running great marathons in a place so full of mixtures that you can always keep ahead of the pack! Fare well!

 


Filed under: Books, Running, Statistics, Travel, University life Tagged: American statisticians, Bruce Lindsay, composite likelihood, Edinburgh, ICMS, Italia, marathon, mixture estimation, mixtures of distributions, Penn State University, unknown number of components, Venezia

by xi'an at May 21, 2015 10:15 PM

Tommaso Dorigo - Scientificblogging

Bang !! 13 TeV - The Highest Energy Ever Achieved By Mankind ?!
The LHC has finally started to produce 13-TeV proton-proton collisions!

The picture below shows one such collision, as recorded by the CMS experiment today. The blue boxes show the energy recorded in the calorimeter, which measures the energy of particles by "destroying" them as they interact with the dense layers of matter that this device is made of; the yellow curves show tracks reconstructed from the ionization deposits left by charged particles in the silicon detector layers of the inner tracker.

read more

by Tommaso Dorigo at May 21, 2015 08:01 PM

Emily Lakdawalla - The Planetary Society Blog

LightSail Update: All Systems Nominal
It's been 24 hours since The Planetary Society’s LightSail spacecraft was deposited into space yesterday afternoon. All systems continue to look healthy.

May 21, 2015 07:33 PM

John Baez - Azimuth

Information and Entropy in Biological Systems (Part 4)

I kicked off the workshop on Information and Entropy in Biological Systems with a broad overview of the many ways information theory and entropy get used in biology:

• John Baez, Information and entropy in biological systems.

Abstract. Information and entropy are being used in biology in many different ways: for example, to study biological communication systems, the ‘action-perception loop’, the thermodynamic foundations of biology, the structure of ecosystems, measures of biodiversity, and evolution. Can we unify these? To do this, we must learn to talk to each other. This will be easier if we share some basic concepts which I’ll sketch here.

The talk is full of links, in blue. If you click on these you can get more details. You can also watch a video of my talk:


by John Baez at May 21, 2015 05:26 PM

Jester - Resonaances

How long until it's interesting?
Last night, for the first time, the LHC collided particles at a center-of-mass energy of 13 TeV. Routine collisions should follow early in June. The plan is to collect 5-10 inverse femtobarns (fb⁻¹) of data before winter comes, adding to the 25 fb⁻¹ from Run-1. It's high time to dust off your Madgraph and tool up for what may be the most exciting time in particle physics in this century. But when exactly should we start getting excited? When should we start friending LHC experimentalists on facebook? When is the time to look over their shoulders for a glimpse of gluinos popping out of the detectors? One simple way to estimate the answer is to calculate the luminosity at which the number of particles produced at 13 TeV will exceed that produced during the whole of Run-1. This depends on the ratio of the production cross sections at 13 and 8 TeV, which is of course strongly dependent on the particle's mass and production mechanism. Moreover, the LHC discovery potential will also depend on how the background processes change, and on a host of other experimental issues. Nevertheless, let us forget for a moment about the fine print and calculate the ratio of 13 and 8 TeV cross sections for a few particles popular among the general public. This will give us a rough estimate of the threshold luminosity at which things should get interesting, as the sketch below illustrates.
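
Here is a minimal sketch of that back-of-the-envelope estimate (mine, not Jester's): the threshold is simply the Run-1 luminosity divided by the cross-section ratio, so the numbers come out close to, though not exactly equal to, the rounded values quoted in the list below.

```python
# Run-1 integrated luminosity at 8 TeV, in fb^-1 (from the text above).
run1_lumi = 25.0

# Approximate 13 TeV / 8 TeV production cross-section ratios quoted below.
ratios = {
    "Higgs boson": 2.3,
    "tth": 4,
    "300 GeV Higgs partner": 2.7,
    "800 GeV stops": 10,
    "3 TeV Z' boson": 18,
    "1.4 TeV gluino": 30,
}

for name, ratio in ratios.items():
    threshold = run1_lumi / ratio   # luminosity at which the 13 TeV yield overtakes Run-1
    print(f"{name:22s}  ratio ~ {ratio:4.1f}  ->  ~{threshold:4.1f} fb^-1")
```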

  • Higgs boson: Ratio≈2.3; Luminosity≈10 fb⁻¹.
    Higgs physics will not be terribly exciting this year, with only a modest improvement of the coupling measurements expected.
  • tth: Ratio≈4; Luminosity≈6 fb⁻¹.
    Nevertheless, for certain processes involving the Higgs boson the improvement may be a bit faster. In particular, the theoretically very important process of Higgs production in association with top quarks (tth) was on the verge of being detected in Run-1. If we're lucky, this year's data may tip the scale and provide evidence for a non-zero top Yukawa coupling.
  • 300 GeV Higgs partner: Ratio≈2.7; Luminosity≈9 fb⁻¹.
    Not much hope for new scalars in the Higgs family this year.
  • 800 GeV stops: Ratio≈10; Luminosity≈2 fb⁻¹.
    800 GeV is close to the current lower limit on the mass of a scalar top partner decaying to a top quark and a massless neutralino. In this case, one should remember that backgrounds also increase at 13 TeV, so the progress will be a bit slower than the above number suggests. Nevertheless, this year we will certainly explore new parameter space and make the naturalness problem even more severe. Similar conclusions hold for a fermionic top partner.
  • 3 TeV Z' boson: Ratio≈18; Luminosity≈1.2 fb⁻¹.
    Getting interesting! Limits on Z' bosons decaying to leptons will be improved very soon; moreover, in this case background is not an issue.
  • 1.4 TeV gluino: Ratio≈30; Luminosity≈0.7 fb⁻¹.
    If all goes well, better limits on gluinos can be delivered by the end of the summer!

In summary, the progress will be very fast for new heavy particles. In particular, for gluon-initiated production of TeV-scale particles, already the first inverse femtobarn may bring us into new territory. For lighter particles the progress will be slower, especially when backgrounds are difficult. On the other hand, precision physics, such as the Higgs coupling measurements, is unlikely to be in the spotlight this year.

by Jester (noreply@blogger.com) at May 21, 2015 05:20 PM

Peter Coles - In the Dark

Phlogiston, Dark Energy and Modified Levity

What happens when something burns?

Had you asked a seventeenth-century scientist that question, the chances are the answer would have involved the word phlogiston, a name derived from the Greek φλογιστόν, meaning “burning up”. This “fiery principle” or “element” was supposed to be present in all combustible materials and the idea was that it was released into the air whenever any such stuff was ignited. The act of burning was thought to separate the phlogiston from the dephlogisticated “true” form of the material, also known as calx.

The phlogiston theory held sway until  the late 18th Century, when Antoine Lavoisier demonstrated that combustion results in an increase in weight of the material being burned. This poses a serious problem if burning also involves the loss of phlogiston unless phlogiston has negative weight. However, many serious scientists of the 18th Century, such as Georg Ernst Stahl, had already suggested that phlogiston might have negative weight or, as he put it, “levity”. Nowadays we would probably say “anti-gravity”.

Eventually, Joseph Priestley discovered what actually combines with materials during combustion:  oxygen. Instead of becoming dephlogisticated, things become oxidised by fixing oxygen from air, which is why their weight increases. It’s worth mentioning, though, that the name Priestley used for oxygen was in fact “dephlogisticated air” (because it was capable of combining more extensively with phlogiston than ordinary air). He remained a phlogistonian long after making the discovery that should have killed the theory.

So why am I rambling on about a scientific theory that has been defunct for more than two centuries?

Well,   there just might be a lesson from history about the state of modern cosmology. Not long ago I gave a talk in the fine city of Bath on the topic of Dark Energy and its Discontents. For the cosmologically uninitiated, the standard cosmological model involves the hypothesis that about 75% of the energy budget of the Universe is in the form of this “dark energy”.

Dark energy is needed to reconcile three basic measurements: (i) the brightness of distant supernovae, which seems to indicate the Universe is accelerating (which is where the anti-gravity comes in); (ii) the cosmic microwave background that suggests the Universe has flat spatial sections; and (iii) the direct estimates of the mass associated with galaxy clusters, which account for only about 25% of the mass needed to close the Universe. A universe without dark energy appears not to be able to account for these three observations simultaneously within our current understanding of gravity as obtained from Einstein’s theory of general relativity.

We don’t know much about what this dark energy is, except that in order to make our current understanding work out it has to produce an effect something like anti-gravity, vaguely reminiscent of the “negative weight” hypothesis mentioned above. In most theories, the dark energy component does this by violating the strong energy condition of general relativity. Alternatively, the observations might be accounted for by modifying our theory of gravity so that it produces anti-gravity in some other way. In the light of the discussion above maybe what we need is a new theory of levity? In other words, maybe we’re taking gravity too seriously?
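
For the record, the sense in which dark energy acts as “anti-gravity” can be made precise with the acceleration equation of a Friedmann universe (a standard textbook relation, not something specific to the talk): a component with sufficiently negative pressure makes the expansion accelerate, and that is exactly what violating the strong energy condition allows.

\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^2}\right), \qquad p < -\tfrac{1}{3}\rho c^2 \;\Longrightarrow\; \ddot{a} > 0

A cosmological constant, with p = -\rho c^2, comfortably satisfies this condition, which is why it can play the role of dark energy; the question raised here is whether that is really what is going on.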

Anyway, I don’t mind admitting how uncomfortable this dark energy makes me feel. It makes me even more uncomfortable that such an enormous  industry has grown up around it and that its existence is accepted unquestioningly by so many modern cosmologists. Isn’t there a chance that, with the benefit of hindsight, future generations will look back on dark energy in the same way that we now see the phlogiston theory?

Or maybe the dark energy really is phlogiston. That’s got to be worth a paper!


by telescoper at May 21, 2015 04:56 PM

Clifford V. Johnson - Asymptotia

So the equations are not…
Working on rough layouts of one of the stories for the book. One rough panel ended up not looking so rough, and after Monday's ink dalliances I was itching to fiddle with brushes again, and then I thought I'd share. So... slightly less rough, shall we say? A more careful version would point the eyes a bit better, for example... (Much of the conversation, filling a bit more of the white space, has been suppressed - spoilers.) -cvj Click to continue reading this post

by Clifford at May 21, 2015 03:49 PM

CERN Bulletin

First 13 TeV collisions: reporting from the CCC

On Wednesday 20 May at around 10.30 p.m., protons collided in the LHC at the record-breaking energy of 13 TeV for the first time. These test collisions were to set up various systems and, in particular, the collimators. The tests and the technical adjustments will continue in the coming days.

 

The CCC was abuzz as the LHC experiments saw 13 TeV collisions.
 

Preparation for the first physics run at 6.5 TeV per beam has continued in the LHC. This included the set-up and verification of the machine protection systems. In addition, precise measurements of the overall focusing properties of the ring – the so-called “optics” – were performed by inducing oscillations of the bunches, and observing the response over many turns with the beam position monitors (BPM).

The transverse beam size in the accelerator changes from the order of a millimetre around most of the circumference down to some tens of microns at the centre of the experiments where the beams collide. Reducing the beam size to the micrometre level while at top energy at the interaction points is called “squeezing”. Quadrupole magnets shape the beam, and small imperfections in magnetic field strength can mean that the actual beam sizes don’t exactly match the model. After an in-depth analysis of the BPM measurements and after simulating the results with correction models, the operators made small corrections to the magnetic fields. As a result, the beam sizes fit the model to within a few percent. This is remarkable for a 27 km machine!

The preparation for first collisions at beam energies of 6.5 TeV started Wednesday, 20 May in the late evening. Soon after, the first record-breaking collisions were seen in the LHC experiments. On Thursday, 21 May, the operators went on to test the whole machine in collision mode with beams that were ‘de-squeezed’ at the interaction points. During the “de-squeeze”, the beam is made larger at the experiment collision points than those used for standard operation. These large beams are interesting for calibration measurements at the experiments, during which the beams are scanned across each other – the so-called "Van der Meer scans".

The two spots are beam 1 (clockwise) and beam 2 (anti-clockwise) traveling inside the LHC in opposite directions. The images are produced from data from the synchrotron light monitors. The beam sizes aren’t exactly the same at the B1 and B2 telescopes, as the beam intensity and the beam optics setup can differ.

Progress was also made on the beam intensity front. In fact, last week the LHC also broke the intensity record for 2015 by circulating 40 nominal bunches in each of the rings, giving a beam intensity of 4×10^12 protons per beam. There were some concerns that the unidentified obstacle in the beam-pipe of a Sector 8-1 dipole could be affected by the higher beam currents. The good news is that this is not the case. No beam losses occurred at the location of the obstacle and, after two hours, the operators dumped the beams in the standard way. Commissioning continues and the LHC is on track for the start of its first high-energy physics run in a couple of weeks.

May 21, 2015 03:05 PM

Symmetrybreaking - Fermilab/SLAC

LHC achieves record-energy collisions

The Large Hadron Collider broke its own record again in 13-trillion-electronvolt test collisions.

Today engineers at the Large Hadron Collider successfully collided several tightly packed bunches of particles at 13 trillion electronvolts. This is one of the last important steps on the way toward data collection, which is scheduled for early June.

As engineers ramp up the energy of the collider, the positions of the beams of particles change. The protons are also focused into much tighter packets, so getting two bunches to actually intersect requires very precise tuning.

“Colliding protons inside the LHC is equivalent to firing two needles 6 miles apart with such precision that they collide halfway,” says Syracuse University physicist Sheldon Stone, a senior researcher on the LHCb experiment. “It takes a lot of testing to make sure the two bunches meet at the right spot and do not miss each other.”

Engineers spent the last two years outfitting the LHC to collide protons at a higher energy and faster rate than ever before. Last month they successfully circulated low-energy protons around the LHC for the first time since the shutdown. Five days later, they broke their own energy record by ramping up the energy of a single proton beam to 6.5 trillion electronvolts.

High-energy test collisions allow engineers to practice steering beams in the LHC.

“We have to find the positions where the two beams cross, so what we do is steer the beams up and down and left and right until we get the optimal collision rate,” says CERN engineer Ronaldus SuykerBuyk of the operations team.

In addition to finding the collision sweet spots, engineers will also use these tests to finish calibrating the machine components and positioning the collimators, which protect the accelerator and detectors from stray particles.

The design of the LHC allows more than 2800 bunches of protons to circulate in the machine at a time. But the LHC operations team is testing the machine with just one or two bunches per beam to ensure all is running smoothly.

The next important milestone will be preparing the LHC to consistently and safely ramp, steer and collide proton beams for up to eight consecutive hours.

Declaring stable beams will be only the beginning for the LHC operations team.

"The machine evolves around you," says CERN engineer Jorg Wenninger. "There are little changes over the months. There’s the reproducibility of the magnets. And the alignment of the machine moves a little with the slow-changing geology of the area. So we keep adjusting every day."

[Event displays of the first 13 TeV collisions in the ALICE, ATLAS, CMS and LHCb detectors. Images courtesy of the ALICE, ATLAS, CMS and LHCb collaborations.]

 

LHC restart timeline

February 2015

  • LHC filled with liquid helium: The Large Hadron Collider is now cooled to nearly its operational temperature.
  • First LHC magnets prepped for restart: A first set of superconducting magnets has passed the test and is ready for the Large Hadron Collider to restart in spring.
  • LHC experiments prep for restart: Engineers and technicians have begun to close experiments in preparation for the next run.

March 2015

  • LHC restart back on track: The Large Hadron Collider has overcome a technical hurdle and could restart as early as next week.

April 2015

  • LHC sees first beams: The Large Hadron Collider has circulated the first protons, ending a two-year shutdown.
  • LHC breaks energy record: The Large Hadron Collider accelerated protons to the fastest speed ever attained on Earth.

May 2015

  • LHC sees first low-energy collisions: The Large Hadron Collider is back in the business of colliding particles.
  • LHC achieves record-energy collisions: The Large Hadron Collider broke its own record again in 13-trillion-electronvolt test collisions.

Info-graphics by Sandbox Studio, Chicago.

 


by Sarah Charley at May 21, 2015 02:21 PM

CERN Bulletin

2015, the year of all dangers
On Thursday, May 7 many of you attended, in the packed Main Amphitheatre, the crisis meeting organized by the Staff Association. The main aim of this public meeting was to lift the veil on the intentions of certain CERN Council delegates who would like to: attack again and again our pensions; reduce the budget of CERN in the medium term; more generally, revise downward our employment and working conditions.

Since the beginning of 2014 some disturbing rumours have circulated about our pensions. Several CERN Council delegates would like to re-open the balanced package of measures that they accepted in 2010 to ensure the full capitalization of the CERN Pension Fund on a 30-year horizon. This constitutes not only a non-respect of their commitments, but also a non-respect of the applicable rules and procedures. Indeed, the governance principles stipulate that the Pension Fund Governing Board ensures the health of the Fund, and, as such, alone has authority to propose stabilization measures to the CERN Council, if necessary. It should be noted that, to date, there is no indication that the measures in question do not meet expectations; the interference of the CERN Council is thus unjustified.

As if this were not enough, the package of measures proposed by CERN Management, intended to mitigate in 2015 the increase of the contributions of the Member States expressed in their national currencies following the Swiss National Bank’s decision on 15 January 2015 to discontinue the minimum exchange rate of CHF 1.20 per euro, found no consensus among the Member States. The Management had to withdraw its proposal, but the difficulty for the Member States of facing this increase remains.

Finally, these attacks come at the worst time for us since we are in the final phase of a five-yearly review exercise. This review is intended to verify that the financial and social conditions offered by the Organization are able to guarantee the attractiveness of CERN. In this atmosphere of attacks and threats, we fear the worst.

Not only for us, the Staff Association, but for you all, employees and users of the Organization, the absolute priority is the optimal running of current projects and the realization of future scientific projects. Investments to launch these projects, in particular the HL-LHC, must be made now. A decrease in the budget, as mentioned by some delegates, would be catastrophic for the long-term future of the Organization. Some delegates arrive at CERN with economic viewpoints eying only the very short term. We know that austerity does not work, a fortiori in basic research, where one should take a long-term approach to enable discoveries tomorrow and create high value-added jobs the day after tomorrow. It is thus more than worrying that some delegations become agitated without reason and contaminate others. This must stop!

We ask respect for commitments and procedures, and, above all, respect for the CERN personnel, who have offered Nobel prizes and many discoveries to European and global science. All together, we must act with determination before adverse decisions are taken. We ask you to inform a maximum of your colleagues so that all the staff (employed and associated members of personnel) can say NO to jeopardizing our Organization.

The video and the slides of the crisis meeting are available at https://indico.cern.ch/event/392832/.

by Staff Association at May 21, 2015 02:19 PM

ZapperZ - Physics and Physicists

What Is Really "Real" In Quantum Physics
This is an excellent article from this week's Nature. It gives you a summary of some of the outstanding issues in Quantum Physics that are actively being looked into. Many of these things are fundamental questions of the interpretation of quantum physics, and it is being done not simply via a philosophical discussion, but via experimental investigation. I do not know how long this article will be available to the public, so read it now quickly.

One of the best parts of this article is that it clearly defines some of the philosophical terminology in terms of how it is perceived in physics. You get to understand the meanings of "psi-epistemic models" and "psi-ontic models", the differences between them, and how they can be distinguished in experiments.

But this is where the debate gets stuck. Which of quantum theory's many interpretations — if any — is correct? That is a tough question to answer experimentally, because the differences between the models are subtle: to be viable, they have to predict essentially the same quantum phenomena as the very successful Copenhagen interpretation. Andrew White, a physicist at the University of Queensland, says that for most of his 20-year career in quantum technologies “the problem was like a giant smooth mountain with no footholds, no way to attack it”.

That changed in 2011, with the publication of a theorem about quantum measurements that seemed to rule out the wavefunction-as-ignorance models. On closer inspection, however, the theorem turned out to leave enough wiggle room for them to survive. Nonetheless, it inspired physicists to think seriously about ways to settle the debate by actually testing the reality of the wavefunction. Maroney had already devised an experiment that should work in principle, and he and others soon found ways to make it work in practice. The experiment was carried out last year by Fedrizzi, White and others.
There is even a discussion of devising a test for the pilot wave model, after the astounding demonstration of the concept using a simple classical wave experiment.

Zz.

by ZapperZ (noreply@blogger.com) at May 21, 2015 01:24 PM

astrobites - astro-ph reader's digest

Super starbursts at high redshifts

Title: A higher efficiency of converting gas to stars push galaxies at z~1.6 well above the star forming main sequence
Authors: Silverman et al. (2015)
First author institution: Kavli Institute for the Physics and Mathematics of the Universe, Todai Institutes for Advanced Study, the University of Tokyo, Kashiwa, Japan
Status: Submitted to Astrophysical Journal Letters

In the past couple of years there has been some observational evidence for a bimodal nature of the star formation efficiency (SFE) in galaxies. Whilst most galaxies lie on the typical relationship between mass and star formation rate (the star forming “main sequence”), slowly converting gas into stars, some form stars at a much higher rate. These “starburst” galaxies are much rarer than the typical galaxy, making up only ~2% of the population and yet ~10% of the total star formation. This disparity in the populations has only been studied for local galaxies and therefore more evidence is needed to back up these claims.

Figure 1: Hubble i band (and one IR K band) images of the seven galaxies studied by Silverman et al. (2015). Overlaid are the blue contours showing CO emission and red contours showing IR emission. Note that the centre of CO emission doesn’t always line up with the light seen in the Hubble image. Figure 2 in Silverman et al (2015).

In this recent paper by Silverman et al. (2015), the authors have observed seven high redshift (i.e. large distance) galaxies at z~1.6, shown in Figure 1, with ALMA (Atacama Large Millimeter Array, northern Chile) and IRAM (Institut de Radioastronomie Millimétrique, Spain), measuring the luminosity of the emission lines from the 2-1 and 3-2 rotational transitions of carbon monoxide in each galaxy spectrum. The luminosity of the light from these transitions allows the authors to estimate the molecular hydrogen (H_2) gas mass of each galaxy.

Observations of each galaxy in the infrared (IR; 24-500 μm), including Herschel’s SPIRE instrument, allow an estimation of the star formation rate (SFR) from the total luminosity integrated across the IR range, L_{IR}. The CO and IR observations are shown by the blue and red contours respectively, overlaid on Hubble UV (i band) images in Figure 1. Notice how the CO/IR emission doesn’t always coincide with the UV light, suggesting that a lot of the star formation is obscured in some of these galaxies.

Figure 2. The gas depletion timescale (1/SFE) against the SFR for the 7 high redshift starburst galaxies in this study (red circles), local starburst galaxies (red crosses) and normal galaxies at 0 < z < 0.25 (grey points), with the star forming main sequence shown by the solid black line. Figure 3c in Silverman et al. (2015).

With these measurements of the gas mass and the SFR, the SFE can be calculated, and in turn the gas depletion timescale, which is the reciprocal of the SFE. This is plotted against the SFR in Figure 2 for the 7 high redshift starburst galaxies in this study (red circles), local starburst galaxies (red crosses) and normal galaxies at 0 < z < 0.25 (grey points), with the star forming main sequence shown by the solid black line. These results show that the efficiency of star formation in these starburst galaxies is highly elevated compared to galaxies residing on the main sequence, but not as high as in starburst galaxies in the local universe.
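
To make the bookkeeping concrete, here is a minimal sketch of the calculation (with placeholder numbers; the conversion factor, CO luminosity and SFR below are illustrative assumptions, not the paper's measurements): the CO luminosity gives a molecular gas mass, the SFR divided by that mass gives the SFE, and its reciprocal is the depletion timescale plotted in Figure 2.

# Placeholder numbers, not the paper's measurements
alpha_CO = 1.1      # assumed CO-to-H2 conversion factor [Msun / (K km/s pc^2)]
L_CO     = 2.0e10   # hypothetical CO line luminosity   [K km/s pc^2]
SFR      = 300.0    # hypothetical star formation rate  [Msun / yr]

M_gas = alpha_CO * L_CO   # molecular gas mass [Msun]
SFE   = SFR / M_gas       # star formation efficiency [1/yr]
t_dep = 1.0 / SFE         # gas depletion timescale [yr]

print(f"M_gas ~ {M_gas:.1e} Msun, SFE ~ {SFE:.1e} /yr, t_dep ~ {t_dep:.1e} yr")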

These observations therefore dilute the theory of a bimodal nature of the star formation efficiency, and lead to one of a continuous distribution of SFE as a function of distance from the star forming main sequence. The authors consider the idea that the mechanisms leading to such a continuous nature of elevated gas depletion timescales could be related to major mergers between galaxies, which lead to rapid gas compression, boosting star formation. This is supported also by the images in Figure 1, which show multiple clumps of UV emitting regions, as seen by the Hubble Space Telescope.

To really put some weight behind this theory, though, the authors conclude, like most astrophysical studies, that they need a much larger sample of starburst galaxies at these high redshifts (z ~ 1.6) to determine what the heck is going on.

by Becky Smethurst at May 21, 2015 12:58 PM

Tommaso Dorigo - Scientificblogging

EU Grants Submitted And Won: Some Statistics
The European Union has released some data on the latest call for applications for ITN grants. These are "training networks" where academic and non-academic institutions pool up to provide innovative training to doctoral students, in the meantime producing excellent research outputs.

read more

by Tommaso Dorigo at May 21, 2015 11:16 AM

CERN Bulletin

Cine club
Wednesday 27 May 2015 at 20:00
CERN Council Chamber

Wait Until Dark
Directed by Terence Young
USA, 1967, 108 minutes

When Sam Hendrix carries a doll across the Canada-US border, he sets off a chain of events that will lead to a terrifying ordeal for his blind wife, Susy. The doll was stuffed with heroin and when it cannot be located, its owner, a Mr. Roat, stages a piece of theatre in an attempt to recover it. He arranges for Sam to be away from the house for a day and then has two con men, Mike Talman and a Mr. Carlito, alternately encourage or scare Susy into telling them where the doll is hidden. Talman pretends to be an old friend of Sam's while Carlito pretends to be a police officer. Despite their best efforts they make little headway, as Susy has no idea where the doll might be, leading Mr. Roat to take a somewhat more violent approach to getting the information from her.

Original version English; French subtitles

Wednesday 3 June 2015 at 20:00
CERN Council Chamber

Sogni d’oro
Directed by Nanni Moretti
Italy, 1981, 105 minutes

Michele Apicella is a young film and theatre director who struggles with his life as an artist. Italy is entering the Eighties, and Michele, who was a protester in the Sixties, now finds himself in a new era full of crises of values and ignorance. Through his works, Michele sets out to portray the typical outcast, indifferent intellectual who opens up a divide between himself and the world of ordinary people.

Original version Italian; English subtitles

by Cine club at May 21, 2015 07:02 AM

The n-Category Cafe

The Origin of the Word "Quandle"

A quandle is a set equipped with a binary operation with a number of properties, the most important being that it distributes over itself:

a \triangleright (b \triangleright c) = (a \triangleright b) \triangleright (a \triangleright c)

They show up in knot theory, where they capture the essence of how the strands of a knot cross over each other… yet they manage to give an invariant of a knot, independent of the way you draw it. Even better, the quandle is a complete invariant of knots: if two knots have isomorphic quandles, there’s a diffeomorphism of $\mathbb{R}^3$ mapping one knot to the other.
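
As a concrete example (mine, not from the post), Z_n with a ▷ b = 2a - b (mod n), a version of the dihedral quandle, satisfies idempotence, invertibility and the self-distributivity law above. A brute-force check in Python:

n = 7                        # work in Z_7

def tri(a, b):               # a ▷ b
    return (2 * a - b) % n

elems = range(n)

# idempotence: a ▷ a = a
assert all(tri(a, a) == a for a in elems)

# for each a, the map b -> a ▷ b is a bijection of Z_n
assert all(sorted(tri(a, b) for b in elems) == list(elems) for a in elems)

# self-distributivity: a ▷ (b ▷ c) = (a ▷ b) ▷ (a ▷ c)
assert all(tri(a, tri(b, c)) == tri(tri(a, b), tri(a, c))
           for a in elems for b in elems for c in elems)

print("Z_7 with a ▷ b = 2a - b (mod 7) satisfies the quandle axioms")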

I’ve always wondered where the name ‘quandle’ came from. So I decided to ask their inventor, David Joyce—who also proved the theorem I just mentioned.

He replied:

I needed a usable word. “Distributive algebra” had too many syllables. Piffle was already taken. I tried trindle and quagle, but they didn’t seem right, so I went with quandle.

So there you go! Another mystery unraveled.

by john (baez@math.ucr.edu) at May 21, 2015 06:18 AM

May 20, 2015

Emily Lakdawalla - The Planetary Society Blog

LightSail Sends First Data Back to Earth
LightSail is sending home telemetry following a Wednesday commute to orbit aboard an Atlas V rocket.

May 20, 2015 11:19 PM

Christian P. Robert - xi'an's og

non-reversible MCMC

While visiting Dauphine, Natesh Pillai and Aaron Smith pointed out this interesting paper of Joris Bierkens (Warwick) that had escaped my arXiv watch/monitoring. The paper is about turning Metropolis-Hastings algorithms into non-reversible versions, towards improving mixing.

In a discrete setting, a way to produce a non-reversible move is to mix the proposal kernel Q with its time-reversed version Q’ and use an acceptance probability of the form

\frac{\epsilon\,\pi(y)Q(y,x)+(1-\epsilon)\,\pi(x)Q(x,y)}{\pi(x)Q(x,y)}

where ε is any weight. This construction is generalised in the paper to any vorticity (skew-symmetric with zero sum rows) matrix Γ, with the acceptance probability

\frac{\epsilon\,\Gamma(x,y)+\pi(y)Q(y,x)}{\pi(x)Q(x,y)}

where ε is small enough to ensure all numerator values are non-negative. This is a rather annoying assumption in that, except for the special case derived from the time-reversed kernel, it has to be checked over all pairs (x,y). (I first thought it also implied the normalising constant of π but everything can be set in terms of the unnormalised version of π, Γ or ε included.) The paper establishes that the new acceptance probability preserves π as its stationary distribution. An alternative construction is to change the proposal from Q to H such that H(x,y)=Q(x,y)+εΓ(x,y)/π(x). Which seems more pertinent, as not changing the proposal cannot improve the mixing behaviour of the chain that much. Still, the move to the non-reversible versions has the noticeable plus of decreasing the asymptotic variance of the Monte Carlo estimate for any integrable function. Any. (Those results are found in the physics literature of the 2000’s.)
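
To see the discrete construction in action, here is a minimal sketch (my own toy example, not from the paper): a three-state target, a uniform proposal, and a hand-picked vorticity matrix Γ that is skew-symmetric with zero row sums. With ε small enough that the numerator stays non-negative, the empirical visit frequencies of the resulting non-reversible chain still match π.

import numpy as np

rng = np.random.default_rng(0)

# Toy target on three states and a uniform proposal (my own choices)
pi = np.array([0.2, 0.3, 0.5])          # already normalised
Q = np.full((3, 3), 1.0 / 3.0)          # propose any state with probability 1/3

# Vorticity matrix: skew-symmetric with zero row sums (a 3-cycle)
Gamma = np.array([[ 0.0,  1.0, -1.0],
                  [-1.0,  0.0,  1.0],
                  [ 1.0, -1.0,  0.0]])

eps = 0.01   # small enough that eps*Gamma[x, y] + pi[y]*Q[y, x] >= 0 everywhere

def accept_prob(x, y):
    num = eps * Gamma[x, y] + pi[y] * Q[y, x]
    den = pi[x] * Q[x, y]
    return min(1.0, num / den)

# Run the non-reversible chain and compare visit frequencies with pi
x = 0
counts = np.zeros(3)
for _ in range(200_000):
    y = rng.integers(3)
    if rng.random() < accept_prob(x, y):
        x = y
    counts[x] += 1

print(counts / counts.sum())   # should be close to pi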

The extension to the continuous case is a wee bit more delicate. One needs to find an anti-symmetric vortex function g with zero integral [equivalent to the row sums being zero] such that g(x,y)+π(y)q(y,x)>0 and with same support as π(x)q(x,y) so that the acceptance probability [g(x,y)+π(y)q(y,x)]/[π(x)q(x,y)] leads to π being the stationary distribution. Once again g(x,y)=ε(π(y)q(y,x)-π(x)q(x,y)) is a natural candidate but it is unclear to me why it should work. As the paper only contains one illustration for the discretised Ornstein-Uhlenbeck model, with the above choice of g for a small enough ε (a point I fail to understand since any ε<1 should provide a positive g(x,y)+π(y)q(y,x)), it is also unclear to me that this modification (i) is widely applicable and (ii) is relevant for genuine MCMC settings.


Filed under: Books, Statistics, University life Tagged: arXiv, MCMC algorithms, Monte Carlo Statistical Methods, Ornstein-Uhlenbeck model, reversibility, Université Paris Dauphine, University of Warwick

by xi'an at May 20, 2015 10:15 PM

John Baez - Azimuth

Information and Entropy in Biological Systems (Part 3)

We had a great workshop on information and entropy in biological systems, and now you can see what it was like. I think I’ll post these talks one at a time, or maybe a few at a time, because they’d be overwhelming taken all at once.

So, let’s dive into Chris Lee’s exciting ideas about organisms as ‘information evolving machines’ that may provide ‘disinformation’ to their competitors. Near the end of his talk, he discusses some new results on an ever-popular topic: the Prisoner’s Dilemma. You may know about this classic book:

• Robert Axelrod, The Evolution of Cooperation, Basic Books, New York, 1984. Some passages available free online.

If you don’t, read it now! He showed that the simple ‘tit for tat’ strategy did very well in some experiments where the game was played repeatedly and strategies that did well got to ‘reproduce’ themselves. This result was very exciting, so a lot of people have done research on it. More recently a paper on this subject by William Press and Freeman Dyson received a lot of hype. I think this is a good place to learn about that:

• Mike Shulman, Zero determinant strategies in the iterated Prisoner’s Dilemma, The n-Category Café, 19 July 2012.

Chris Lee’s new work on the Prisoner’s Dilemma is here, cowritten with two other people who attended the workshop:

The art of war: beyond memory-one strategies in population games, PLOS One, 24 March 2015.

Abstract. We show that the history of play in a population game contains exploitable information that can be successfully used by sophisticated strategies to defeat memory-one opponents, including zero determinant strategies. The history allows a player to label opponents by their strategies, enabling a player to determine the population distribution and to act differentially based on the opponent’s strategy in each pairwise interaction. For the Prisoner’s Dilemma, these advantages lead to the natural formation of cooperative coalitions among similarly behaving players and eventually to unilateral defection against opposing player types. We show analytically and empirically that optimal play in population games depends strongly on the population distribution. For example, the optimal strategy for a minority player type against a resident tit-for-tat (TFT) population is ‘always cooperate’ (ALLC), while for a majority player type the optimal strategy versus TFT players is ‘always defect’ (ALLD). Such behaviors are not accessible to memory-one strategies. Drawing inspiration from Sun Tzu’s the Art of War, we implemented a non-memory-one strategy for population games based on techniques from machine learning and statistical inference that can exploit the history of play in this manner. Via simulation we find that this strategy is essentially uninvadable and can successfully invade (significantly more likely than a neutral mutant) essentially all known memory-one strategies for the Prisoner’s Dilemma, including ALLC (always cooperate), ALLD (always defect), tit-for-tat (TFT), win-stay-lose-shift (WSLS), and zero determinant (ZD) strategies, including extortionate and generous strategies.
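
For readers who have never simulated this, here is a minimal sketch (my own, not the paper's code, and far simpler than its machine-learning strategies) of an iterated Prisoner's Dilemma round-robin between a few of the memory-one strategies named above, plus a "grim trigger" strategy of my own choosing; with enough reciprocators in the pool, tit for tat finishes ahead of always-defect.

# Payoff convention: T=5, R=3, P=1, S=0
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def all_c(my_hist, opp_hist):
    return "C"

def all_d(my_hist, opp_hist):
    return "D"

def tit_for_tat(my_hist, opp_hist):
    return opp_hist[-1] if opp_hist else "C"

def grim(my_hist, opp_hist):              # cooperate until the opponent defects
    return "D" if "D" in opp_hist else "C"

def play(s1, s2, rounds=200):
    h1, h2, total1 = [], [], 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        total1 += PAYOFF[(m1, m2)][0]
        h1.append(m1)
        h2.append(m2)
    return total1

strategies = {"ALLC": all_c, "ALLD": all_d, "TFT": tit_for_tat, "GRIM": grim}
scores = {name: sum(play(s, t) for t in strategies.values())
          for name, s in strategies.items()}
print(scores)   # TFT and GRIM outscore ALLD in this pool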

And now for the talk! Click on the talk title here for Chris Lee’s slides, or go down and watch the video:

• Chris Lee, Empirical information, potential information and disinformation as signatures of distinct classes of information evolving machines.

Abstract. Information theory is an intuitively attractive way of thinking about biological evolution, because it seems to capture a core aspect of biology—life as a solution to “information problems”—in a fundamental way. However, there are non-trivial questions about how to apply that idea, and whether it has actual predictive value. For example, should we think of biological systems as being actually driven by an information metric? One idea that can draw useful links between information theory, evolution and statistical inference is the definition of an information evolving machine (IEM) as a system whose elements represent distinct predictions, and whose weights represent an information (prediction power) metric, typically as a function of sampling some iterative observation process. I first show how this idea provides useful results for describing a statistical inference process, including its maximum entropy bound for optimal inference, and how its sampling-based metrics (“empirical information”, Ie, for prediction power; and “potential information”, Ip, for latent prediction power) relate to classical definitions such as mutual information and relative entropy. These results suggest classification of IEMs into several distinct types:

1. Ie machine: e.g. a population of competing genotypes evolving under selection and mutation is an IEM that computes an Ie equivalent to fitness, and whose gradient (Ip) acts strictly locally, on mutations that it actually samples. Its transition rates between steady states will decrease exponentially as a function of evolutionary distance.

2. “Ip tunneling” machine: a statistical inference process summing over a population of models to compute both Ie, Ip can directly detect “latent” information in the observations (not captured by its model), which it can follow to “tunnel” rapidly to a new steady state.

3. disinformation machine (multiscale IEM): an ecosystem of species is an IEM whose elements (species) are themselves IEMs that can interact. When an attacker IEM can reduce a target IEM’s prediction power (Ie) by sending it a misleading signal, this “disinformation dynamic” can alter the evolutionary landscape in interesting ways, by opening up paths for rapid co-evolution to distant steady-states. This is especially true when the disinformation attack targets a feature of high fitness value, yielding a combination of strong negative selection for retention of the target feature, plus strong positive selection for escaping the disinformation attack. I will illustrate with examples from statistical inference and evolutionary game theory. These concepts, though basic, may provide useful connections between diverse themes in the workshop.


by John Baez at May 20, 2015 07:58 PM

arXiv blog

Machine-Learning Algorithm Mines Rap Lyrics, Then Writes Its Own

An automated rap-generating algorithm pushes the boundaries of machine creativity, say computer scientists.

The ancient skill of creating and performing spoken rhyme is thriving today because of the inexorable rise in the popularity of rapping. This art form is distinct from ordinary spoken poetry because it is performed to a beat, often with background music.

May 20, 2015 05:31 PM

Emily Lakdawalla - The Planetary Society Blog

Liftoff! LightSail Sails into Space aboard Atlas V Rocket
The first of The Planetary Society’s two LightSail spacecraft is now in space following a late morning launch from Cape Canaveral Air Force Station in Florida.

May 20, 2015 03:44 PM

astrobites - astro-ph reader's digest

Merging White Dwarfs with Magnetic Fields

The Problem

White dwarfs, the final evolutionary state of most stars, will sometimes find themselves with another white dwarf nearby. In some of these binaries, gravitational radiation will bring the two white dwarfs closer together. When they get close enough, one of the white dwarfs will start transferring matter to the other white dwarf before they merge. These mergers are thought to produce a number of interesting phenomena. Rapid mass transfer from one white dwarf to the other could cause a collapse into a neutron star. The two white dwarfs could undergo a nuclear explosion as a Type 1a supernova. Least dramatically, these merging white dwarfs could also form into one massive, rapidly rotating white dwarf.

There have been many simulations over the last 35 years of white dwarfs merging as astronomers try to figure out the conditions that cause each of these outcomes. However, none of these simulations have included magnetic fields during the merging process, though it is well known that many white dwarfs have magnetic fields. This is mostly because other astronomers have just been interested in different properties and results of mergers. Today’s paper simulates the merging of two white dwarfs with magnetic fields to see how these fields change and influence the merger.

The Method

The authors choose to simulate the merger of two fairly typical white dwarfs. They have carbon-oxygen cores and masses of 0.625 and 0.65 solar masses. The magnetic fields are 2 x 10^7 Gauss in the core and 10^3 Gauss at the surface. Recall that the Earth has a magnetic field strength of about 0.5 Gauss. The temperature on the surface of each white dwarf is 5,000,000 K. The authors start the white dwarfs close to each other (about 2 x 10^9 cm apart, with an orbital period of 49.5 seconds) to simulate the merger.

To keep track of what is happening, the authors use a code called AREPO. AREPO works as a moving mesh code – the highest resolution is kept where interesting things are happening. There have been a number of past Astrobites that have covered how AREPO works and some of the applications to planetary disks and galaxy evolution.

The Findings

Figure 1: Result from the simulation showing how the temperature (left) and magnetic field strength (right) change over time (top to bottom). We are looking down on the merger from above.

Figure 1 shows the main result from the paper. The left column is the temperature and the right column is the magnetic field strength at various times during the simulation. By 20 seconds, just a little mass is starting to transfer between the two white dwarfs. Around 180 seconds, tidal forces finally tear the less massive white dwarf apart. Streams of material are wrapping around the system. These streams form Kelvin-Helmholtz instabilities that amplify the magnetic field. Note how in the second row of Figure 1, the streams with the highest temperatures also correspond to the largest magnetic field strengths. The strength of the magnetic field is changing quickly and increasing during this process. By 250 seconds, many of the streams have merged into a disk around the remaining white dwarf.

By 400 seconds (not shown in the figure), the simulations show a dense core surrounded by a hot envelope. A disk of material surrounds this white dwarf. The magnetic field structure is complex. In the core, the field strength is around 10^10 Gauss, significantly stronger than at the start of the simulation. The field strength is about 10^9 Gauss at the interface of the hot envelope and the disk. The total magnetic energy grows by a factor of 10^9 from the start of the simulation to the end.
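
For a rough sense of scale (my own back-of-the-envelope arithmetic using the field strengths quoted above, not an output of the simulation): the magnetic energy density goes as B²/8π, so the core-field amplification alone corresponds to a jump of a few times 10^5 in energy density, while the factor of ~10^9 in total magnetic energy also reflects the much larger volume that ends up threaded by strong fields.

import math

# Field strengths quoted above: initial core field and final core field
B_initial = 2e7    # Gauss
B_final   = 1e10   # Gauss

def energy_density(B):
    # magnetic energy density in cgs units, erg / cm^3
    return B**2 / (8.0 * math.pi)

ratio = energy_density(B_final) / energy_density(B_initial)
print(f"core energy-density amplification ~ {ratio:.1e}")   # roughly 2.5e5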

These results indicate that most of the magnetic field growth occurs from the Kelvin-Helmholtz instabilities during the merger. The field strength increases slowly at first, then very rapidly before plateauing out. The majority of the field growth occurs during the tidal disruption phase (between about 100 and 200 seconds in the simulation). Since accretion streams are a common feature of white dwarf mergers, these strong magnetic fields should be created in most white dwarf mergers. As this paper is the first to simulate the merging of two white dwarfs with magnetic fields, future work should continue to refine our understanding of this process and observational implications.

 

by Josh Fuchs at May 20, 2015 03:18 PM

Symmetrybreaking - Fermilab/SLAC

Small teams, big dreams

A small group of determined scientists can make big contributions to physics.

Particle physics is the realm of billion-dollar machines and teams of thousands of scientists, all working together to explore the smallest components of the universe.

But not all physics experiments are huge, as the scientists of DAMIC, Project 8, SPIDER and ATRAP can attest. Each of their groups could fit in a single Greyhound bus, with seats to spare.

Don’t let their size fool you; their numbers may be small, but their ambitions are not.

Smaller machines

Small detectors play an important role in searching for difficult-to-find particles.

Take dark matter, for example. Because no one knows what exactly dark matter is or what the mass of a dark matter particle might be, detection experiments need to cover all the bases.

DAMIC is an experiment that aims to observe dark matter particles that larger detectors can’t see. 

The standard strategy used in most experiments is scaling up the size of the detector to increase the number of potential targets for dark matter particles to hit. DAMIC takes another approach: eliminating all sources of background noise to allow the detector to see potential dark matter particle interactions of lower and lower energies.

The detector sits in a dust-free room 2 kilometers below ground at SNOLAB in Sudbury, Canada. To eliminate as much noise as possible, it is held in 10 tons of lead at around minus 240 degrees Fahrenheit. Its small size allows scientists to shield it more easily than they could a larger instrument.

DAMIC is currently the smallest dark matter detection experiment—both in the size of apparatus and the number of people on the team. While many dark matter detectors use more than a hundred thousand grams of active material, the current version of DAMIC runs on a mere five grams and the full detector will have 100 grams. Its team is made up of around ten scientists and students.

“What’s really nice is that even though this is a small experiment, it has the potential of making a huge contribution and having a big impact,” says DAMIC member Javier Tiffenberg, a postdoctoral fellow at Fermilab.

Top to bottom engagement

In collaborations larger than 100 people, specialized teams usually work on different parts of an experiment. In smaller groups, all members work together and engage in everything from machine construction to data analysis.

The 20 or so members of the Project 8 experiment are developing a new technique to measure the mass of neutrinos. On this experiment, moving quickly between designing, testing and analyzing an apparatus is of great importance, says Martin Fertl, a postdoctoral researcher at the University of Washington. Immediate access to hardware and analysis tools helps these projects move forward quickly and allows changes to be implemented with ease.

“A single person can install a new piece of hardware and within a day or so, test the equipment, take new data, analyze that data and then decide whether or not the system requires any additional modification,” he says.

Project 8 aims to determine the mass of neutrinos indirectly using tritium. Tritium decays to Helium-3, releasing an electron and a neutrino. Scientists can measure the energy emitted by these electrons to help them determine the neutrino mass.
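
To see why the electron energies carry information about the neutrino mass, here is a minimal sketch using the standard textbook shape of the tritium beta spectrum near its endpoint (an illustration only, not Project 8's actual analysis): a nonzero neutrino mass both cuts the spectrum off below the endpoint and suppresses the rate just beneath it.

import math

E0 = 18574.0   # eV, approximate tritium beta-decay endpoint energy

def spectrum_shape(E, m_nu):
    # Unnormalised near-endpoint shape: (E0-E) * sqrt((E0-E)^2 - m_nu^2)
    eps = E0 - E
    if eps <= m_nu:
        return 0.0
    return eps * math.sqrt(eps**2 - m_nu**2)

m_nu = 0.2   # eV, a hypothetical neutrino mass chosen for illustration
for dE in (5.0, 2.0, 1.0, 0.5):   # distance below the endpoint, in eV
    ratio = spectrum_shape(E0 - dE, m_nu) / spectrum_shape(E0 - dE, 0.0)
    print(f"{dE:3.1f} eV below E0: rate is {ratio:.3f} of the massless-neutrino case")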

“It was satisfying for us all when the first data came out and we were seeing electrons,” says UW postdoc Matthew Sternberg. “We basically all took a crack at the data to see what we could pull off of it.”

A fertile training ground

Small collaborations can be especially beneficial to fledging scientists entering the field.

Space-based projects carry a high cost and risk that can prevent students from being very involved. Balloon-borne experiments, however, are the next best thing. By getting above the atmosphere, balloons provide many of the same benefits for a fraction of the price.

In the roughly 30-member collaboration of the balloon-borne SPIDER experiment, graduate students played a role in designing, engineering, building and launching the instrument, and are now working on analysis.

“It’s great training for graduate students who end up working on large satellite experiments,” says Sasha Rahlin, a graduate student at Princeton University.

SPIDER is composed of six large cameras tethered to a balloon and was launched to an altitude of 110,000 feet above Antarctica, where it circled for about 20 days in search of information about the early universe. Using measurements from this flight, researchers are looking for fluctuations in the polarization of cosmic background radiation, the light left over from the big bang.

“When the balloon went up, all of us were in the control room watching each sub-system turn on and do exactly what it was supposed to,” Rahlin says. “There was a huge moment of ‘Wow, this actually works.’ And every component from start to finish had grad student blood, sweat and tears.”

Around 20 people went down to McMurdo Station in Antarctica to launch SPIDER with the help of a team from NASA that launches balloon experiments in several locations around the world. According to Zigmund Kermish, a postdoctoral fellow at Princeton University, being a small group sometimes means having to optimize time and manpower to get tasks done.

“It’s been really inspiring to see what we do with limited resources,” said Kermish. “It’s amazing what motivated graduate students can make happen.”

Big ambitions

Scientists on small collaborations are working toward big scientific goals. The ATRAP experiment is no exception; it will help answer some fundamental questions about why our universe exists.

Four members of the collaboration are located at CERN, where the apparatus is located, while only 15 people are involved overall.

ATRAP creates antihydrogen by confining positrons and antiprotons in a trap, cooling them to near absolute zero until they can combine to form atoms. ATRAP holds these atoms while physicists make precise measurements of their properties to compare with hydrogen atoms, their matter counterparts.

This can help determine whether nature treats matter and antimatter alike, says Eric Tardiff, a Harvard University postdoc at CERN. If researchers find evidence for violation of this symmetry, they will have a potential explanation for one of physics’ largest mysteries—why the universe contains unequal amounts of antimatter and matter particles. “No experiment has explained [this asymmetry] yet,” he says.

Think small

Small experiments play an important role in particle physics. They help train researchers early in their career by giving them experience across many parts of the scientific process. And despite their size, they hold enormous potential to make game-changing scientific discoveries. As Margaret Mead once said, “Never doubt that a small group of thoughtful, committed citizens can change the world.”

 


by Diana Kwon at May 20, 2015 01:00 PM

Peter Coles - In the Dark

One More for the Bad Statistics in Astronomy File…

It’s been a while since I last posted anything in the file marked Bad Statistics, but I can remedy that this morning with a comment or two on the following paper by Robertson et al., which I found on the arXiv via the Astrostatistics Facebook page. It’s called Stellar activity mimics a habitable-zone planet around Kapteyn’s star and its abstract is as follows:

Kapteyn’s star is an old M subdwarf believed to be a member of the Galactic halo population of stars. A recent study has claimed the existence of two super-Earth planets around the star based on radial velocity (RV) observations. The innermost of these candidate planets–Kapteyn b (P = 48 days)–resides within the circumstellar habitable zone. Given recent progress in understanding the impact of stellar activity in detecting planetary signals, we have analyzed the observed HARPS data for signatures of stellar activity. We find that while Kapteyn’s star is photometrically very stable, a suite of spectral activity indices reveals a large-amplitude rotation signal, and we determine the stellar rotation period to be 143 days. The spectral activity tracers are strongly correlated with the purported RV signal of “planet b,” and the 48-day period is an integer fraction (1/3) of the stellar rotation period. We conclude that Kapteyn b is not a planet in the Habitable Zone, but an artifact of stellar activity.

It’s not really my area of specialism but it seemed an interesting conclusions so I had a skim through the rest of the paper. Here’s the pertinent figure, Figure 3,

[Figure 3 from Robertson et al.]

It looks like difficult data to do a correlation analysis on, and there are lots of questions to be asked about the form of the errors and how the bunching of the data is handled, to give just two examples. I’d like to have seen a much more comprehensive discussion of this in the paper. In particular the statistic chosen to measure the correlation between variates is the Pearson product-moment correlation coefficient, which is intended to measure linear association between variables. There may indeed be correlations in the plots shown above, but it doesn’t look to me that a straight line fit characterizes it very well. It looks to me in some of the cases that there are simply two groups of data points…

However, that’s not the real reason for flagging this one up. The real reason is the following statement in the text:

[The offending passage from the paper, in which the p-value is interpreted as the probability of no correlation.]

Aargh!

No matter how the p-value is arrived at (see comments above), it says nothing about the “probability of no correlation”. This is an error which is sadly commonplace throughout the scientific literature, not just astronomy.  The point is that the p-value relates to the probability that the given value of the test statistic (in this case the Pearson product-moment correlation coefficient, r) would arise by chance in the sample if the null hypothesis H (in this case that the two variates are uncorrelated) were true. In other words it relates to P(r|H). It does not tell us anything directly about the probability of H. That would require the use of Bayes’ Theorem. If you want to say anything at all about the probability of a hypothesis being true or not you should use a Bayesian approach. And if you don’t want to say anything about the probability of a hypothesis being true or not then what are you trying to do anyway?
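
To make the distinction concrete, here is a minimal sketch (with made-up data, nothing to do with Kapteyn’s star) that estimates a p-value for a Pearson correlation by permutation; note that what it approximates is the probability of a correlation at least this large given no correlation, i.e. P(r|H), and not the probability that there is no correlation.

import numpy as np

rng = np.random.default_rng(1)

# Made-up data: two weakly related variates (purely illustrative)
x = rng.normal(size=40)
y = 0.3 * x + rng.normal(size=40)

def pearson_r(a, b):
    return np.corrcoef(a, b)[0, 1]

r_obs = pearson_r(x, y)

# Permutation estimate of the p-value: the chance of |r| at least this large
# if the variates were uncorrelated -- i.e. P(r|H), not P(H|r).
perm_r = np.array([pearson_r(x, rng.permutation(y)) for _ in range(10_000)])
p_value = np.mean(np.abs(perm_r) >= np.abs(r_obs))

print(f"r = {r_obs:.3f}, permutation p-value = {p_value:.4f}")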

If I had my way I would ban p-values altogether, but if people are going to use them I do wish they would be more careful about the statements they make about them.


by telescoper at May 20, 2015 09:38 AM

Jester - Resonaances

Antiprotons from AMS
This week the AMS collaboration released the long expected measurement of the cosmic ray antiproton spectrum.  Antiprotons are produced in our galaxy in collisions of high-energy cosmic rays with interstellar matter, the so-called secondary production.  Annihilation of dark matter could add more antiprotons on top of that background, which would modify the shape of the spectrum with respect to the prediction from the secondary production. Unlike for cosmic ray positrons, in this case there should be no significant primary production in astrophysical sources such as pulsars or supernovae. Thanks to this, antiprotons could in principle be a smoking gun of dark matter annihilation, or at least a powerful tool to constrain models of WIMP dark matter.

The new data from the AMS-02 detector extend the previous measurements from PAMELA up to 450 GeV and significantly reduce experimental errors at high energies. Now, if you look at the promotional material, you may get the impression that a clear signal of dark matter has been observed. However, experts unanimously agree that the brown smudge in the plot above is just shit, rather than a range of predictions from the secondary production. At this point, there are certainly no serious hints of a dark matter contribution to the antiproton flux. A quantitative analysis of this issue appeared in a paper today. Predicting the antiproton spectrum is subject to large experimental uncertainties about the flux of cosmic ray protons and about the nuclear cross sections, as well as theoretical uncertainties inherent in models of cosmic ray propagation. The data and the predictions are compared in this Jamaican band plot. Apparently, the new AMS-02 data are situated near the upper end of the predicted range.

Thus, there is currently no hint of dark matter detection. However, the new data are extremely useful to constrain models of dark matter. New constraints on the annihilation cross section of dark matter are shown in the plot to the right. The most stringent limits apply to annihilation into b-quarks or into W bosons, which yield many antiprotons after decay and hadronization. The thermal production cross section - theoretically preferred in a large class of WIMP dark matter models - is, in the case of b-quarks, excluded for dark matter particle masses below 150 GeV. These results provide further constraints on models addressing the hooperon excess in the gamma ray emission from the galactic center.

More experimental input will allow us to tune the models of cosmic ray propagation to better predict the background. That, in turn, should lead to  more stringent limits on dark matter. Who knows... maybe a hint for dark matter annihilation will emerge one day from this data; although, given the uncertainties,  it's unlikely to ever be a smoking gun.

Thanks to Marco for comments and plots. 

by Jester (noreply@blogger.com) at May 20, 2015 08:40 AM

Jester - Resonaances

What If, Part 1
This is the do-or-die year, so Résonaances will be dead serious. This year, no stupid jokes on April Fools' day: no Higgs in jail, no loose cables, no discovery of supersymmetry, or such. Instead, I'm starting with a new series "What If" inspired  by XKCD.  In this series I will answer questions that everyone is dying to know the answer to. The first of these questions is

If HEP bloggers were Muppets,
which Muppet would they be? 

Here is  the answer.

  • Gonzo the Great: Lubos@Reference Frame (on odd-numbered days)
    The one true uncompromising artist. Not treated seriously by other Muppets, but adored by chickens.
  • Animal: Lubos@Reference Frame (on even-numbered days)
    My favorite Muppet. Pure mayhem and destruction. Has only two modes: beat it, or eat it.
  • Swedish Chef: Tommaso@Quantum Diaries Survivor
    The Muppet with a penchant for experiment. No one understands what he says but it's always amusing nonetheless.
  • Kermit the Frog: Matt@Of Particular Significance
    Born Muppet leader, though not clear if he really wants the job.
  • Miss Piggy: Sabine@Backreaction
    Not the only female Muppet, but certainly the best known. Admired for her stage talents but most of all for her punch.
  • Rowlf: Sean@Preposterous Universe
    The real star and one-Muppet orchestra. Impressive as an artist and as a comedian, though some complain he's gone to the dogs.

  • Statler and Waldorf: Peter@Not Even Wrong
    Constantly heckling other Muppets from the balcony, yet every week back for more.
  • Fozzie Bear:  Jester@Résonaances
    Failed stand-up comedian. Always stressed that he may not be funny after all.
     
If you have a match for  Bunsen, Beaker, or Dr Strangepork, let me know in the comments.

In preparation:
-If theoretical physicists were smurfs... 

-If LHC experimentalists were Game of Thrones characters...
-If particle physicists lived in Middle-earth... 

-If physicists were cast for Hobbit's dwarves... 
and more. 


by Jester (noreply@blogger.com) at May 20, 2015 08:39 AM

May 19, 2015

Emily Lakdawalla - The Planetary Society Blog

Two Months from Pluto!
Two months. Eight and a half weeks. 58 days. It's a concept almost too difficult to grasp: we are on Pluto's doorstep.

May 19, 2015 11:12 PM

arXiv blog

Quantum Life Spreads Entanglement Across Generations

The way creatures evolve in a quantum environment throws new light on the nature of life.

May 19, 2015 06:52 PM

ATLAS Experiment

From ATLAS Around the World: First Blog From Hong Kong

Guess who ATLAS’s youngest member is? It’s Hong Kong! We will be celebrating our first birthday in June, 2015. The Hong Kong ATLAS team comprises members from The Chinese University of Hong Kong (CUHK), The University of Hong Kong (HKU) and The Hong Kong University of Science and Technology (HKUST), operating under the umbrella of the Joint Consortium for Fundamental Physics formed in 2013 by physicists in the three universities. We have grown quite a bit since 2014. There are now four faculty members, two postdocs, two research assistants, and six graduate students in our team. In addition, five undergraduates from Hong Kong will spend a summer in Geneva at the CERN Summer Program. You can’t miss us if you are at CERN this summer (smile and say hi to us please)!

While half of our team is stationed at CERN, taking shifts and working on Higgs property analysis, SUSY searches, and muon track reconstruction software, the other half is working in Hong Kong on functionality, thermal, and radiation tests on some components of the muon system readout electronics, in collaboration with the University of Michigan group. We have recently secured funds to set up a Tier-2 computing center for ATLAS in Hong Kong, and we may work on ATLAS software upgrade tasks as well.

I have also been actively participating in education and outreach activities in Hong Kong. In October last year, I invited two representatives from the Hong Kong Science Museum to visit CERN, so that they could obtain first-hand information on its operation and on the lives and work of students and scientists. This will help them plan an exhibition there on CERN and the LHC in 2016. The timing is just right to bring the excitement of the LHC restart to Hong Kong. I have been giving talks on particle physics and cosmology for students and the general public. The latest one was just two weeks ago, for the 60th anniversary of Queen Elizabeth School, where I was a student myself many years ago. So many memories came back to me! I was an active member of the astronomy club and a frequent user of the very modest telescope we had. I knew back then that the telescope was a time machine, bringing images of the past to our eyes. How fortunate I am now to be a user of the LHC and ATLAS, the ultimate time machine, and a member of the ATLAS community studying the most fundamental questions about the universe. Even though the young students in the audience might find it difficult to understand everything we do, they can certainly feel our excitement in our quest for scientific truth.

 


Ming-chung Chu is a professor at the Department of Physics, The Chinese University of Hong Kong. He did both his undergraduate and graduate studies at Caltech. After some years as a postdoc at MIT and Caltech, he went back to Hong Kong in 1995, where he was born and grew up. He is proud to have helped bring particle physics to Hong Kong.

 


Part of the Hong Kong team in a group meeting at CERN. Photo courtesy Prof. Luis Flores Castillo.


The humble telescope I used at high school pointed me both to the past and to the future. Photo courtesy Tat Hung Wan.


Secondary school students in Hong Kong after a popular science talk on particle physics at the Chinese University of Hong Kong. Photo courtesy Florence Cheung.

by mchu at May 19, 2015 06:14 PM

astrobites - astro-ph reader's digest

Stealing Hot Gas from Galaxies

Title: Ram Pressure Stripping of Hot Coronal Gas from Group and Cluster Galaxies and the Detectability of Surviving X-ray Coronae
Authors: Rukmani Vijayaraghavan & Paul M. Ricker
First Author’s institution: Dept. of Astronomy, University of Illinois at Urbana-Champaign
Status: Accepted to MNRAS

Making crass generalizations, galaxies are really just star forming machines that come in a variety of shapes and sizes. They form in dark matter halos, and grow over time as they accrete gas and merge with other galaxies. Left to their own devices, they would slowly turn this gas into stars, operating in a creative balance between the outflow of gas from galaxies, through supernova driven winds, for example, and the ongoing inflow of gas from the galaxy’s surroundings. Eventually, galaxies will convert nearly all of their gas into stars, and “die” out. However, galaxies often do not evolve in isolation. This is the case for galaxies in groups and clusters of galaxies, and the effects of those environments prove detrimental to our star forming machines.

In this simple picture, cold gas contained within the disks of galaxies acts directly as star formation fuel. However, galaxies are also surrounded by hot, gaseous coronae (think millions of degrees) that act as reservoirs for gas that may eventually cool, fall into the galaxy, and form stars. Removing the cold gas will immediately stop star formation, while removing the hot coronae will quietly shut off star formation, cutting off the supply of more gas. Rather dramatically, this removal of hot gas and delayed shut off is referred to as “strangulation”. In galaxy groups and clusters, both cold and hot gas can be violently removed from galaxies as they travel through the hot gas interspersed throughout the group/cluster (called the intracluster medium, or ICM). This violent removal of gas is known as ram pressure stripping (RPS), which again, can lead to strangulation. However, some galaxies survive this process, or are only partially affected. The authors of today's astrobite focus on the strangulation process, how the hot coronae of galaxies are removed, and how they may even survive as galaxies move through clusters.
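
For a rough sense of the numbers, the ram pressure a galaxy feels scales as P_ram ~ rho_ICM * v^2 (the classic Gunn & Gott argument); gas is stripped where this exceeds the gravitational restoring pressure that binds it to the galaxy. A minimal order-of-magnitude sketch in Python, using generic "typical cluster" values that are assumptions for illustration rather than numbers from the paper:

    # Order-of-magnitude ram pressure estimate, P_ram ~ rho_ICM * v^2 (cgs units).
    # The inputs are generic cluster values assumed for illustration.
    m_p = 1.67e-24      # proton mass [g]
    n_icm = 1e-3        # ICM particle number density [cm^-3]
    v = 1000e5          # galaxy speed through the ICM: 1000 km/s in cm/s

    rho_icm = n_icm * m_p
    p_ram = rho_icm * v**2

    print(f"ram pressure ~ {p_ram:.1e} dyne/cm^2")
    # Compare this with the gravitational restoring pressure of the corona at a
    # given radius to decide whether the gas there gets stripped.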

Simulating Galactic Strangulation

The authors construct two hydrodynamical simulations, one each for a galaxy group and galaxy cluster containing 26 and 152 galaxies respectively. Each galaxy in their simulation has a mass greater than 10^9 solar masses, and the group/cluster has a total mass of 3.2×10^13 / 1.2×10^14 solar masses. They make some idealizations in order to more cleanly isolate the effects of the group/cluster environment on their galaxies. Their galaxies all exist in spherical dark matter halos, with the dark matter implemented using live particles. The authors only include the hot gaseous halos, also spherical, for their galaxies, and leave out the cold gas, as their focus is the strangulation process.


Figure 1: Images of the galaxy group (top) and galaxy cluster (bottom) as viewed through gas temperature projections, for the initial conditions (left) and after about 2 Gyr of evolution (right). The galaxies stand out as the denser, blue dots surrounded by nearly spherical gaseous halos. On the right, there appear to be far fewer galaxies in both cases, as their hot gas has been removed, and the hot gas remaining in galaxies is severely disrupted. (Source: Figures 3 and 5 of Vijayaraghavan and Ricker 2015)

Figure 1 shows temperature projections of the initial conditions, and the group/cluster after 2.0 Gyr of evolution. The blue circles are galaxies, surrounded by warmer gaseous halos; the images are centered on the centers of the even hotter gas contained within the galaxy group/cluster. As shown, there is significant hot corona gas loss over time from the galaxies, yet some are still able to hold onto their gas. This is shown even more quantitatively in Figure 2, giving the averaged mass profiles of galaxies (current mass over initial mass as a function of radius) in the group (left) and the cluster (right) over time. Their results show that about 90% of the gas bound to galaxies is removed after 2.4 Gyr, and that the process is generally slower in groups than clusters.


Figure 2: Averaged gas mass profiles as a function of radius for the galaxies in the group (left) and the cluster (right) over time. The profiles are normalized by the initial gas profile. The solid lines show all galaxies, while the dashed lines give galaxies with initial masses greater than 10^11 solar masses, and dash-dotted those with initial masses less than 10^11 solar masses. (Source: Figure 6 of Vijayaraghavan and Ricker 2015)

Bridging Simulations and Observations

Aside from studying the stripping and gas loss processes in detail, the authors seek to make observational predictions for what (if any) of the hot gas can be observed in galaxies in groups and clusters. To do this, the authors take their simulations and make synthetic X-ray observations of their group and cluster galaxies. Figure 3 gives an example of one of these maps for the galaxy group at about 1 Gyr. Shown is the temperature projection on left (similar to Figure 1), next to a 40 ks and 400 ks exposure mock X-ray observation.


Figure 3: Temperature projection (left) and mock X-ray emission maps (center and right) for the galaxy group at about 1 Gyr. The maps are shown for a 40 kilosecond (ks) and 400 ks exposure time. As shown, some of the galaxy corona gas is visible for long enough X-ray exposures. The red central dot in the X-ray images is the X-ray emission from the much hotter gas belonging to the galaxy group. (Source: Figure 10 of Vijayaraghavan and Ricker 2015)

The authors find that the tails of hot gas coming off of stripped galaxies, and the remaining hot gas bound to the galaxies, can be observed for 1 – 2 Gyr after they first start being stripped using a 40 kilosecond (ks) exposure with the Chandra X-ray telescope. This is a fairly long exposure, however, and the authors suggest that the hot gas can be detected by making multiple, shorter observations of many galaxies in clusters and stacking the resulting images together. As suggested by the rate of gas stripping between galaxy groups and clusters, they suggest a successful detection is more likely in galaxy groups, where stripping is a slower process.
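
The stacking trick works because of simple Poisson statistics: adding up N short exposures of similar galaxies accumulates the same counts as one long exposure, and the signal-to-noise ratio grows roughly as the square root of the total time. A toy sketch with invented count rates:

    import numpy as np

    source_rate = 2e-3   # corona counts per second (made-up value)
    bkg_rate = 1e-2      # background counts per second in the same aperture (made-up)
    t_short = 10e3       # one 10 ks snapshot
    n_stack = 40         # stack 40 snapshots -> 400 ks of total exposure

    def snr(total_time):
        """Approximate S/N for a Poisson source sitting on a background."""
        s = source_rate * total_time
        b = bkg_rate * total_time
        return s / np.sqrt(s + b)

    print(f"single 10 ks snapshot: S/N ~ {snr(t_short):.1f}")
    print(f"stack of {n_stack} snapshots: S/N ~ {snr(n_stack * t_short):.1f}")
    # The stacked S/N is sqrt(40) ~ 6 times better than a single snapshot.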

Towards Understanding Strangulation

This work dives into how a galaxy’s environment affects its evolution. Galaxies’ movement through galaxy groups and galaxy clusters throws a wrench into these star-forming machines. This work also presents some exciting suggestions that, by combining current and upcoming X-ray observations, we may be able to directly detect this strangulation in action.

by Andrew Emerick at May 19, 2015 05:03 PM

Peter Coles - In the Dark

Dust, by Phyllis King

I do not know what dust is.
I do not know where it comes from.
I only know that it settles on things.
I cannot see it in the air or watch it fall.
Sometimes I’m home all day
But I never see it sliding about looking for a place to rest when my back is turned.
Does it wait ’til I go out?
Or does it happen in the night when I go to sleep?
Dust is not fussy about the places it chooses
Though it seems to prefer still objects.
Sometimes, out of kindness, I let it lie for weeks.
On some places it will lie forever
However, dust holds no grudges and once removed
It will always return in a friendly way.

by Phyllis April King


by telescoper at May 19, 2015 05:03 PM

Clifford V. Johnson - Asymptotia

Ready for the day…
I have prepared the Tools of the Office of Dad*. -cvj *At least until lunchtime. Then, another set to prep... Click to continue reading this post

by Clifford at May 19, 2015 04:45 PM

Clifford V. Johnson - Asymptotia

‘t Hooft on Scale Invariance…
Worth a read: This is 't Hooft's summary (link is a pdf) of a very interesting idea/suggestion about scale invariance and its possible role in finding an answer to a number of puzzles in physics. (It is quite short, but I think I'll need to read it several times and mull over it a lot.) It won the top Gravity Research Foundation essay prize this year, and there were several other interesting essays in the final list too. See here. -cvj Click to continue reading this post

by Clifford at May 19, 2015 04:27 PM

ZapperZ - Physics and Physicists

Review of Leonard Mlodinow's "The Upright Thinkers"
This is a review of physicist Leonard Mlodinow's new book "The Upright Thinkers: The Human Journey from Living in Trees to Understanding the Cosmos."

In it, he debunks the myths about famous scientists and how major discoveries and ideas came about.

With it, he hopes to correct the record on a number of counts. For instance, in order to hash out his theory of evolution, Darwin spent years post-Galapagos sifting through research and churning out nearly 700 pages on barnacles before his big idea began to emerge. Rather than divine inspiration, Mlodinow says, achieving real innovation takes true grit, and a willingness to court failure, a lesson we'd all be wise to heed.

“People use science in their daily lives all the time whether or not it's what we think of as ‘science,’” he continues. “Data comes in that you have to understand. Life's not simple. It requires patience to solve problems, and I think science can teach you that if you know what it really is.”

Scientists would agree. Recently, psychologist Angela Duckworth has begun overturning fundamental conventional wisdom about the role intelligence plays in our life trajectories with research illustrating that, no matter the arena, it’s often not the smartest kids in the room who become the most successful; it’s the most determined ones.

As I've said many times on here, there is a lot of value in learning science, even for non-scientists, IF there is a conscious effort to reveal and convey the process of analytic, systematic thinking. We all live in a world where we try to find correlations among many things, and then try to figure out the cause-and-effect. This is the only way we make sense of our surroundings, and how we acquire knowledge of things. Science allows us to teach this skill to students, and to make them aware of how we decide that something is valid.

This is what is sadly lacking today, especially in the world of politics and social policies.

Zz.

by ZapperZ (noreply@blogger.com) at May 19, 2015 04:18 PM

ZapperZ - Physics and Physicists

Record Number of Authors In Physics Paper
I don't know why this has been making the news a lot since last week. I suppose it must be a landmark event or something.

The latest paper on the Higgs is making the news, not for its results, but for setting the record for the largest number of authors on a paper, 5154 of them.

Only the first nine pages in the 33-page article, published on 14 May in Physical Review Letters, describe the research itself — including references. The other 24 pages list the authors and their institutions.

The article is the first joint paper from the two teams that operate ATLAS and CMS, two massive detectors at the Large Hadron Collider (LHC) at CERN, Europe’s particle-physics lab near Geneva, Switzerland. Each team is a sprawling collaboration involving researchers from dozens of institutions and countries.

And oh yeah, they reduced the uncertainty in the Higgs mass to 0.25%, but who cares about that!

This is neither interesting nor surprising to me. The number of collaborators in each of the ATLAS and CMS detector is already huge by themselves. So when they pool together their results and analysis, it isn't surprising that this happens.

Call me silly, but what I was more surprised with, and it is more unexpected, is that the research article itself is "nine pages". I thought PRL always limits its papers to only 4 pages!

BTW, this paper is available for free under the Creative Commons License, you may read it for yourself.

Zz.

by ZapperZ (noreply@blogger.com) at May 19, 2015 03:47 PM

Clifford V. Johnson - Asymptotia

Quick Experiment…
On my way back from commencement day on campus last Friday I got to spend a bit of time on the subway, and for the first time in a while I got to do a quick sketch. (I have missed the subway so much!) Yesterday, at home, I found myself with a couple of new brushes that I wanted to try out, and so I did a brushed ink sketch from the sketch... quick_ink_experimentIt felt good to flow the ink around - haven't done that in a while either. Then I experimented with splashing a bit of digital colour underneath it. (This is all with the graphic book project in mind, where I think at least one story might [...] Click to continue reading this post

by Clifford at May 19, 2015 03:34 PM

Symmetrybreaking - Fermilab/SLAC

Looking to the heavens for neutrino masses

Scientists are using studies of the skies to solve a neutrino mystery.

Neutrinos may be the lightest of all the particles with mass, weighing in at a tiny fraction of the mass of an electron. And yet, because they are so abundant, they played a significant role in the evolution and growth of the biggest things in the universe: galaxy clusters, made up of hundreds or thousands of galaxies bound together by mutual gravity.

Thanks to this deep connection, scientists are using these giants to study the tiny particles that helped form them. In doing so, they may find out more about the fundamental forces that govern the universe.

Curiously light

When neutrinos were first discovered, scientists didn’t know right away if they had any mass. They thought they might be like photons, which carry energy but are intrinsically weightless.

But then they discovered that neutrinos came in three different types and that they can switch from one type to another, something only particles with mass could do.

Scientists know that the masses of neutrinos are extremely light, so light that they wonder whether they come from a source other than the Higgs field, which gives mass to the other fundamental particles we know. But scientists have yet to pin down the exact size of these masses.

It’s hard to measure the mass of such a tiny particle with precision.

In fact, it’s hard to measure anything about neutrinos. They are electrically neutral, so they are immune to the effects of magnetic fields and related methods physicists use to detect particles. They barely interact with other particles at all: Only a more-or-less direct hit with an atomic nucleus can stop a neutrino, and that doesn’t happen often.

More than a trillion neutrinos pass through your body each second from the sun alone, and almost none of those end up striking any of your atoms. Even the densest matter is nearly transparent to neutrinos. However, by creating beams of neutrinos and by building large, sensitive targets to catch neutrinos from nuclear reactors and the sun, scientists have been able to detect a small portion of the particles as they pass through.

In experiments so far, scientists have estimated that the total mass of the three types of neutrinos together is roughly between 0.06 electronvolts and 0.2 electronvolts. For comparison, an electron’s mass is 511 thousand electronvolts and a proton weighs in at 938 million electronvolts.

Because the Standard Model—the theory describing particles and the interactions governing them—predicts massless neutrinos, finding the exact neutrino mass value will help physicists modify their models, yielding new insights into the fundamental forces of nature.

Studying galaxy clusters could provide a more precise answer.

Footprints of a neutrino

One way to study galaxy clusters is to measure the cosmic microwave background, the light traveling to us from 380,000 years after the big bang. During its 13.8-billion-year journey, this light passed through and near all the galaxies and galaxy clusters that formed. For the most part, these obstacles didn’t have a big effect, but taken cumulatively, they filtered the CMB light in a unique way, given the galaxies’ number, size and distribution.

The filtering affected the polarization—the orientation of the electric part of light—and originated in the gravitational field of galaxies. As CMB light traveled through the gravitational field, its path curved and its polarization twisted very slightly, an effect known as gravitational lensing. (This is a less dramatic version of lensing familiar from the beautiful Hubble Space Telescope images.)

The effect is similar to the one that got everyone excited in 2014, when researchers with the BICEP2 telescope announced they had measured the polarization of CMB light due to primordial gravitational waves, which subsequent study showed to be more ambiguous.

That ambiguity won’t be a problem here, says Oxford University cosmologist Erminia Calabrese, who studies the CMB on the Atacama Cosmology Telescope Polarization project. “There is one pattern of CMB polarization that is generated only by the deflection of the CMB radiation.” That means we won’t easily mistake gravitational lensing for anything else.

Small and mighty

Manoj Kaplinghat, a physicist at the University of California at Irvine, was one of the first to work out how neutrino mass could be estimated from CMB data alone. Neutrinos move very quickly relative to stuff like atoms and the invisible dark matter that binds galaxies together. That means they don’t clump up like other forms of matter, but their small mass still contributes to the gravitational field.

Enough neutrinos, even fairly low-mass ones, can deprive a newborn galaxy of a noticeable amount of mass as they stream away, possibly throttling the growth of galaxies that can form in the early universe. It’s nearly as simple as that: Heavier neutrinos mean galaxies must grow more slowly, while lighter neutrinos mean  faster galaxy growth.
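
The reason such tiny masses matter cosmologically is that they add up over the enormous abundance of relic neutrinos. Using the standard relation Omega_nu h^2 ≈ (sum of m_nu) / 93.1 eV, and assumed values for the Hubble parameter and the matter density, the mass range quoted above translates into a small but non-negligible slice of the matter budget:

    # Convert a total neutrino mass into a cosmic density fraction via the
    # standard relation Omega_nu * h^2 ~ sum(m_nu) / 93.1 eV.
    h = 0.68          # dimensionless Hubble parameter (assumed)
    omega_m = 0.31    # total matter density fraction (assumed)

    for sum_mnu_ev in (0.06, 0.2):
        omega_nu = sum_mnu_ev / 93.1 / h**2
        frac = 100 * omega_nu / omega_m
        print(f"sum m_nu = {sum_mnu_ev:.2f} eV -> Omega_nu = {omega_nu:.4f} "
              f"({frac:.1f}% of the matter density)")
    # Even the lower end gives a few tenths of a percent of all matter, enough
    # to measurably slow the growth of galaxies and clusters.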

Kaplinghat and colleagues realized that the polarization of the CMB provides a measure of the total amount of gravity from galaxies in the form of gravitational lensing, which, working backward, constrains the mass of neutrinos. “When you put all that together, what you realize is you can do a lot of cool neutrino physics,” he says.

Of course the CMB doesn’t provide a direct measurement of the neutrino mass. From the point of view of cosmology, the three types of neutrinos are indistinguishable. As a result, what CMB polarization gives us is the total mass of all three types together.

However, other projects are working on the other end of this puzzle. Experiments such as the Main Injector Neutrino Oscillation Search, managed by Fermilab, have determined the differences in mass between the different neutrino types.

Depending on which neutrino is heaviest, we know how the masses of the other two types of neutrinos relate. If we can figure out the total mass, we can figure out the masses of each one. Together, cosmological and terrestrial measurements will get us the individual neutrino masses that neither is able to alone.
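
Here is a minimal sketch of that last step: given a hypothetical measured total mass and the approximate mass-squared splittings from oscillation experiments (standard approximate values, assumed here), a simple root-find returns the individual masses. Normal ordering, with the lightest state first, is assumed.

    import numpy as np
    from scipy.optimize import brentq

    DM21 = 7.5e-5    # "solar" mass-squared splitting [eV^2], approximate
    DM31 = 2.45e-3   # "atmospheric" splitting [eV^2], normal ordering assumed

    def masses_from_sum(total_ev):
        """Individual neutrino masses (eV) for a given total, normal ordering."""
        def excess(m1):
            m2 = np.sqrt(m1**2 + DM21)
            m3 = np.sqrt(m1**2 + DM31)
            return m1 + m2 + m3 - total_ev
        m1 = brentq(excess, 0.0, total_ev)   # solve for the lightest mass
        return m1, np.sqrt(m1**2 + DM21), np.sqrt(m1**2 + DM31)

    for total in (0.06, 0.12, 0.20):         # hypothetical cosmological sums
        m1, m2, m3 = masses_from_sum(total)
        print(f"sum = {total:.2f} eV -> m = {m1:.3f}, {m2:.3f}, {m3:.3f} eV")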

The space-based Planck observatory and the POLARBEAR project in northern Chile have already yielded preliminary results in this search. And scientists at ACTPol, located at high elevation in Chile's Atacama Desert, are working on this as well. Once the experiments are running at their highest precision, they will pin down the neutrino mass as well as the best estimates we have, down to the lowest values allowed, Calabrese says.

Progress is necessarily slow: The gravitational lensing pattern comes from seeing small patterns emerging from light captured across a large swath of the sky, much like the image in an Impressionist painting arises from abstract brushstrokes that look like very little by themselves.

In more scientific terms, it’s a cumulative, statistical effect, and the more data we have, the better chance we have to measure the lensing effect—and the mass of a neutrino.

 


by Matthew R. Francis at May 19, 2015 01:00 PM

astrobites - astro-ph reader's digest

The Next Transit Hunters
  • Title: The Next Great Exoplanet Hunt
  • Authors: Kevin Heng and Joshua Winn
  • First Author’s Institution: University of Bern
  • Published in American Scientist

How do you answer when the theorist frowns and says: “Exoplanetary science isn’t fundamental, it’s just applied physics—no offense, of course”. Your reply might be: “None taken Dr. X, yes, we are not expecting to gain insights into grand unified theories by hunting for exoplanets, but the stakes are nevertheless high. We are on the verge of the Copernican revolution all over again: we could find signs of life out there, however small, removing humanity from the center of the biological Universe. Are you with us?”

Indeed, the stakes are high, and the hunt is on. For the last three decades there has been a rush of activity to find exoplanets. We have found many. The most successful method to date is the transit method. Kepler, the most successful transit mission to date, has confirmed the existence of over 1000 exoplanets, two thirds of currently known planets.

According to Heng & Winn, the authors of today’s paper, the long-term strategy of exoplanet hunting is clear. First you find them. Second, you characterize them. Third, you search for biomarkers in their atmospheres. The first two steps both have a number of maturing methods, but we are still finding our footing with the third step. In this paper Heng & Winn largely focus on transiting planets—planets whose atmospheres we can study for biomarkers. Why? Read on.

Space-based transits versus ground based?

Initially, exoplanet transits were studied by ground-based observatories. They have problems: the Sun is periodically in the way (usually during the day), and Earth's atmosphere interferes with the observations. These problems can be circumvented by launching telescopes into space, see the figure below. In space, the precision is only limited by fundamental photon counting noise; we can't ask for anything better. Granted, launching things into space is expensive, but it is getting cheaper with help from the private sector. Notwithstanding, ground-based surveys will still most likely continue to give you the most bang for your buck, and will continue to play a strong complementary role to space-based transit missions in the future.


Figure 1: Precision in space is better. The upper panel shows a planetary transit observed from the ground with a 1.2m diameter telescope, while the lower panel shows a transit observed with Kepler (1.0m). The precision in space is higher: we don’t have to deal with the atmosphere, and our measurements are not interrupted periodically with the Sun rising every day. Figure 1 from the paper.

Characterizing exoplanetary atmospheres

The atmospheres of transiting planets can be studied and analyzed for biomarkers via transit spectroscopy. There are two main ways. First, during a transit, some of the starlight can shine through the atmosphere of the planet. We can then look for atmospheric absorption features by contrasting the observed spectrum while transiting and while not. The second way is to study occultations. The planet itself reflects light from its host star, which contains information about its atmospheric structure. We can infer how much light is reflected by detecting the drop in brightness as the planet travels behind the star.

However, not all exoplanets are created equal for atmospheric characterization. This characterization is easiest for Hot Jupiters (big Jupiter-size planets that orbit close to their host star), which tend to have puffy atmospheres. Heng & Winn note a stark contrast between the impressive sounding 1500 confirmed exoplanets and the relatively small number of exoplanets whose atmospheres we can currently meaningfully characterize: only about a dozen (take a look at Figure 2). Most of them are hot gas giants, flaming puffy planets significantly larger than the Earth: not habitable. We want to change that, and go for the gold: habitable planets.


Figure 2: Exoplanetary atmospheres are hard to study. The diagram shows stellar V magnitude, or brightness, versus the strength of a planetary transit signal at 1 atmospheric scale height, a measure of how easy it is to characterize an exoplanet’s atmosphere. The easiest atmospheres are to the upper right; the hardest to the lower left. The curves show where Hubble (red), and JWST (blue) can obtain a spectrum with resolution R, and a signal-to-noise-ratio of S/N. We see that JWST will enable us to probe a lot more atmospheres! Figure 2 from the paper.
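
The vertical axis of Figure 2, the signal at one atmospheric scale height, can be estimated in a few lines: the transit depth is (R_p/R_s)^2, and one extra scale height H = kT/(mu m_H g) of atmosphere changes it by roughly 2 R_p H / R_s^2. A sketch with representative, assumed parameters for a hot Jupiter and an Earth twin around a Sun-like star:

    k_B, G, m_H = 1.38e-23, 6.67e-11, 1.67e-27        # SI units
    R_sun = 6.96e8
    R_jup, M_jup = 7.15e7, 1.90e27
    R_earth, M_earth = 6.37e6, 5.97e24

    def transit_signal(Rp, Mp, Rs, T_eq, mu):
        """Transit depth and the extra depth from one atmospheric scale height."""
        g = G * Mp / Rp**2
        H = k_B * T_eq / (mu * m_H * g)                # scale height [m]
        return (Rp / Rs) ** 2, 2 * Rp * H / Rs**2

    # Assumed, representative parameters: a 1300 K H2-dominated hot Jupiter
    # (mu ~ 2.3) and an Earth twin with an N2/O2 atmosphere (mu ~ 29).
    for name, Rp, Mp, T, mu in [("hot Jupiter", R_jup, M_jup, 1300, 2.3),
                                ("Earth twin", R_earth, M_earth, 288, 29)]:
        depth, signal = transit_signal(Rp, Mp, R_sun, T, mu)
        print(f"{name}: transit depth {depth:.1e}, one-scale-height signal {signal:.1e}")
    # Roughly 1e-2 and 5e-5 for the hot Jupiter versus 8e-5 and 2e-7 for the
    # Earth twin -- which is why hot Jupiters dominate the characterized sample.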

Planned Transit-Hunting Machines: Finders, and Characterizers

According to Heng & Winn, the path to finding habitable planets is clear. We need to detect Earth sized planets around the nearest, and brightest stars. These are the systems that maximize the atmospheric signal-to-noise ratio. We could then proceed to search for biosignatures with enough statistics to robustly probe for signs of life. This, however, requires new telescopes to first find them, and then characterize them. What is planned?

First are the transit-finders. Kepler changed the game, but left most of the sky relatively unexplored. Its success has inspired a fleet of transit-hunting successors, space missions along with complementary efforts on the ground. Like Heng & Winn, we will focus here on the space missions, some of which are contrasted in the figure below. NASA's TESS mission is scheduled to be launched in 2017, and will scan the entire sky in a systematic manner, specializing in finding nearby short-period planets. Conversely, the European CHEOPS mission, also scheduled to fly in 2017, focuses on studying transits one star at a time. Further ahead, in 2024, comes the PLATO mission, the most ambitious of all. It will borrow hunting strategies from Kepler, TESS and CHEOPS, and aims to build a catalog of true Earth analogs: Earth-like exoplanets orbiting within the habitable zones of Sun-like stars.


Figure 3: The sensitivity of transit-hunters Space based missions, like Kepler, TESS, and PLATO, are sensitive to a wide range of exoplanet sizes, while current ground-based surveys can only find the larger sized planets. TESS and PLATO specialize in finding planets all over the the sky, while the original Kepler mission stared deeper and fainter at a single field of view. Orbital periods are not shown in this diagram. LY means one light year. Figure 3 from the paper.

Then there are the atmospheric-characterizers. These are the big telescopes that will focus on recording transmission spectra during planetary transits. This will largely fall into the hands of the much anticipated JWST space telescope, and the upcoming extremely large ground based telescopes. Ideally, before these expensive shared-time observatories come online, we would want to have compiled a list of our best candidates: our top 10 sexiest transiting planets. Let’s get cracking!

by Gudmundur Stefansson at May 19, 2015 03:16 AM

May 18, 2015

The n-Category Cafe

The Revolution Will Not Be Formalized

After a discussion with Michael Harris over at the blog about his book Mathematics without apologies, I realized that there is a lot of confusion surrounding the relationship between homotopy type theory and computer formalization — and that moreover, this confusion may be causing people to react negatively to one or the other due to incorrect associations. There are good reasons to be confused, because the relationship is complicated, and various statements by prominent members of both communities about a “revolution” haven’t helped matters. This post and its sequel(s) are my attempt to clear things up.

In this post I will talk mainly about computer formalization of mathematics, independently of homotopy type theory. There are multiple applications of computers to mathematics, and people sometimes confuse computer verification of proofs with computerized automated proof-finding. Homotopy type theory has very little to do with the latter, so henceforth by “formalization” I will mean only the verification of proofs.

This often means the verification of pre-existing proofs, but it is also possible to use a computer to help construct a proof and verify it at the same time. The tools that we use to verify proofs incorporate a small amount of automation, so that this process can sometimes save a small amount of effort over writing a proof by hand first; but at least in the realm of pure mathematics, the automation mainly serves to save us from worrying about details that the computer cares about but that we wouldn’t have worried about in a paper proof anyway.

Computer-verified proof has been going on for decades. Recently it’s garnered a bit more attention, due partly to complete verifications of some “big-name” theorems such as the four-color theorem, the odd-order theorem, and the Kepler conjecture, whose proofs were so long or complicated or automated that reasonable mathematicians might wonder whether there was an error somewhere.

What is the future of computer-verified proof? Is it the future of mathematics? Should we be happy or worried about that prospect? Does it mean that computers will take over mathematics and leave no room for the humans? My personal opinion is that (1) computer-verified proof is only going to get more common and important, but (2) it will be a long time before all mathematics is computer-verified, if indeed that ever happens, and (3) if and when it does happen, it won’t be anything to worry about.

The reason I believe (2) is that my personal experience with computer proof assistants leads me to the conclusion that they are still very far from usable by the average mathematician on a daily basis. Despite all the fancy tools that exist now, verifying a proof with a computer is usually still a lot more work than writing that proof on paper. And that's after you spend the necessary time and effort learning to use the proof assistant tool, which generally comes with quite a passel of idiosyncrasies.

Moreover, in most cases the benefits to verifying a proof with a computer are doubtful. For big theorems that are very long or complicated or automated, so that their authors have a hard time convincing other mathematicians of their correctness by hand, there’s a clear win. (That’s one of the reasons I believe (1), because I believe that proofs of this sort are also going to get more common.) Moreover, a certain kind of mathematician finds proof verification fun and rewarding for its own sake. But for the everyday proof by your average mathematician, which can be read and understood by any other average mathematician, the benefit from sweating long hours to convince a computer of its truth is just not there (yet). That’s why, despite periodic messianic claims from various quarters, you don’t see mathematicians jumping on any bandwagon of proof verification.

Now there would be certain benefits to the mathematical community if all proofs were computer verified. Notably, of course, we could be sure that they were correct. If all submissions to journals were verified first by their authors, then referees could be mostly absolved of checking the correctness of the results and could focus on exposition and interest. (There are still certain things to check, however, such as that the computer formalization does in fact prove what the paper claims that it does.)

I can imagine that once there is a “critical mass” of mathematicians choosing to verify their own results with a computer, journals might start requiring such verification, thereby tipping the balance completely towards all mathematics being verified, in the same way that many journals now require submissions in (La)TeX. However, we’re a long way from that critical mass, because proof assistants have a long way to go before they are as easy to use as TeX. My guess is that it will take about 100 years, if it happens at all. Moreover, it won’t be a “revolution” in the sense of a sudden overthrow of the status quo. Historians might look back on it afterwards and call it a revolution, but at the time when it happens it will feel like a natural and gradual process.

As for (3), for the most part computer verification is just a supplement to ordinary mathematical reasoning. It’s a tool used by human mathematicians. We have to learn to use it correctly, and establish certain conventions, such as the fact that you’ve verified a proof with a computer doesn’t absolve you from explaining it clearly to humans. But I have faith that the mathematical community will be up to that task, and I don’t see any point in worrying right now about things that are so far in the future we can barely imagine them. (And no, I don’t believe in the AI singularity either.)

(Next time: What about HoTT?)

by shulman (viritrilbia@gmail.com) at May 18, 2015 08:32 PM

Peter Coles - In the Dark

Groovin’ High

I stumbled across this on Youtube and just had to share it. I've got this track on an old vinyl LP of Charlie Parker performances recorded live at Birdland, the famous New York jazz club named in his (Bird's) honour. I don't think any of the tracks on that album have ever been reissued on CD or for download so I was both surprised and delighted to find this. It was recorded live in 1953, so it's a bit lo-fi, but what's particularly interesting is the unusual collection of instruments. Bird is on alto sax as usual, but the rest of the band consists of Cornelius Thomas on drums, Bernie McKay on guitar and Milt Buckner on the Hammond organ. That's very far from a typical bebop band. Milt Buckner's organ accompaniment is perhaps an acquired taste but Charlie Parker clearly enjoyed this setting. He plays beautifully throughout, especially during the exciting chase sequence with the drummer near the end. The tune was written by Parker's old sparring partner Dizzy Gillespie and is based on the chords of Whispering, an old ballad written in 1920. I'm not sure why Dizzy Gillespie decided to hang his tune on that particular harmonic progression, but it's a thrill to hear Bird racing through the changes in such exhilarating style.


by telescoper at May 18, 2015 03:52 PM

Tommaso Dorigo - Scientificblogging

The Challenges Of Scientific Publishing: A Conference By Elsevier
I spent the last weekend in Berlin, attending a conference for editors organized by Elsevier. And I learnt quite a bit during two very busy days. As a newbie - I have been a handling editor for the journal "Reviews in Physics" since January this year - I did expect to learn a lot from the event; but I will admit that I decided to accept the invitation more out of curiosity for a world that is at least in part new to me than out of a professional sense of duty.

read more

by Tommaso Dorigo at May 18, 2015 03:28 PM

Quantum Diaries

Drell-Yan, Drell-Yan with Jets, Drell-Yan with all the Jets

All those super low energy jets that the LHC cannot see? The LHC can still see them.

Hi Folks,

Particle colliders like the Large Hadron Collider (LHC) are, in a sense, very powerful microscopes. The higher the collision energy, the smaller the distances we can study. Using less than 0.01% of the total LHC energy (13 TeV), we see that the proton is really just a bag of smaller objects called quarks and gluons.


This means that when two protons collide things are sprayed about and get very messy.


One of the most important processes that occurs in proton collisions is the Drell-Yan process. When a quark, e.g., a down quark d, from one proton and an antiquark, e.g., a down antiquark d̄, from an oncoming proton collide, they can annihilate into a virtual photon (γ) or Z boson if the net electric charge is zero (or a W boson if the net electric charge is one). After briefly propagating, the photon/Z can split into a lepton and its antiparticle partner, for example into a muon and antimuon or an electron-positron pair! In pictures, quark-antiquark annihilation into a lepton-antilepton pair (the Drell-Yan process) looks like this


By the conservation of momentum, the sum of the muon and antimuon momenta will add up to the photon/Z boson  momentum. In experiments like ATLAS and CMS, this gives a very cool-looking distribution

[Figure: dimuon invariant mass spectrum measured by CMS at 7 TeV]

Plotted is the invariant mass distribution for any muon-antimuon pair produced in proton collisions at the 7 TeV LHC. The rightmost peak at about 90 GeV (about 90 times the proton's mass!) corresponds to the production of Z bosons. The other peaks represent the production of similarly well-known particles in the particle zoo that have decayed into a muon-antimuon pair. The clarity of each peak, and the fact that this plot uses only about 0.2% of the total data collected during the first LHC data collection period (Run I), mean that the Drell-Yan process is very useful for calibrating the experiments. If the experiments are able to see the Z boson, the rho meson, etc., at their correct energies, then we have confidence that the experiments are working well enough to study nature at energies never before explored in a laboratory.
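
The invariant mass plotted there is reconstructed directly from the two measured muon momenta via m^2 = (E1+E2)^2 - |p1+p2|^2 (natural units, neglecting the tiny muon mass). A minimal sketch, with made-up kinematics chosen to land on the Z peak:

    import math

    def invariant_mass(p1, p2):
        """Invariant mass (GeV) of two approximately massless particles,
        given their momentum vectors (px, py, pz) in GeV."""
        e1 = math.sqrt(sum(c * c for c in p1))
        e2 = math.sqrt(sum(c * c for c in p2))
        px, py, pz = (a + b for a, b in zip(p1, p2))
        m2 = (e1 + e2) ** 2 - (px * px + py * py + pz * pz)
        return math.sqrt(max(m2, 0.0))

    # Two back-to-back 45.6 GeV muons reconstruct to about 91.2 GeV,
    # i.e. right on the Z peak of the dimuon spectrum described above.
    mu_plus = (45.6, 0.0, 0.0)
    mu_minus = (-45.6, 0.0, 0.0)
    print(f"m(mu+ mu-) = {invariant_mass(mu_plus, mu_minus):.1f} GeV")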

However, in real life, the Drell-Yan process is not as simple as drawn above. Real collisions include the remnants of the scattered protons. Remember: the proton is bag filled with lots of quarks and gluons.


Gluons are what holds quarks together to make protons; they mediate the strong nuclear force, also known as quantum chromodynamics (QCD). The strong force is accordingly named because it requires a lot of energy and effort to overcome. Before annihilating, the quark and antiquark pair that participate in the Drell-Yan process will have radiated lots of gluons. It is very easy for objects that experience the strong force to radiate gluons. In fact, the antiquark in the Drell-Yan process originates from an energetic gluon that split into a quark-antiquark pair. Though less common, every once in a while two or even three energetic quarks or gluons (collectively called jets) will be produced alongside a Z boson.


Here is a real life Drell-Yan (Z boson) event with three very energetic jets. The blue lines are the muons. The red, orange and green “sprays” of particles are jets.

[Figure: ATLAS event display of a Z → μμ candidate produced with three jets]

 

As likely or unlikely as it may be for a Drell-Yan process to occur with additional energetic jets, the frequency at which they do occur appears to match our theoretical predictions very well. The plot below shows the likelihood (“production cross section”) of a W or Z boson produced with at least 0, 1, 2, 3, or 4(!) very energetic jets. The blue bars are the theoretical predictions and the red circles are data. Producing a W or Z boson with more energetic jets is less likely than having fewer jets. The more jets identified, the smaller the production rate (“cross section”).

[Figure: CMS measurements of W/Z + jets production cross sections compared with theory]

How about low energy jets? These are difficult to observe because experiments have high thresholds for recording any part of a collision. The ATLAS and CMS experiments, for example, are insensitive to very low energy objects, so not every piece of an LHC proton collision will be recorded. In short: sometimes a jet or a photon is too "dim" for us to detect. But unlike high energy jets, it is very, very easy for Drell-Yan processes to be accompanied by low energy jets.


There is a subtlety here. Our standard tools and tricks for calculating the probability of something happening in a proton collision (perturbation theory) assume that we are studying objects with much higher energies than the proton at rest. Radiation of very low energy gluons is a special situation where our usual calculation methods do not work. The solution is rather cool.

As we said, the Z boson produced in the quark-antiquark annihilation has much more energy than any of the low energy gluons that are radiated, so emitting a low energy gluon should not affect the system much. This is like a massive freight train pulling coal and dropping one or two pieces of coal. The train carries so much momentum and the coal is so light that dropping even a dozen pieces of coal will have only a negligible effect on the train's motion. (Dropping all the coal, on the other hand, would not only drastically change the train's motion but likely also be a terrible environmental hazard.) We can now make certain approximations in our calculation of radiating a low energy gluon, called "soft gluon factorization". The result is remarkably simple, so simple that we can generalize it to an arbitrary number of gluon emissions. This process is called "soft gluon resummation" and was formulated in 1985 by Collins, Soper, and Sterman.

Low energy gluons, even if they cannot be individually identified, still have an effect. They carry away energy, and by momentum conservation this will slightly push and kick the system in different directions.
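
A toy way to see the cumulative effect: give the Z many small transverse kicks in random directions and look at the net recoil. The kick sizes and multiplicity below are invented purely for illustration; a real calculation sums the emissions analytically, as in the Collins-Soper-Sterman resummation described above.

    import numpy as np

    rng = np.random.default_rng(42)

    def toy_recoil(n_emissions=30, mean_kick=0.5):
        """Net transverse momentum (GeV) after many soft, randomly oriented kicks."""
        kicks = rng.exponential(mean_kick, n_emissions)      # soft kick sizes
        phis = rng.uniform(0.0, 2.0 * np.pi, n_emissions)    # random directions
        px = np.sum(kicks * np.cos(phis))
        py = np.sum(kicks * np.sin(phis))
        return np.hypot(px, py)

    pt = np.array([toy_recoil() for _ in range(20_000)])
    print(f"typical Z recoil in this toy: <pT> ~ {pt.mean():.1f} GeV")
    # Thirty ~0.5 GeV kicks in random directions largely cancel, leaving the Z
    # with only a few GeV of transverse momentum -- the low-pT peak in the data.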


 

If we look at Z bosons with low momentum from the CDF and DZero experiments, we see that the data and theory agree very well! In fact, in the DZero (lower) plot, the “pQCD” (perturbative QCD) prediction curve, which does not include resummation, disagrees with data. Thus, soft gluon resummation, which accounts for the emission of an arbitrary number of low energy radiations, is important and observable.

[Figures: Z boson transverse momentum spectra from CDF (upper) and DZero (lower) compared with theory]

In summary, Drell-Yan processes are very important at high energy proton colliders like the Large Hadron Collider. They serve as a standard candle for the experiments as well as a test of high precision predictions. The LHC Run II program has just begun, and you can count on lots of rich physics in need of studying.

Happy Colliding,

Richard (@bravelittlemuon)

 

by Richard Ruiz at May 18, 2015 03:00 PM

ZapperZ - Physics and Physicists

Electron Pairing Without Superconductivity
The interesting news from last week is the publication in Nature of the confirmation of the presence of electron pairs in strontium titanate (STO), but without superconductivity.

This is significant because this has always been a possibility, i.e. a state where the electrons pair up but do not form any long-range order or become a condensate. This phenomenon was hinted at in the cuprate superconductors, especially in the underdoped regime, where experiments such as tunneling and ARPES have shown the presence of a gap, called the pseudogap, above the critical temperature Tc. Whether this pseudogap is a precursor to the electrons forming long-range order and condensing below Tc, or whether these electrons are actually competing with those that do, is still a highly debated question.

My guess is that this paper will be a significant piece of information to that puzzle.

Zz.

by ZapperZ (noreply@blogger.com) at May 18, 2015 11:52 AM

The n-Category Cafe

Categorifying the Magnitude of a Graph

Tom Leinster introduced the idea of the magnitude of graphs (first at the Café and then in a paper). I’ve been working with my mathematical brother Richard Hepworth on categorifying this and our paper has just appeared on the arXiv.

Categorifying the magnitude of a graph, Richard Hepworth and Simon Willerton.

The magnitude of a graph can be thought of as an integer power series. For example, consider the Petersen graph.


Its magnitude starts in the following way.
$$\#P = 10 - 30q + 30q^{2} + 90q^{3} - 450q^{4} + 810q^{5} + 270q^{6} - 5670q^{7} + \dots$$

Richard observed that associated to each graph $G$ there is a bigraded group $\mathrm{MH}_{\ast,\ast}(G)$, the graph magnitude homology of $G$, that has the graph magnitude $\#G$ as its graded Euler characteristic:
$$\#G = \sum_{k,l \geqslant 0} (-1)^{k} \cdot \mathrm{rank}\bigl(\mathrm{MH}_{k,l}(G)\bigr) \cdot q^{l} = \sum_{l \geqslant 0} \chi\bigl(\mathrm{MH}_{\ast,l}(G)\bigr) \cdot q^{l}.$$
So graph magnitude homology categorifies graph magnitude in the same sense that Khovanov homology categorifies the Jones polynomial.
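
For readers who want to play with these numbers: the magnitude power series itself can be computed straight from Leinster's definition, by forming the matrix $Z$ with entries $Z_{xy} = q^{d(x,y)}$ for the shortest-path metric and summing the entries of $Z^{-1}$ as a power series in $q$. A small sketch, assuming networkx and sympy are available:

    import sympy as sp
    import networkx as nx

    q = sp.symbols('q')

    def graph_magnitude_series(G, order=7):
        """Leinster's graph magnitude #G as a power series in q, up to q**order.

        #G is the sum of the entries of Z^{-1}, with Z[i, j] = q**d(i, j) and d
        the shortest-path metric.  Since Z = I + A with A = O(q), we can expand
        Z^{-1} as the Neumann series sum_n (-A)**n and truncate at n = order.
        """
        nodes = list(G.nodes())
        n = len(nodes)
        dist = dict(nx.all_pairs_shortest_path_length(G))
        Z = sp.Matrix(n, n, lambda i, j: q**dist[nodes[i]][nodes[j]])
        A = Z - sp.eye(n)
        acc, term = sp.zeros(n, n), sp.eye(n)
        for _ in range(order + 1):
            acc += term
            term = (-A * term).applyfunc(sp.expand)
        total = sp.expand(sum(acc[i, j] for i in range(n) for j in range(n)))
        return sp.series(total, q, 0, order + 1)

    print(graph_magnitude_series(nx.petersen_graph()))
    # -> 10 - 30*q + 30*q**2 + 90*q**3 - 450*q**4 + 810*q**5 + 270*q**6 - 5670*q**7 + O(q**8)

Running it on the Petersen graph should reproduce the coefficients quoted above.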

For instance, for the Petersen graph, the ranks of $\mathrm{MH}_{k,l}(P)$ are given in the following table. You can check that the alternating sum of each row gives a coefficient in the above power series.

$$\begin{array}{r|rrrrrrrr}
l \backslash k & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\
\hline
0 & 10 & & & & & & & \\
1 & & 30 & & & & & & \\
2 & & & 30 & & & & & \\
3 & & & 120 & 30 & & & & \\
4 & & & & 480 & 30 & & & \\
5 & & & & & 840 & 30 & & \\
6 & & & & & 1440 & 1200 & 30 & \\
7 & & & & & & 7200 & 1560 & 30
\end{array}$$

Many of the properties that Tom proved for the magnitude are shadows of properties of magnitude homology and I’ll describe them here.

The definition

Let’s have a quick look at the definition first: this is a typical piece of homological algebra. For each graph $G$ we define a $k$-chain on $G$ to be a $(k+1)$-tuple of vertices of $G$ such that adjacent vertices are distinct:
$$(a_{0},\dots,a_{k}), \quad a_{i} \in \mathrm{Vertices}(G), \quad a_{i} \ne a_{i+1}.$$
We equip the graph with the path length metric, so each edge has length $1$, and we define the length of a chain to be the length of the path obtained by traversing its vertices:
$$\ell\bigl((a_{0},\dots,a_{k})\bigr) \coloneqq \sum_{i=0}^{k-1} d(a_{i},a_{i+1}).$$
The chain group $\mathrm{MC}_{k,l}(G)$ is defined to be the free abelian group on the set of $k$-chains of length $l$. We define the differential $\partial \colon \mathrm{MC}_{k,l} \to \mathrm{MC}_{k-1,l}$ by $\partial \coloneqq \sum (-1)^{i} \partial_{i}$, where
$$\partial_{i}(a_{0},\ldots,a_{k}) \coloneqq \begin{cases} (a_{0},\ldots,\widehat{a_{i}},\ldots,a_{k}) & \text{if } \ell(a_{0},\ldots,\widehat{a_{i}},\ldots,a_{k}) = l, \\ 0 & \text{otherwise}. \end{cases}$$
Here, as usual, $\widehat{x}$ means “omit $x$”. The graph magnitude homology is then defined to be the homology of this differential:
$$\mathrm{MH}_{k,l}(G) \coloneqq \mathrm{H}_{k}\bigl(\mathrm{MC}_{\ast,l}(G), \partial\bigr).$$
Unfortunately, I don’t know of any intuitive interpretation of the chain groups.
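
The complex is concrete enough to compute by brute force for small graphs, which is a good way to get a feel for the definition. Here is a rough sketch that enumerates the chains of a fixed length $l$, builds the boundary matrices, and reads off the ranks of $\mathrm{MH}_{k,l}$ over the rationals (so it sees only the free part and would miss any torsion):

    import networkx as nx
    import numpy as np

    def magnitude_homology_ranks(G, l):
        """Ranks of MH_{k,l}(G) over Q, brute-forced from the chain complex above."""
        dist = dict(nx.all_pairs_shortest_path_length(G))
        nodes = list(G.nodes())

        def chains(k):
            """All (k+1)-tuples with adjacent entries distinct and total length l."""
            found = []
            def grow(tup, length):
                if length > l:
                    return
                if len(tup) == k + 1:
                    if length == l:
                        found.append(tup)
                    return
                for v in nodes:
                    if v != tup[-1]:
                        grow(tup + (v,), length + dist[tup[-1]][v])
            for v in nodes:
                grow((v,), 0)
            return found

        C = {k: chains(k) for k in range(l + 2)}   # no k-chains of length l for k > l
        index = {k: {c: i for i, c in enumerate(C[k])} for k in C}

        def boundary(k):
            """Matrix of the differential MC_{k,l} -> MC_{k-1,l}."""
            D = np.zeros((len(C[k - 1]), len(C[k])))
            for j, c in enumerate(C[k]):
                for i in range(1, k):              # dropping an endpoint always shortens the chain
                    face = c[:i] + c[i + 1:]
                    if sum(dist[face[m]][face[m + 1]] for m in range(k - 1)) == l:
                        D[index[k - 1][face], j] += (-1) ** i
            return D

        ranks = {}
        for k in range(l + 1):
            n_k = len(C[k])
            rk_dk = np.linalg.matrix_rank(boundary(k)) if k >= 1 and n_k and C[k - 1] else 0
            rk_dnext = np.linalg.matrix_rank(boundary(k + 1)) if n_k and C[k + 1] else 0
            ranks[k] = n_k - rk_dk - rk_dnext      # dim ker(d_k) - rank(d_{k+1})
        return ranks

    print(magnitude_homology_ranks(nx.petersen_graph(), l=2))
    # {0: 0, 1: 0, 2: 30} -- matching the l = 2 row of the table above.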

Functoriality

One standard advantage of this sort of categorification is that it has functoriality where the original invariant did not. We don’t usually have morphisms between numbers or polynomials, but we do have morphisms between groups. In the case here we have the category of graphs with the morphisms sending vertices to vertices with edges either preserved or contracted, and graph magnitude homology gives a functor from that to the category of bigraded groups and homomorphisms.

Categorifying properties of magnitude

Here are some of the properties of magnitude that we can categorify. With the exception of disjoint unions, all of the other categorification results require some decidedly non-trivial homological algebra to prove.

Disjoint unions

Tom showed that magnitude is additive with respect to the disjoint union of graphs:
$$\#(G \sqcup H) = \#G + \#H.$$
Our categorification of this is the additivity of the magnitude homology:
$$\mathrm{MH}_{\ast,\ast}(G \sqcup H) \cong \mathrm{MH}_{\ast,\ast}(G) \oplus \mathrm{MH}_{\ast,\ast}(H).$$

Products

Tom showed that magnitude is multiplicative with respect to the cartesian product $\square$ of graphs:
$$\#(G \square H) = \#G \cdot \#H.$$
The categorification of this is a Künneth Theorem which says that there is a non-naturally split, short exact sequence
$$0 \to \mathrm{MH}_{\ast,\ast}(G) \otimes \mathrm{MH}_{\ast,\ast}(H) \to \mathrm{MH}_{\ast,\ast}(G \square H) \to \mathrm{Tor}\bigl(\mathrm{MH}_{\ast+1,\ast}(G), \mathrm{MH}_{\ast,\ast}(H)\bigr) \to 0.$$
If either $G$ or $H$ has torsion-free magnitude homology, then this sequence reduces to an isomorphism
$$\mathrm{MH}_{\ast,\ast}(G \square H) \cong \mathrm{MH}_{\ast,\ast}(G) \otimes \mathrm{MH}_{\ast,\ast}(H).$$
Despite quite a bit of computation, we don’t know whether any graphs have torsion in their magnitude homology.
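
Both this multiplicativity and the additivity above are easy to sanity-check numerically with the graph_magnitude_series sketch given earlier; networkx's cartesian_product implements the box product used here:

    import sympy as sp
    import networkx as nx

    # Reuses graph_magnitude_series from the earlier sketch.
    q = sp.symbols('q')
    G, H = nx.path_graph(2), nx.cycle_graph(5)

    lhs = graph_magnitude_series(nx.cartesian_product(G, H), order=5).removeO()
    rhs = sp.expand(graph_magnitude_series(G, order=5).removeO()
                    * graph_magnitude_series(H, order=5).removeO())
    rhs = sp.series(rhs, q, 0, 6).removeO()

    print(sp.simplify(lhs - rhs))   # 0, i.e. #(G box H) = #G * #H to this order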

Unions

Tom showed that a form of the inclusion-exclusion formula holds for certain graphs. We need a pile of definitions first.

Definition. A subgraph $U\subset X$ is called convex if $d_U(u,v)=d_X(u,v)$ for all $u,v\in U$.

One way of reading this is that for $u$ and $v$ in $U$ there is a geodesic joining them that lies in $U$.

Definition. Let $U\subset X$ be a convex subgraph. We say that $X$ projects to $U$ if for every $x\in X$ that can be connected by an edge-path to some vertex of $U$, there is $\pi(x)\in U$ such that for all $u\in U$ we have
$$d(x,u) = d(x,\pi(x)) + d(\pi(x),u).$$

For instance, a four-cycle graph projects to any edge, whereas a five-cycle does not project to an edge.

Definition. A projecting decomposition is a triple $(X;G,H)$ consisting of a graph $X$ and subgraphs $G$ and $H$ such that

  • $X=G\cup H$,

  • $G\cap H$ is convex in $X$,

  • $H$ projects to $G\cap H$.

Tom showed that if $(X;G,H)$ is a projecting decomposition then
$$\#X + \#(G\cap H) = \#G + \#H.$$
Our categorification of this result is that if $(X;G,H)$ is a projecting decomposition, then there is a naturally split short exact sequence
$$0\to \mathrm{MH}_{\ast,\ast}(G\cap H)\to \mathrm{MH}_{\ast,\ast}(G)\oplus \mathrm{MH}_{\ast,\ast}(H)\to \mathrm{MH}_{\ast,\ast}(X)\to 0$$
(which is a form of Mayer-Vietoris sequence) and consequently there is a natural isomorphism
$$\mathrm{MH}_{\ast,\ast}(X)\oplus \mathrm{MH}_{\ast,\ast}(G\cap H) \cong \mathrm{MH}_{\ast,\ast}(G)\oplus \mathrm{MH}_{\ast,\ast}(H).$$

Diagonality

Tom noted many examples of graphs whose magnitude has coefficients that alternate in sign; these examples included complete graphs, complete bipartite graphs, forests and graphs with up to four vertices.

Our categorification of this is that in all of these cases the magnitude homology is supported on the diagonal: we say a graph $G$ is diagonal if $\mathrm{MH}_{k,l}(G)=0$ whenever $k\neq l$. In this case the magnitude is given by
$$\#G=\sum_{l\geq 0}(-1)^l\cdot \mathrm{rank}\,\mathrm{MH}_{l,l}(G)\cdot q^l,$$
and it means in particular that the coefficients of the magnitude alternate in sign.
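As a quick sanity check (mine, not from the paper, but immediate from the definitions above): in the complete graph $K_n$ every step of a chain has length $1$, so $k$-chains of length $l$ exist only when $k=l$, and every face of such a chain strictly shortens it, so the differential vanishes. Hence
$$\mathrm{MH}_{k,l}(K_n) \cong \begin{cases} \mathbb{Z}^{\,n(n-1)^l} & k=l,\\ 0 & k\neq l,\end{cases} \qquad \#K_n = \sum_{l\geq 0}(-1)^l\, n(n-1)^l\, q^l = \frac{n}{1+(n-1)q},$$
which recovers Leinster’s formula for the magnitude of a complete graph.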

The join $G\star H$ of graphs $G$ and $H$ is obtained by adding an edge between every vertex of $G$ and every vertex of $H$. This is a very drastic operation; for instance, the diameter of the resulting join is at most $2$.

Theorem. If $G$ and $H$ are non-empty graphs then the join $G\star H$ is diagonal.

This tells us immediately that complete graphs and complete bipartite graphs are diagonal. Together with the other properties of magnitude homology mentioned above, we recover the alternating magnitude property of all the graphs noted by Leinster, as well as many more.

However, there is one graph that appears to be diagonal, at least up to degree $7$ according to our computer calculations, but which we can’t prove is diagonal: it doesn’t follow from the results above. This graph is the icosahedron graph, which is the one-skeleton of the icosahedron.

[Image: the icosahedron graph]

Where next?

There are plenty of questions that you can ask about magnitude homology. A fundamental one is whether there are graphs with the same magnitude but different magnitude homology; a related one is whether you can categorify Tom’s result (conjectured by me and David Speyer) on the magnitude of graphs which differ by certain Whitney twists.

One interesting thing demonstrated here is that magnitude has its tendrils in homological algebra, yet another area of mathematics to add to the list which includes biodiversity, enriched category theory, curvature and Minkowski dimension.

by willerton (S.Willerton@sheffield.ac.uk) at May 18, 2015 10:53 AM

John Baez - Azimuth

PROPs for Linear Systems

Eric Drexler likes to say: engineering is dual to science, because science tries to understand what the world does, while engineering is about getting the world to do what you want. I think we need a slightly less ‘coercive’, more ‘cooperative’ approach to the world in order to develop ‘ecotechnology’, but it’s still a useful distinction.

For example, classical mechanics is the study of what things do when they follow Newton’s laws. Control theory is the study of what you can get them to do.

Say you have an upside-down pendulum on a cart. Classical mechanics says what it will do. But control theory says: if you watch the pendulum and use what you see to move the cart back and forth correctly, you can make sure the pendulum doesn’t fall over!

Control theorists do their work with the help of ‘signal-flow diagrams’. For example, here is the signal-flow diagram for an inverted pendulum on a cart:

When I take a look at a diagram like this, I say to myself: that’s a string diagram for a morphism in a monoidal category! And it’s true. Jason Erbele wrote a paper explaining this. Independently, Bonchi, Sobociński and Zanasi did some closely related work:

• John Baez and Jason Erbele, Categories in control.

• Filippo Bonchi, Paweł Sobociński and Fabio Zanasi, Interacting Hopf algebras.

• Filippo Bonchi, Paweł Sobociński and Fabio Zanasi, A categorical semantics of signal flow graphs.

I’ll explain some of the ideas at the Turin meeting on the categorical foundations of network theory. But I also want to talk about this new paper that Simon Wadsley of Cambridge University wrote with my student Nick Woods:

• Simon Wadsley and Nick Woods, PROPs for linear systems.

This makes the picture neater and more general!

You see, Jason and I used signal flow diagrams to give a new description of the category of finite-dimensional vector spaces and linear maps. This category plays a big role in the control theory of linear systems. Bonchi, Sobociński and Zanasi gave a closely related description of an equivalent category, \mathrm{Mat}(k), where:

• objects are natural numbers, and

• a morphism f : m \to n is an n \times m matrix with entries in the field k,

and composition is given by matrix multiplication.

But Wadsley and Woods generalized all this work to cover \mathrm{Mat}(R) whenever R is a commutative rig. A rig is a ‘ring without negatives’—like the natural numbers. We can multiply matrices valued in any rig, and this includes some very useful examples… as I’ll explain later.
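As a concrete illustration (my own sketch, not taken from any of the papers above), here is composition in \mathrm{Mat}(R) for a commutative rig handed to us as a tuple (zero, one, add, mul). Over the booleans this is exactly composition of relations between finite sets, which is the \mathrm{FinRel} example that shows up below.

def compose(rig, g, f):
    # Matrix product g.f over a commutative rig; f represents a morphism
    # m -> n (an n x m matrix), g a morphism n -> p (a p x n matrix).
    zero, one, add, mul = rig
    p, n, m = len(g), len(f), len(f[0])
    h = [[zero] * m for _ in range(p)]
    for i in range(p):
        for j in range(m):
            acc = zero
            for k in range(n):
                acc = add(acc, mul(g[i][k], f[k][j]))
            h[i][j] = acc
    return h

booleans = (False, True, lambda a, b: a or b, lambda a, b: a and b)
naturals = (0, 1, lambda a, b: a + b, lambda a, b: a * b)   # same code over N gives Mat(N), i.e. FinSpan

# Two boolean matrices, i.e. relations: f is a morphism 3 -> 2, g is 2 -> 3.
f = [[True, False, True],
     [False, True, False]]
g = [[True, True],
     [False, True],
     [False, False]]
print(compose(booleans, g, f))   # the composite relation 3 -> 3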

Wadsley and Woods proved:

Theorem. Whenever R is a commutative rig, \mathrm{Mat}(R) is the PROP for bicommutative bimonoids over R.

This result is quick to state, but it takes a bit of explaining! So, let me start by bringing in some definitions.

Bicommutative bimonoids

We will work in any symmetric monoidal category, and draw morphisms as string diagrams.

A commutative monoid is an object equipped with a multiplication:

and a unit:

obeying these laws:

For example, suppose \mathrm{FinVect}_k is the symmetric monoidal category of finite-dimensional vector spaces over a field k, with direct sum as its tensor product. Then any object V \in \mathrm{FinVect}_k is a commutative monoid where the multiplication is addition:

(x,y) \mapsto x + y

and the unit is zero: that is, the unique map from the zero-dimensional vector space to V.

Turning all this upside down, a cocommutative comonoid has a comultiplication:

and a counit:

obeying these laws:

For example, consider our vector space V \in \mathrm{FinVect}_k again. It’s a cocommutative comonoid where the comultiplication is duplication:

x \mapsto (x,x)

and the counit is deletion: that is, the unique map from V to the zero-dimensional vector space.

Given an object that’s both a commutative monoid and a cocommutative comonoid, we say it’s a bicommutative bimonoid if these extra axioms hold:

You can check that these are true for our running example of a finite-dimensional vector space V. The most exciting one is the top one, which says that adding two vectors and then duplicating the result is the same as duplicating each one, then adding them appropriately.
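In symbols (my own rendering of that law, writing \sigma for the symmetry that swaps the middle two tensor factors), the axiom reads

\Delta \circ \mu = (\mu \otimes \mu) \circ (\mathrm{id} \otimes \sigma \otimes \mathrm{id}) \circ (\Delta \otimes \Delta)

and in the vector space example both sides send (x,y) to (x+y, x+y).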

Our example has some other properties, too! Each element c \in k defines a morphism from V to itself, namely scalar multiplication by c:

x \mapsto c x

We draw this as follows:

These morphisms are compatible with the ones so far:

Moreover, all the ‘rig operations’ in k—that is, addition, multiplication, 0 and 1, but not subtraction or division—can be recovered from what we have so far:

We summarize this by saying our vector space V is a bicommutative bimonoid ‘over k’.

More generally, suppose we have a bicommutative bimonoid A in a symmetric monoidal category. Let \mathrm{End}(A) be the set of bicommutative bimonoid homomorphisms from A to itself. This is actually a rig: there’s a way to add these homomorphisms, and also a way to ‘multiply’ them (namely, compose them).
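Spelled out (my reading of the intended structure, using the multiplication \mu, comultiplication \Delta, unit \eta and counit \epsilon of A), the rig operations on \mathrm{End}(A) are

f + g := \mu \circ (f \otimes g) \circ \Delta, \qquad f \cdot g := f \circ g, \qquad 0 := \eta \circ \epsilon, \qquad 1 := \mathrm{id}_A

For our vector space V this is just pointwise addition and composition of linear maps.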

Suppose R is any commutative rig. Then we say A is a bicommutative bimonoid over R if it’s equipped with a rig homomorphism

\Phi : R \to \mathrm{End}(A)

This is a way of summarizing the diagrams I just showed you! You see, each c \in R gives a morphism from A to itself, which we write as

The fact that this is a bicommutative bimonoid endomorphism says precisely this:

And the fact that \Phi is a rig homomorphism says precisely this:

So sometimes the right word is worth a dozen pictures!

What Jason and I showed is that for any field k, \mathrm{FinVect}_k is the free symmetric monoidal category on a bicommutative bimonoid over k. This means that the above rules, which are rules for manipulating signal flow diagrams, completely characterize the world of linear algebra!

Bonchi, Sobociński and Zanasi used ‘PROPs’ to prove a similar result where the field is replaced by a sufficiently nice commutative ring. And Wadsley and Woods used PROPs to generalize even further to the case of an arbitrary commutative rig!

But what are PROPs?

PROPs

A PROP is a particularly tractable sort of symmetric monoidal category: a strict symmetric monoidal category where the objects are natural numbers and the tensor product of objects is given by ordinary addition. The symmetric monoidal category \mathrm{FinVect}_k is equivalent to the PROP \mathrm{Mat}(k), where a morphism f : m \to n is an n \times m matrix with entries in k, composition of morphisms is given by matrix multiplication, and the tensor product of morphisms is the direct sum of matrices.

We can define a similar PROP \mathrm{Mat}(R) whenever R is a commutative rig, and Wadsley and Woods gave an elegant description of the ‘algebras’ of \mathrm{Mat}(R). Suppose C is a PROP and D is a strict symmetric monoidal category. Then the category of algebras of C in D is the category of strict symmetric monoidal functors F : C \to D and natural transformations between these.

If for every choice of D the category of algebras of C in D is equivalent to the category of algebraic structures of some kind in D, we say C is the PROP for structures of that kind. This explains the theorem Wadsley and Woods proved:

Theorem. Whenever R is a commutative rig, \mathrm{Mat}(R) is the PROP for bicommutative bimonoids over R.

The fact that an algebra of \mathrm{Mat}(R) is a bicommutative bimonoid is equivalent to all this stuff:

The fact that \Phi(c) is a bimonoid homomorphism for all c \in R is equivalent to this stuff:

And the fact that \Phi is a rig homomorphism is equivalent to this stuff:

This is a great result because it includes some nice new examples.

First, the commutative rig of natural numbers gives a PROP \mathrm{Mat}(\mathbb{N}). This is equivalent to the symmetric monoidal category \mathrm{FinSpan}, where morphisms are isomorphism classes of spans of finite sets, with disjoint union as the tensor product. Steve Lack had already shown that \mathrm{FinSpan} is the PROP for bicommutative bimonoids. But this also follows from the result of Wadsley and Woods, since every bicommutative bimonoid V is automatically equipped with a unique rig homomorphism

\Phi : \mathbb{N} \to \mathrm{End}(V)

Second, the commutative rig of booleans

\mathbb{B} = \{F,T\}

with ‘or’ as addition and ‘and’ as multiplication gives a PROP \mathrm{Mat}(\mathbb{B}). This is equivalent to the symmetric monoidal category \mathrm{FinRel} where morphisms are relations between finite sets, with disjoint union as the tensor product. Samuel Mimram had already shown that this is the PROP for special bicommutative bimonoids, meaning those where comultiplication followed by multiplication is the identity:

But again, this follows from the general result of Wadsley and Woods!

Finally, taking the commutative ring of integers \mathbb{Z}, Wadsley and Woods showed that \mathrm{Mat}(\mathbb{Z}) is the PROP for bicommutative Hopf monoids. The key here is that scalar multiplication by -1 obeys the axioms for an antipode—the extra morphism that makes a bimonoid into a Hopf monoid. Here are those axioms:
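In symbols (my own rendering), the antipode axioms say

\mu \circ (S \otimes \mathrm{id}) \circ \Delta = \eta \circ \epsilon = \mu \circ (\mathrm{id} \otimes S) \circ \Delta

For the vector space V with S(x) = -x, both composites send x to -x + x = 0, which is exactly what the unit applied to the counit gives.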

More generally, whenever R is a commutative ring, the presence of -1 \in R guarantees that a bimonoid over R is automatically a Hopf monoid over R. So, when R is a commutative ring, Wadsley and Woods’ result implies that \mathrm{Mat}(R) is the PROP for Hopf monoids over R.

Earlier, in their paper on ‘interacting Hopf algebras’, Bonchi, Sobociński and Zanasi had given an elegant and very different proof that \mathrm{Mat}(R) is the PROP for Hopf monoids over R whenever R is a principal ideal domain. The advantage of their argument is that they build up the PROP for Hopf monoids over R from smaller pieces, using some ideas developed by Steve Lack. But the new argument by Wadsley and Woods has its own charm.

In short, we’re getting the diagrammatics of linear algebra worked out very nicely, providing a solid mathematical foundation for signal flow diagrams in control theory!


by John Baez at May 18, 2015 02:31 AM

May 15, 2015

John Baez - Azimuth

Carbon Emissions Stopped Growing?

In 2014, global carbon dioxide emissions from energy production stopped growing!

At least, that’s what preliminary data from the International Energy Agency say. It seems the big difference is China. The Chinese made more electricity from renewable sources, such as hydropower, solar and wind, and burned less coal.

In fact, a report by Greenpeace says that from April 2014 to April 2015, China’s carbon emissions dropped by an amount equal to the entire carbon emissions of the United Kingdom!

I want to check this, because it would be wonderful if true: a 5% drop. They say that if this trend continues, China will close out 2015 with the biggest reduction in CO2 emissions ever recorded by a single country.

The International Energy Agency also credits Europe’s improved attempts to cut carbon emissions for the turnaround. In the US, carbon emissions have basically been dropping since 2006—with a big drop in 2009 due to the economic collapse, a partial bounce-back in 2010, but a general downward trend.

In the last 40 years, there have only been 3 times in which emissions stood still or fell compared to the previous year, all during global economic crises: the early 1980’s, 1992, and 2009. In 2014, however, the global economy expanded by 3%.

So, the tide may be turning! But please remember: while carbon emissions may start dropping, they’re still huge. The concentration of CO2 in the air shot above 400 parts per million in March this year. As Erika Podest of NASA put it:

CO2 concentrations haven’t been this high in millions of years. Even more alarming is the rate of increase in the last five decades and the fact that CO2 stays in the atmosphere for hundreds or thousands of years. This milestone is a wake up call that our actions in response to climate change need to match the persistent rise in CO2. Climate change is a threat to life on Earth and we can no longer afford to be spectators.

Here is the announcement by the International Energy Agency:

Global energy-related emissions of carbon dioxide stalled in 2014, IEA, 13 March 2015.

Their full report on this subject will come out on 15 June 2015. Here is the report by Greenpeace EnergyDesk:

China coal use falls: CO2 reduction this year could equal UK total emissions over same period, Greenpeace EnergyDesk.

I trust them less than the IEA when it comes to using statistics correctly, but someone should be able to verify their claims if true.


by John Baez at May 15, 2015 04:46 PM

arXiv blog

Machine-Learning Algorithm Calculates Fair Distance for a Race Between Usain Bolt and Long-Distance Runner Mo Farah

In an entirely new model of athletic performance, three numbers characterize an athlete’s capability over short, middle, and long-distance races

It’s obviously unfair to compare the performance of sprinters and long-distance runners. These endeavors place entirely different demands on the body, which is why good sprinters are entirely unsuited to the demands of marathon running and distance runners perform poorly in sprints.

May 15, 2015 04:00 AM

May 14, 2015

ZapperZ - Physics and Physicists

Quark Gluon Plasma
The quark-gluon plasma (or fluid) that was observed at RHIC several years ago is back in focus in this video by Don Lincoln.



So where do I get that t-shirt that he was wearing? :)

Zz.

by ZapperZ (noreply@blogger.com) at May 14, 2015 01:20 PM

Symmetrybreaking - Fermilab/SLAC

The accelerator in the Louvre

The Accélérateur Grand Louvre d’analyse élémentaire solves ancient mysteries with powerful particle beams.

In a basement 15 meters below the towering glass pyramid of the Louvre Museum in Paris sits a piece of work the curators have no plans to display: the museum’s particle accelerator.

This isn’t a Dan Brown novel. The Accélérateur Grand Louvre d’analyse élémentaire is real and has been a part of the museum since 1988.

Researchers use AGLAE’s beams of protons and alpha particles to find out what artifacts are made of and to verify their authenticity. The amounts and combinations of elements an object contains can serve as a fingerprint hinting at where minerals were mined and when an item was made.

Scientists have used AGLAE to check whether a saber scabbard gifted to Napoleon Bonaparte by the French government was actually cast in solid gold (it was) and to identify the minerals in the hauntingly lifelike eyes of a 4500-year-old Egyptian sculpture known as The Seated Scribe (black rock crystal and white magnesium carbonate veined with thin red lines of iron oxide).

“What makes the AGLAE facility unique is that our activities are 100 percent dedicated to cultural heritage,” says Claire Pacheco, who leads the team that operates the machine. It is the only particle accelerator that has been used solely for this field of research.

Pacheco began working with ion-beam analysis at AGLAE while pursuing a doctorate degree in ancient materials at France’s University of Bordeaux. She took over as its lead scientist in 2011 and now operates the particle accelerator with a team of three engineers.

Jean-Claude Dran, a scientist who worked with AGLAE during its early days and served for several years as a scientific advisor, says the study methods pioneered for AGLAE are uniquely suited to art and archaeological artifacts. “These techniques are very powerful, very accurate and very sensitive to trace elements.”

Photo by: V. Fournier, C2RMF

Crucially, they are also non-destructive in most cases, Pacheco says.

“Of course, AGLAE is non-invasive, which is priority No. 1 for cultural heritage,” she says. The techniques used at AGLAE include particle-induced X-ray and gamma-ray emission spectrometries, which can identify the slightest traces of elements ranging from lithium to uranium.

Before AGLAE, research facilities typically required samples to be placed in a potentially damaging vacuum for similar materials analysis. Researchers hoping to study pieces too large for a vacuum chamber were out of luck. AGLAE, because its beams work outside the vacuum, allows researchers to study objects of any size and shape.

The physicists and engineers who conduct AGLAE experiments typically work hand-in-hand with curators and art historians.

While AGLAE frequently studies items from the local collection, it has a larger mission to study art and relics from museums all around France. It is also available to outside researchers, who have used it on pieces from museums such as the J. Paul Getty Museum in Los Angeles and the Metropolitan Museum of Art in New York.

AGLAE has been used to study glasses, metals and ceramics. In one case, Pacheco’s team wanted to know the origins of pieces of lusterware, a type of ceramic that takes on a metallic shine when kiln-fired. The technique emerged in ninth-century Mesopotamia and was spread all around the Mediterranean during the Muslim conquests. It had mostly faded by the 17th century, but some potters in Spain still carry on the tradition.

Pacheco’s team used AGLAE to pinpoint the elements in the lusterware, and then they mixed up batches of raw materials from different locations. “What we have tried to do is make a kind of ‘identity card’ for every production center at every period in time,” Pacheco says.

Another recently published study details how AGLAE was also used to analyze the chemical signature of traces of decorative paint on ivory tusks. Pacheco’s team determined that the tusks were likely painted during the seventh century B.C.

A limitation of the AGLAE particle analysis techniques is that they are not very effective for studying paintings because of a slight risk of damage. But Pacheco says that an upgrade now in progress aims to produce a lower-power beam that, coupled with more sensitive detectors, could solve this problem.

Dubbed NEW AGLAE, the upgraded setup could boost automation to allow the accelerator to operate around the clock—it now operates only during the day.

While public tours of AGLAE are not permitted, Pacheco says there are frequent visits by researchers working in cultural heritage.

“It’s so marvelous,” she says. “We are very, very lucky to work in this environment, to study these objects.”

 


by Glenn Roberts Jr. and Kelen Tuttle at May 14, 2015 01:00 PM

Tommaso Dorigo - Scientificblogging

Burton Richter Advocates Electron-Positron Colliders, For A Change
Burton Richter, winner of the 1975 Nobel Prize in Physics for the discovery of the J/ψ meson, speaks about the need for a new linear collider to measure Higgs boson branching fractions in a video on Facebook (as soon as I figure out how to embed it here, I will!)

Richter has been a fervent advocate of electron-positron machines over hadronic accelerators throughout his life. So you really could not expect anything different from him - but he still does it with all his might. At one point he says, talking of the hadron collider scientists who discovered the Higgs boson:


by Tommaso Dorigo at May 14, 2015 12:06 PM

May 13, 2015

Symmetrybreaking - Fermilab/SLAC

LHC experiments first to observe rare process

A joint result from the CMS and LHCb experiments precludes or limits several theories of new particles or forces.

Two experiments at the Large Hadron Collider at CERN have combined their results and observed a previously unseen subatomic process.

As published in the journal Nature this week, a joint analysis by the CMS and LHCb collaborations has established a new and extremely rare decay of the Bs particle—a heavy composite particle consisting of a bottom antiquark and a strange quark—into two muons. Theorists had predicted that this decay would only occur about four times out of a billion, and that is roughly what the two experiments observed.

“It’s amazing that this theoretical prediction is so accurate and even more amazing that we can actually observe it at all,” says Syracuse University Professor Sheldon Stone, a member of the LHCb collaboration. “This is a great triumph for the LHC and both experiments.”

LHCb and CMS both study the properties of particles to search for cracks in the Standard Model, our best description so far of the behavior of all directly observable matter in the universe. The Standard Model is known to be incomplete since it does not address issues such as the presence of dark matter or the abundance of matter over antimatter in our universe. Any deviations from this model could be evidence of new physics at play, such as new particles or forces that could provide answers to these mysteries.

“Many theories that propose to extend the Standard Model also predict an increase in this Bs decay rate,” says Fermilab’s Joel Butler of the CMS experiment. “This new result allows us to discount or severely limit the parameters of most of these theories. Any viable theory must predict a change small enough to be accommodated by the remaining uncertainty.”

Courtesy of: LHCb collaboration

Researchers at the LHC are particularly interested in particles containing bottom quarks because they are easy to detect, abundantly produced and have a relatively long lifespan, according to Stone.

“We also know that Bs mesons oscillate between their matter and their antimatter counterparts, a process first discovered at Fermilab in 2006,” Stone says. “Studying the properties of B mesons will help us understand the imbalance of matter and antimatter in the universe.”

That imbalance is a mystery scientists are working to unravel. The big bang that created the universe should have resulted in equal amounts of matter and antimatter, annihilating each other on contact. But matter prevails, and scientists have not yet discovered the mechanism that made that possible.

“The LHC will soon begin a new run at higher energy and intensity,” Butler says. “The precision with which this decay is measured will improve, further limiting the viable Standard Model extensions. And of course, we always hope to see the new physics directly in the form of new particles or forces.”

Courtesy of: CMS collaboration


Fermilab published a version of this article as a press release.

 


May 13, 2015 01:00 PM

Jaques Distler - Musings

Action-Angle Variables

This semester, I taught the Graduate Mechanics course. As is often the case, teaching a subject leads you to rethink what you thought you understood, sometimes with surprising results.

The subject for today’s homily is Action-Angle variables.

Let $(\mathcal{M},\omega)$ be a $2n$-dimensional symplectic manifold. Let us posit that $\mathcal{M}$ has a foliation by $n$-dimensional Lagrangian tori (a torus, $T\subset \mathcal{M}$, is Lagrangian if $\omega|_T = 0$). Removing a subset, $S\subset \mathcal{M}$, of codimension $\mathrm{codim}(S)\geq 2$, where the leaves are singular, we can assume that all of the leaves on $\mathcal{M}'=\mathcal{M}\backslash S$ are smooth tori of dimension $n$.

The objective is to construct coordinates $\varphi^i, K_i$ with the following properties.

  1. The $\varphi^i$ restrict to angular coordinates on the tori. In particular $\varphi^i$ shifts by $2\pi$ when you go around the corresponding cycle on $T$.
  2. The $K_i$ are globally-defined functions on $\mathcal{M}$ which are constant on each torus.
  3. The symplectic form $\omega = dK_i\wedge d\varphi^i$.

From 1, it’s clear that it’s more convenient to work with the 1-forms $d\varphi^i$, which are single-valued (and closed, but not necessarily exact), rather than with the $\varphi^i$ themselves. In 2, it’s rather important that the $K_i$ are really globally-defined. In particular, an integrable Hamiltonian is a function $H(K)$. The $K_i$ are the $n$ conserved quantities which make the Hamiltonian integrable.

Obviously, a given foliation is compatible with infinitely many “integrable Hamiltonians,” so the existence of a foliation is the more fundamental concept.

All of this is totally standard.

What never really occurred to me is that the standard construction of action-angle variables turns out to be very closely wedded to the particular case of a cotangent bundle, $\mathcal{M}=T^*M$.

As far as I can tell, action-angle variables don’t even exist for foliations of more general symplectic manifolds, $\mathcal{M}$.

Any cotangent bundle, $T^*M$, has a canonical 1-form, $\theta$, on it. The standard symplectic structure is $\omega = d\theta$. The construction of the action variables requires that we choose a homology basis, $\gamma_i$, for each torus, in a fashion that is locally-constant¹ as we move between tori of the foliation. The $K_i$ are then defined as
$$K_i = \frac{1}{2\pi}\int_{\gamma_i} \theta \qquad (1)$$
Note that, because the torus is Lagrangian, the values of $K_i$ are independent of the particular choices of path chosen to represent $\gamma_i$. Having constructed the $K_i$, the corresponding closed 1-forms are
$$d\varphi^i = i_{\partial/\partial K_i}\,\omega. \qquad (2)$$
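To see the construction in action in the most familiar case (a standard textbook example, not specific to this post): on $\mathcal{M}=T^*\mathbb{R}$ with $\theta = p\,dq$ and Hamiltonian $H=\tfrac{1}{2}(p^2+\omega_0^2 q^2)$, writing $\omega_0$ for the frequency to avoid a clash with the symplectic form, the leaves away from the origin are the ellipses $H=E$, and (1) and (2) give
$$K = \frac{1}{2\pi}\oint_{H=E} p\,dq = \frac{E}{\omega_0}, \qquad \varphi = \arctan\!\left(\frac{\omega_0 q}{p}\right), \qquad \omega = dp\wedge dq = dK\wedge d\varphi,$$
so that $H=\omega_0 K$ and $\dot\varphi = \partial H/\partial K = \omega_0$.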
Great! Except that, for a general symplectic manifold, there’s no analogue of $\theta$. In particular, it’s trivial to construct examples of symplectic manifolds, foliated by Lagrangian tori, for which no choice of action variables, $K_i$, exists. As a simple example, take
$$(\mathcal{M},\omega) = \bigl(T^4,\; d\theta_1\wedge d\theta_3 + d\theta_2\wedge d\theta_4\bigr).$$
Obviously, we can foliate this by Lagrangian tori (taking $T$ to be the subsets $\{\theta_1,\theta_2=\text{const}\}$). But the corresponding action variables don’t exist. We’d happily choose $K_i=\theta_i$, for $i=1,2$, but those aren’t single-valued functions on $\mathcal{M}$. You could try to use functions that are actually single-valued (e.g., $K_i=\sin(\theta_i)$), but then the corresponding 1-forms, $\eta^i$, in $\omega = dK_i\wedge\eta^i$, don’t have $2\pi\times$ integral periods (heck, they’re not even closed!).

Surely, there’s some sort of cohomological characterization of when Action-Angle variables exist. The situation feels a lot like the characterization of when symplectomorphisms (vector fields that preserve the symplectic form) are actually Hamiltonian vector fields².

And, even when the obstruction vanishes, how do we generalize the construction (1), (2) to more general symplectic manifolds?

Update:

Just to be clear, there are plenty of examples where you can construct action-angle variables for foliations of symplectic manifolds which are not cotangent bundles. An easy example is
$$(\mathcal{M},\omega) = \left(S^2,\ \frac{r\, dr\wedge d\theta}{(1+r^2)^2}\right)$$
where $(K,\varphi)= \left(-\frac{1}{2(1+r^2)},\,\theta\right)$ are action-angle variables for the obvious foliation by circles. This example “works” because once you remove the singular leaves (at $r=0,\infty$), $\omega$ becomes cohomologically trivial on $\mathcal{M}'$ and we can then use the standard construction. $[\omega]=0\in H^2(\mathcal{M}\backslash S,\mathbb{R})$ sounds like a sufficient condition for constructing action-angle variables. But is it necessary?

¹ I’m pretty sure we need them to be globally-constant over $\mathcal{M}'$. I’ll assume there’s no obstruction to doing that.

² If you’re not familiar with that story, note that $\mathcal{L}_X \omega = 0$ is tantamount to the condition that $i_X\omega$ is a closed 1-form. If it happens that it is an exact 1-form, $i_X\omega = df$, then $X = \{f,\cdot\}$ is a Hamiltonian vector field. The obstruction to writing $X$ as a Hamiltonian vector field is, thus, the de Rham cohomology class, $[i_X\omega]\in H^1(\mathcal{M},\mathbb{R})$.

In the example at hand, that’s exactly what is going on. Any single-valued function, $H(\theta_1,\theta_2)$, is an “integrable” Hamiltonian for the above foliation. But the symmetries, $X_1=\frac{\partial}{\partial\theta_3}$ and $X_2=\frac{\partial}{\partial\theta_4}$, are not Hamiltonian vector fields. Hence, there are no corresponding conserved action variables.

by distler (distler@golem.ph.utexas.edu) at May 13, 2015 12:38 AM

May 12, 2015

Symmetrybreaking - Fermilab/SLAC

High adventure physics

Three groups of hardy scientists recently met up in Antarctica to launch experiments into the big blue via balloon.

UC Berkeley grad student Carolyn Kierans recently watched her 5000-pound astrophysics experiment ascend 110,000 feet over Antarctica on the end of a helium-filled balloon the size of a football field.

She had been up since 3 a.m. with the team that prepped and transported the telescope known as COSI—Compton Spectrometer and Imager—across the ice shelf on an oversized vehicle called “The Boss.” They waited hours at the launch site in a thick fog for the winds to die down before getting the go-ahead to fill the balloon.

Then the sky opened up, and they were cleared for launch.

“I was with the crew at the launch pad, in the middle of nowhere, when the clouds disappeared and I could finally see the balloon hundreds of feet up,” she recalls. “I had to stop and say, ‘Wait, I’m doing my PhD in physics right now?’”

Kierans was among three groups of hardy physicists who met up at Antarctica’s McMurdo Station last fall to fly their curious-looking instruments during NASA’s most recent Antarctic Scientific Balloon Campaign.

Fully assembled and flight ready, COSI gets some final adjustments from Carolyn Kierans during testing.

Photo by: Laura Gerwin

For Antarctica’s three summer months, December through February, conditions are right to conduct studies in the upper atmosphere via scientific balloon. The sun never sets during those months, so the balloons are spared nighttime temperatures that would cause significant changes in altitude. And seasonal wind patterns take the balloons on a circular route almost entirely over land.

To allow the balloons enough time to collect data and safely land before conditions change, all launches must take place within a few weeks in December. Near the end of 2014, three teams of physicists arrived at the end of the Earth to try to launch, one after the other, within that small window.

Each team was driven by a different scientific pursuit: COSI set out to capture images of gamma rays for clues to the life and death of stars; ANITA (Antarctic Impulsive Transient Antenna) sought rare signs of ultra-high-energy neutrinos; and SPIDER was probing the cosmic microwave background for evidence of cosmic inflation.

Months of intense preparation, naps on the floor of a barn, competition for launch times during narrow windows of opportunity, and numerous aborted attempts did not dampen spirits. The teams shared meals, supplies, hikes and live music jams with locals at one of two town bars—united by the common pursuit of physics on high.

“The community was like a gigantic family with the same goal of getting those balloons up,” Kierans says.

None could be sure of a successful launch. Nor could they know exactly when or where their balloon would land once it took flight or how they would navigate the icy landscape to retrieve their precious data.

‘The crinkling of Mylar’

Balloon-based physics experiments take many months of preparation. The teams first met up during the summer at the Columbia Scientific Balloon Facility in Palestine, Texas, where they assembled payloads and tested science and flight systems. Then they disassembled their experiments, shipped them in boxes and put them back together at McMurdo starting in October to be launch-ready by early December. Each group had 10 to 20 team members on the continent during peak work efforts.

“We had about eight weeks to get everything back together and perform all the calibrations—it’s an exhausting and stressful period—and a very long time to be away from family,” recalls William Jones, assistant professor of physics at Princeton University and SPIDER lead.

A successful launch depends on the optimal functioning of gear and instruments—and the cooperation of the weather.

First in line was the ANITA experiment. ANITA hunts for the highest energy particles ever observed. Scientists have known about ultra-high-energy neutrinos since the 1960s, but they still don’t know exactly where they come from or how they get their energy. 

“Nothing on Earth can produce such particles right now,” says Harm Schoorlemmer, a postdoctoral fellow at the University of Hawaii from the ANITA team. “They are five to seven orders of magnitude higher in energy than particles we can accelerate in machines like the LHC at CERN.”

Neutrinos travel through the universe barely interacting with anything—until they hit the dense Earth. ANITA’s 48 antennas on a 25-foot-tall gondola fly pointed down to capture radio waves in the Antarctic ice—signs of ultra-high-energy neutrino reactions.

“The ice sheet has the advantage that it is transparent for radio waves,” says Christian Miki, University of Hawaii staff scientist and ANITA on-ice lead. “By flying high—about 120,000 feet up—ANITA can capture a diameter of 600 kilometers all at once.”

Numerous ANITA launch attempts were scrubbed due to weather. It took several hours from hangar to launch at the Long Duration Balloon Facility, and Antarctic weather is known for radical shifts within the hour, Miki says.

ANITA hangs from The Boss on its way to the launch pad.

Photo by: Harm Schoorlemmer, ANITA

The day before the actual launch, the payload had been brought out of the hangar and checks were being performed when the team noticed an Emperor penguin hanging out on the edge of the launch pad. “We thought this was either good luck—getting a blessing from the Antarctic gods—or bad luck as penguins are flightless birds,” Miki recalls.

Apparently graced, the ANITA team rolled out on December 18 for the real deal. The 4944-pound experiment was loaded onto The Boss and taken to the launch site. Hours passed as they waited for optimal conditions; all the instruments were checked and double-checked. Finally, they got the go-ahead from NASA.

“It’s hard to grasp the scales involved,” Schoorlemmer says. “The balloon is 800 to 900 feet above The Boss before the line is cut—buildings are about 35 to 40 feet tall. It takes one and a half hours to fill the balloon with helium, and then everything goes quiet. All we could hear is the crinkling of the Mylar and people going ‘Ooh, ooh.’”

Hunting gamma rays

Next up was COSI, a wide-field gamma-ray telescope that studies radiation blasted toward Earth by the most energetic or extreme environments in the universe, such as gamma-ray bursts, pulsars and nuclear decay from supernova remnants. Because gamma rays don’t make it through the Earth’s atmosphere, the telescope must rise above it. Pointed out to space, it can survey 25 percent of the sky at one time for sources of gamma-ray emissions and help detect where these high-energy photons come from. Researchers hope to use its images to learn more about the life and death of stars or the mysterious source of positrons in our galaxy.

Testing gamma ray telescopes like COSI on balloons can help scientists develop technologies that can eventually be used on satellites. The recent COSI launch was the first to use a new ultra-long-duration balloon design in hopes of getting 100 days worth of data.

COSI was launch-ready at the same time as ANITA but waited for it to go up before preparing to do the same. They also experienced several attempts called off due to weather.

COSI's super pressure balloon is finally released from the spool and takes flight.

Photo by: Jeffrey Filippini, SPIDER

“For nine days in a row, we showed up and did all the prep work,” only to abandon the efforts, Kierans says. On one attempt they got as far as laying out the balloon, which was theoretically the point of no return, before the weather turned against them. They somehow managed to put the 1.5-millimeter-thick, 5000-pound balloon back into the box. “It took 10 riggers over an hour of strenuous, delicate work” to put it back, Kierans wrote on her blog.

Finally, on December 27 the silvery white balloon was filled with helium and cut loose, taking COSI up to the dark space above the Earth’s atmosphere.

Jubilation at the successful launch did not last long. Just 40 hours later, a leak in the balloon forced the team to bring it back down. “It will be tough to get science data out of that short flight,” Kierans says. “But we will learn a lot. We made the decision to bring it down where we could get everything back and rebuild.”

COSI was fully recovered by Kierans, who made three trips by twin otter plane to the Polar Plateau just over the Transantarctic Mountains—known as the “great flat white”—to disassemble and load up the instruments.

Every inch of their flesh was covered to prevent frostbite. “This was not what I signed up for when I started out in physics,” she says. “But don’t get me wrong—I love it!”

Big sky, big bang

Last in line was SPIDER, which uses six telescopes designed to create extremely high-fidelity images of the polarization of the sky at certain wavelengths—or “colors”—of light. Scientists will use the images to search for patterns in the cosmic microwave background, the oldest light ever observed. Such patterns could provide evidence for the period of rapid expansion in the early universe known as cosmic inflation.

Rising 118,000 feet above the Earth, the 6500-pound SPIDER is able to observe over six times more sky than Earth-based CMB experiments like BICEP.

“Large sky coverage is the best way to be able to say whether or not the signal appears the same no matter where you look,” explains Jones, SPIDER lead.

With just days remaining in the launch window after the COSI launch, SPIDER took advantage of a good patch of weather on the last possible day—New Year’s Eve in the US.

SPIDER reflects its first rays of Antarctic sun with its Mylar sun shields after being rolled out of the bay.

Photo by: Zigmund Kermish, SPIDER

The team started out at 4 a.m. with what seemed like perfect weather, but the winds higher up were too fast and the launch was put on hold for about five hours. Eventually the winds died down and SPIDER was back on track to fly.

“The launch, in particular the final few minutes once the balloon filled and released, represents the culmination of over eight years of work. It is a thrill. At the same time it is truly frightening,” Jones says.

Princeton University graduate student Anne Gambrel left this note on the experiment's “SPIDER on the Ice” blog: “Over the next couple of hours, we all huddled around our computers, and as each subsystem came online, working as designed, we all cheered. By 9 p.m., we were at float altitude and nothing had gone seriously wrong. I went home and slept like a rock as others got all of the details sorted and started taking data on the CMB.”

Around and around she goes

During the first 24 hours after their launch, the ANITA team constantly observed and tuned the instruments from the base. “There were six of us rotating in and out of the controls, while others were sleeping in cardboard boxes next to commanders,” Schoorlemmer says.

The balloons are tracked in their circular flight around the continent, watched carefully for the optimal time to call them back to Earth.

“Once the balloon is launched, you only have historical record to guide your intuition about where it will go,” Jones says. “No one really knows.”

ANITA was up in the air for 22 days and 9 hours and was able to collect about twice the data of the experiment’s last polar flight.

The instruments came down near the Australian Antarctic Station on January 9. “The Australians volunteered their services in recovering the instruments. They will go on a vessel up to Hobart and be picked up by the team in spring,” Miki says.

SPIDER flew for about 17 days, generating approximately 85 GB of data each day, mainly from snapshots taken at about 120 images per second.

This map shows SPIDER’s flight path and final resting place.

Courtesy of: John Ruhl, SPIDER

“It’s a daunting analysis task,” Jones says. But his team will eventually combine the data to make an image of the southern hemisphere representing about 10 percent of the full sky.

SPIDER was brought down on January 17, 1500 miles from launch location “before it could go over the water and possibly not come back,” Jones says.

The SPIDER team received assistance from the British Antarctic Survey in recovering the data. “Our experiment weighed roughly 6200 pounds, and we got back about 180,” Jones says. The rest, including the science cameras and most electronics, will remain on the West Antarctic plateau over the southern hemisphere winter.

Other discoveries

Finally arriving in New Zealand post-recovery, a few of the scientists went to the botanical gardens to lie on the grass.

“To be able to walk barefoot in it!” Miki says. “I remember landing at 6 o’clock in the morning, walking out of the airport and actually smelling plants and the rain.”

While the landscape, the science, the instruments, engineering and logistics of such balloon experiments are impressive, the Antarctic researchers were just as taken with the stalwart souls that make them happen.

“The biggest surprise for me was the people,” Kierans says. “The contractors who work at McMurdo devote half the year to be in the harshest of continents, and they are some of the most interesting people I’ve ever met.”

Miki concurs. “You’d be surprised who you might find working as support staff there. There was a lawyer taking a break from law; PhDs driving dozers. Some are just out of college and others are seasoned Antarctic veterans.”

The staff is as friendly as they are professional, Miki says. “They’ll invite ‘beakers’ (what they call scientists) to parties, knitting circles, hikes, etc. With a peak population of over 900 people living in close quarters, getting along is essential.”

Miki also reflected on the strong friendships made: “Maybe it’s the 24 hours of sunlight, living in close proximity, minimal privacy, long work hours, the desolation in which we are all immersed. Maybe it’s just that the ice attracts amazing, brilliant, talented people from around the world.”

For Jones, the commitment such adventure-ready researchers show to their work goes above and beyond.

“We were always supportive, always competitive, sometimes strained, sometimes ecstatic,” he says. “It’s an honor to be able to work with such talented people who are selflessly devoted to learning more about how Nature works at a fundamental level.”

 

by Angela Anderson at May 12, 2015 09:55 PM

May 11, 2015

Life as a Physicist

Really? Is it that different?

An article from the New York Times is making its rounds on various social media circles I’m a member of, “What is the Point of a Professor?” It has lots of sucker-punch quotes, like

But as this unique chapter of life closes and they reflect on campus events, one primary part of higher education will fall low on the ladder of meaningful contacts: the professors.

Or this one:

In one national survey, 61 percent of students said that professors frequently treated them “like a colleague/peer,” while only 8 percent heard frequent “negative feedback about their academic work.” More than half leave the graduation ceremony believing that they are “well prepared” in speaking, writing, critical thinking and decision-making.

Obviously implicit is that they aren’t well prepared! This is from an op-ed bit written by Mark Bauerlein, a professor at Emory. He also authored a book titled (which I have not read):

“The Dumbest Generation: How the Digital Age Stupefies Young Americans and Jeopardizes Our Future (or, Don’t Trust Anyone Under 30).”

You can probably already tell this has pissed me off. :)

This sort of hatchet job of a critique of university students gets it part-right, but, I think, really misses the point. Sorting through the article and trying to pull out a central idea that he wants all professors to adopt, I came away with this quote:

Since the early 2000s, I have made students visit my office every other week with a rough draft of an essay. We appraise and revise the prose, sentence by sentence. I ask for a clearer idea or a better verb; I circle a misplaced modifier and wait as they make the fix.

This one-on-one interaction he stresses as the cure for all the ills he has outlined. Let me just say that if I were to devote this much time to each of my students I’d still be single. In the modern day and age of universities and professors’ lives (and jobs), there just isn’t time! Too many people want a university education, and there just isn’t enough money in the education system to fund this sort of interaction (and it is getting worse at many of the nation’s largest public universities).

But…!!

But, frankly, if I look at my life and my work, it doesn’t seem that bad. I’m constantly mentoring undergraduates and graduate students. He claims that professors who do research don’t want interaction with their students because it detracts from their research… I doubt it is any different in English than it is in Physics – but that interaction is pretty much the only way I can get good undergraduates to start working with me! And I’m far from alone at the University of Washington.

The two views (mine, that I’m doing plenty of mentoring, and his, that there isn’t enough contact) are compatible: student-to-professor ratios are an easy explanation. But that isn’t everything – my students are not the same sort of student I was. This quote really irked me as being rather arrogant:

Naturally, students looked to professors for moral and worldly understanding.

Wow. I don’t think he has met most of my students! By the time they get to me they have a pretty good understanding of how the world works. I can help guide them through quantum mechanics and the philosophical questions it raises, but the internet and their friend groups are much stronger influences than I am for everything else!

His book title also makes me think he has missed everything that the new digital age has to offer. It feels like the constant discussion I have when organizing a conference: should we turn off wifi in the conference room and force everyone to listen to the talks, or leave it on? I see benefits and detriments to both – but you can’t hold back progress. As the younger generations grow up and start attending conferences, turning the wifi off will simply not be an option. They, and forward-thinking conference organizers, will find ways to use it to the attendees’ benefit – the same way I hope it will happen in classrooms. I should say, as a caveat, that I don’t know of anyone who has universally cracked that nut yet!

In short:

  • He is right that large classes can undermine the interaction between students and professors. But the blame does not lie just with the professors, as his article implies.
  • There is a lot of interaction going on nonetheless, taking advantage of electronic communication as well as in-person contact.
  • Undergraduates learn at a university from many sources (e.g. the internet, social groups/media, etc.) in a way they didn’t a generation ago. This is good, not bad.
  • The kids are better than he seems to be giving them credit for. Smile

Edit: I originally saw this post in my fb feed, and my friend Salvatore Rappoccio had a fantastic response. It was private at the time, but now he has made his reply to the article public:

What? I can’t hear you over the four undergrad students I’m sending to Fermilab for the summer or the two undergrads per semester I’ve mentored for three years. If you want to chat you’ll have to take a number behind the 20-ish students per semester I sit down with for philosophical discussions or career advice outside of my office hours. I have, in the last semester, discussed physics, career choices, fatherhood, kerbal space program, and drywalling with a 3-tour vet, a guy working full time as a contractor to put himself through school, an electrician going back to school for engineering, and a student practically in tears that I bothered to tell her that she improved a lot over the semester, just to name the most memorable ones.

So What’s the point of a professor, you ask?

To educate, obviously. And not just in the classroom. Maybe it’s just you who falls into the “useless” category.


by gordonwatts at May 11, 2015 01:11 PM

May 08, 2015

Jester - Resonaances

Weekend plot: minimum BS conjecture
This weekend plot completes my last week's post:

It shows the phase diagram for models of natural electroweak symmetry breaking. These models can be characterized by 2 quantum numbers:

  • B [Baroqueness], describing how complicated the model is relative to the standard model;
  • S [Strangeness], describing the fine-tuning needed to achieve electroweak symmetry breaking with the observed Higgs boson mass. 

To allow for a fair comparison, in all models the cut-off scale is fixed to Λ=10 TeV. The standard model (SM) has, by definition,  B=1, while S≈(Λ/mZ)^2≈10^4.  The principle of naturalness postulates that S should be much smaller, S ≲ 10.  This requires introducing new hypothetical particles and interactions, therefore inevitably increasing B.

The most popular approach to reducing S is to introduce supersymmetry. The minimal supersymmetric standard model (MSSM) does not make fine-tuning better than 10^3 in the bulk of its parameter space. To improve on that, one needs to introduce large A-terms (aMSSM), R-parity breaking interactions (RPV), or an additional scalar (NMSSM). Another way to decrease S is realized in models where the Higgs arises as a composite Goldstone boson of new strong interactions. Unfortunately, in all of those models S cannot be smaller than 10^2 due to phenomenological constraints from colliders. To suppress S even further, one has to resort to so-called neutral naturalness, where the new particles beyond the standard model are not charged under the SU(3) color group. The twin Higgs - the simplest model of neutral naturalness - can achieve S ≈ 10, at the cost of introducing a whole parallel mirror world.

The parametrization proposed here leads to a striking observation. While one can increase B indefinitely (many examples have been proposed in the literature), for a given S there seems to be a minimum value of B below which no models exist. In fact, the conjecture is that the product B*S is bounded from below:
BS ≳ 10^4. 
One robust prediction of the minimum BS conjecture is the existence of a very complicated (B=10^4) yet to be discovered model with no fine-tuning at all.  The take-home message is that one should always try to minimize BS, even if for fundamental reasons it cannot be avoided completely ;)
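
For concreteness, here is a minimal Python sketch of the arithmetic behind these numbers: the standard-model fine-tuning S = (Λ/mZ)^2 for Λ = 10 TeV, and the minimum Baroqueness implied by the conjectured bound B*S ≳ 10^4 for a few values of S. The numbers are purely illustrative and go no further than the definitions above.

# Illustrative arithmetic only, using the definitions given in the post.
LAMBDA_GEV = 10_000.0   # cut-off scale Lambda = 10 TeV
MZ_GEV     = 91.19      # Z boson mass in GeV

S_SM = (LAMBDA_GEV / MZ_GEV) ** 2
print(f"S(SM) = (Lambda/mZ)^2 ~ {S_SM:.2e}")   # ~1.2e4, i.e. ~10^4 as quoted

BS_MIN = 1e4   # conjectured lower bound on the product B*S
for S in (1e4, 1e3, 1e2, 10, 1):
    print(f"S = {S:>7g}  ->  minimum B ~ {BS_MIN / S:g}")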

by Jester (noreply@blogger.com) at May 08, 2015 11:37 PM

Quantum Diaries

Anti-Matters: The Latest and Greatest from the Alpha Magnetic Spectrometer

This past month in Geneva a conference took place bringing together the world’s foremost experiments in cosmic ray physics and indirect dark matter detection: “AMS Days at CERN”. I took a break from thesis-writing, grabbed a bag of popcorn, and sat down to watch a couple of the lectures via webcast. There was a stellar lineup, including but not limited to talks from IceCube, the Pierre Auger Observatory, H.E.S.S. and CTA, Fermi-LAT, and CREAM. The Alpha Magnetic Spectrometer (AMS) experiment was, of course, the star of the show. It is the AMS and its latest results that I’d like to focus on now.

But first, I’d like to give a brief introduction to cosmic rays, since that’s what AMS studies.

It turns out that space is not as empty as one might think. The Earth is constantly being bombarded by extremely-high-energy particles from all directions.  These cosmic rays were discovered in the early twentieth century by the Austrian physicist Victor Hess. Hess made several balloon-borne measurements of the Earth’s natural radiation at various altitudes and observed that the incidence of ionizing radiation actually increased with ascent, the exact opposite of what you would expect if all radioactivity came from the earth.

Fig. 1: An artist’s rendition of cosmic rays. Image from http://apod.nasa.gov/apod/ap060814.html.

The word “ray” is actually something of a misnomer – cosmic rays are primarily charged matter particles rather than electromagnetic radiation. Their makeup goes as follows: approximately 98% are nuclei, of which 90% are protons, 9% are alpha particles (helium nuclei), and only a small proportion are heavier nuclei; the remaining 2% or so are electrons and positrons. Only very small trace amounts of antimatter are present (less than one ten-thousandth the number of protons), and all of it consists of positrons and antiprotons – not a single antihelium or heavier anti-nucleus has been discovered. There are two types of cosmic rays: primary rays, which come directly from extrasolar sources, and secondary rays, which come from primary rays crashing into the interstellar medium and forming new particles through processes such as nuclear spallation. Particles resulting from cosmic ray collisions with the Earth’s atmosphere are also considered secondary cosmic rays – these include pions, kaons, and muons, and their decay products.

Fig. 2: Cosmic ray flux vs. particle energy. Image from http://science.nasa.gov/science-news/science-at-nasa/2001/ast15jan_1/

Despite being discovered over a hundred years ago, cosmic rays remain in a lot of ways a big mystery. For one thing, we don’t know exactly where they come from. Because cosmic rays are generally electrically charged, they don’t travel to us straight from the source. Rather, they are deflected this way and that by magnetic fields in space, so that when they finally reach us they could be coming from any direction at all. Indeed, the cosmic ray flux that we see is completely isotropic, or the same in all directions.
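
To see why the arrival directions get so scrambled, here is a quick back-of-the-envelope Python sketch (my own illustration, not from the article) comparing the gyroradius of a cosmic-ray proton in the Galactic magnetic field with the kiloparsec-scale distances to candidate sources. The ~3 microgauss field strength is an assumed typical value.

# Gyroradius of an ultra-relativistic proton: r = p/(qB) ~ E/(q*B*c).
E_eV   = 1e15          # proton energy in eV (illustrative choice)
q      = 1.602e-19     # elementary charge [C]
B      = 3e-10         # ~3 microgauss expressed in tesla (assumed typical value)
c      = 3.0e8         # speed of light [m/s]
eV     = 1.602e-19     # 1 eV in joules
parsec = 3.086e16      # metres per parsec

r_gyro = (E_eV * eV) / (q * B * c)
print(f"gyroradius ~ {r_gyro:.2e} m ~ {r_gyro / parsec:.2f} pc")
# ~0.4 pc, versus kiloparsec-scale distances to supernova remnants,
# so the particle's direction is randomized long before it reaches Earth.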

Not only do they not come straight from the source, but we don’t even know what that source is. These particles carry orders of magnitude more energy than particles in our most powerful accelerators on Earth. Astronomers’ best guess is that cosmic rays are accelerated by magnetic shocks from supernovae. But even supernovae aren’t enough to accelerate the highest-energy cosmic rays. Moreover, there are features in the cosmic ray energy spectrum that we just don’t understand (see Fig. 2). Two kinks, a “knee” at about 10^16 eV and an “ankle” at about 10^18 eV, could indicate the turning on or off of some astrophysical process. Experiments like the Pierre Auger Observatory were designed to study these ultra-high-energy particles and will hopefully tell us a little bit more about them in the next few years.
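
To visualize the spectrum shape being described, here is a minimal matplotlib sketch of a broken power law with breaks at the quoted knee and ankle energies. The spectral indices are rough, commonly quoted approximate values that I am assuming for illustration; they are not numbers taken from the article.

# Toy broken power-law cosmic-ray spectrum (arbitrary flux units).
import numpy as np
import matplotlib.pyplot as plt

E = np.logspace(9, 20, 500)          # energy in eV
knee, ankle = 1e16, 1e18             # break energies quoted above
flux = np.where(E < knee, (E / knee) ** -2.7,
       np.where(E < ankle, (E / knee) ** -3.1,
                (ankle / knee) ** -3.1 * (E / ankle) ** -2.7))

plt.loglog(E, flux)
plt.axvline(knee, ls="--", label="knee (~1e16 eV)")
plt.axvline(ankle, ls=":", label="ankle (~1e18 eV)")
plt.xlabel("Energy [eV]")
plt.ylabel("Relative flux [arbitrary units]")
plt.legend()
plt.show()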

The AMS is primarily interested in lower-energy cosmic rays. For four years, ever since its launch up to the International Space Station, it’s been cruising the skies and collecting cosmic rays by the tens of billions. I will not address the experimental design and software here. Instead I refer the reader to one of my previous articles, “Dark Skies II- Indirect Detection and the Quest for the Smoking Gun”.

In addition to precision studies of the composition and flux of cosmic rays, the AMS has three main science goals: (1) Investigating the matter-antimatter asymmetry by searching for primordial antimatter. (2) Searching for dark matter annihilation products amidst the cosmic rays. And (3), looking for strangelets and other exotic forms of matter.

The very small fraction of cosmic rays made up of antimatter is relevant not just for the first goal but for the second as well. Not many processes that we know about can produce positrons and antiprotons, but as I mention in “Dark Skies II”, dark matter annihilations into Standard Model particles could be one of those processes. Any blips or features in the cosmic ray antimatter spectrum could indicate dark matter annihilations at work.

Fig. 3. The positron fraction measured by AMS. Image from L. Accardo et al. (AMS Collaboration), September 2014.

On April 14 at “AMS Days at CERN”, Professor Andrei Kounine of MIT presented the latest results from AMS.

The first part of Kounine’s talk focused on a precise characterization of the positron fraction presented by the AMS collaboration in September 2014 and a discussion of the relevant systematics. In the absence of new physics processes, we expect the positron fraction to be smooth and decreasing with energy. As you can see in Fig. 3, however, the positron fraction starts rising at approximately 8 GeV and increases steadily up to about 250 GeV. The curve hits a maximum at about 275 GeV and then appears to begin to turn over, although at these energies the measurements are limited by statistics and more data is needed to determine exactly what happens beyond this point. Models of dark matter annihilation predict a much steeper drop-off than do models where the positron excess is produced by, say, pulsars. Five possible sources of systematic error were identified, all of which have been heavily investigated. These included a small asymmetry in positron and electron acceptance due to slight differences in some of the bits of the tracker; variations in efficiency with respect to energy of the incoming particle; binning errors, which are mitigated due to high experimental resolution; low statistics at the tails of the electron and positron distributions; and “charge confusion”, or the misidentification of electrons as positrons, which happens only in a very small number of cases.
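
As a toy illustration of the quantity being measured (not the AMS analysis itself), the Python sketch below computes a positron fraction N(e+)/(N(e+)+N(e-)) in a single energy bin from made-up counts, attaches a binomial statistical uncertainty, and shows how a small assumed charge-confusion probability biases the naive fraction and can be undone by inverting the 2x2 mixing. All numbers are invented for illustration.

# Toy numbers only -- not AMS data.
import math

def observed_counts(n_pos, n_ele, c):
    """Apply a symmetric charge-confusion probability c to the true counts."""
    return (1 - c) * n_pos + c * n_ele, (1 - c) * n_ele + c * n_pos

def corrected_fraction(obs_pos, obs_ele, c):
    """Invert the mixing matrix [[1-c, c], [c, 1-c]] and return the fraction."""
    det = 1 - 2 * c
    n_pos = ((1 - c) * obs_pos - c * obs_ele) / det
    n_ele = ((1 - c) * obs_ele - c * obs_pos) / det
    return n_pos / (n_pos + n_ele)

true_pos, true_ele, c = 15_000, 85_000, 0.005   # assumed, illustrative values
obs_pos, obs_ele = observed_counts(true_pos, true_ele, c)

f_naive = obs_pos / (obs_pos + obs_ele)
sigma_f = math.sqrt(f_naive * (1 - f_naive) / (obs_pos + obs_ele))  # binomial error
f_corr  = corrected_fraction(obs_pos, obs_ele, c)

print(f"naive fraction     = {f_naive:.4f} +/- {sigma_f:.4f}")
print(f"corrected fraction = {f_corr:.4f}   (true value 0.1500)")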

Kounine also presented a never-before-seen, not-yet-published AMS measurement of the antiproton-to-proton ratio, which you can see in Fig. 4. This curve represents a total of 290,000 antiprotons selected out of a total of 54 billion events collected by AMS over the past 4 years. Many of the same systematics (acceptance asymmetry, charge confusion, and so on) as in the positron measurement are relevant here. Work on the antiproton analysis is ongoing, however, and according to Kounine it’s too soon to try to match models to the data.

Fig. 4. AMS’s latest antiproton-proton ratio measurement, from Prof. Andrei Kounine’s presentation at “AMS Days at CERN”.

As a dark matter physicist, the question in my mind is, do these measurements represent dark matter annihilations? Professor Subir Sarkar of Oxford and the Niels Bohr Institute in Copenhagen thinks not. In his talk at “AMS Days”, Sarkar argues that the dark matter annihilation cross-section necessary to match the positron flux seen by AMS and other experiments such as Fermi-LAT and PAMELA needs to be so large that by all rights the dark matter in the universe should have all annihilated away already. This is inconsistent with the observed dark matter density in our galaxy. You can get around this with theoretical models that incorporate new kinds of long-range forces. However, the observed antiproton flux, according to Sarkar, is consistent with background. Therefore dark matter would have to be able to annihilate into leptons (electrons and positrons, muons, neutrinos, and so on) but not quarks. Such models exist, but now we’re starting to severely restrict our model space. Moreover, dark matter annihilating in the early universe near the time of recombination should leave visible imprints in the Cosmic Microwave Background (CMB), which have not yet been seen. CMB experiments such as Planck therefore disfavor a dark matter explanation for the observed peak in positron fraction.

Sarkar then goes on to present an alternate model in which secondary cosmic ray particles such as positrons are accelerated by the same mechanisms (magnetic shocks from supernovae, pulsars, and other cosmic accelerators) that accelerate primary cosmic rays. Then, if there are unseen accelerators in our nearby galactic neighborhood (which seems likely, because electrons and positrons can’t propagate very far without losing energy to interactions with starlight and the CMB), the sheer randomness of how those accelerators are distributed around us could by itself produce very large fluctuations in the cosmic ray flux.

Regardless of whether or not the AMS has actually seen a dark matter signal, the data are finally beginning to be precise enough that we can start really pinning down how cosmic ray backgrounds are created and propagated. I encourage you to check out some of the webcasts of “AMS Days at CERN” for yourself. Although the event is over, the webcasts are still available in the CERN document archive here.

by Nicole Larsen at May 08, 2015 03:12 PM

Symmetrybreaking - Fermilab/SLAC

Lonely chairs at CERN

Over the past year, the “Lonely Chairs at CERN” photography blog has let the chairs do the talking.

When CMS physicist Rebeca Gonzalez Suarez of the University of Nebraska, Lincoln, created a Tumblr called “Lonely Chairs at CERN” one year ago, she was not expecting the immediate reaction it garnered. Within days, the blog picked up thousands of followers and had been featured in Gizmodo and The Guardian.

“The response inside CERN was very positive, but the response outside CERN was overwhelming,” she says. “I’ve gotten a lot of followers who are really into science and very excited about CERN. They comment about wanting to work here—sometimes on the ugliest chair I’ve posted.”

The blog showcases an older, perhaps grittier side of the laboratory—one that is very familiar to CERNois but can be somewhat surprising to the rest of the world.

“I like CERN the way it is, and sometimes it is difficult to show what CERN looks like on the inside,” she says. “What makes CERN so unique, and the part I like most, is that it has been here for 60 years and you can tell. That’s a good thing. It helps put you and your work into context. People were working here before you and they were doing the same things that you are doing—maybe even using the same chair.

“Everyone likes to have new things,” she says. “All the new buildings and new elevators are great... but the spirit of CERN is also in the old stuff. New things can be practical and pretty, but they are lacking in history. I like best the character you find in old things.”

As “Lonely Chairs at CERN” nears 20,000 followers, Gonzalez Suarez has no plans to slow down: “I am wondering when people will get tired of chairs, or when I will simply run out. But so far—I still have lots to go.”

As for her own chair? Gonzalez Suarez assures us it is just as bleak: “My chair is really, really old—I have no idea how many physicists have sat there but... a lot.”


A version of this article was originally published by the CERN Bulletin.

 


by Katarina Anthony at May 08, 2015 01:00 PM


Last updated:
May 23, 2015 02:36 PM
All times are UTC.
