Particle Physics Planet


April 18, 2014

Emily Lakdawalla - The Planetary Society Blog

Intro Astronomy 2014. Class 10: Trans Neptunian Objects including Pluto, KBOs, Comets
Explore the worlds beyond Neptune, including Pluto, Kuiper Belt Objects, and comets, in this video from class 10 of Bruce Betts' Introduction to Planetary Science and Astronomy class.

April 18, 2014 11:42 PM

Christian P. Robert - xi'an's og

走ることについて語るときに僕の語ること [book review]

The English title of this 2007 book by Murakami is “What I talk about when I talk about running”, which is a parody of Raymond Carver’s collection of [superb] short stories, “What we talk about when we talk about love”. (Murakami translated the complete œuvre of Raymond Carver into Japanese.) It is a sort of diary about Murakami’s running practice and the reasons why he runs. It definitely is not a novel and the style is quite loose or lazy, but this is not a drawback, as the way the book is written somehow mirrors the way thoughts drift away and suddenly switch topics when one is running. At least during low-intensity practice, when I often realise I have been running for minutes without paying any attention to my route. Or when I cannot recall what I was thinking about for the past few minutes. During races, the concentration is at a different level: first focussing on keeping the right pace, refraining from the deadly rush during the first km, then trying to merge with the right batch of runners, then fighting wind, slope, and eventually fatigue. While the book includes more general autobiographical entries than those related to Murakami’s runner’s life, there are many points most long-distance runners would relate to. From the righteous feeling of sticking to a strict training and diet, to the depression that almost inevitably catches us in the final kms of a race, to the very flimsy balance between under-training and over-training, to the strangely accurate control over one’s pace at the end of a training season, and, for us old runners, to the irremediable decline in one’s performances as the years pass by… On a more personal basis, I also shared the pain of hitting one of the slopes in Central Park and the frustration at the lack of a nice long route along Boston’s Charles River. And shared the special pleasure of running near a river or seafront (which is completely uncorrelated with the fact that it is flat, I believe!). Overall, what I think this book demonstrates is that there is no rational reason to run, which makes the title more than a parody, as fighting weight, age, health problems, depression, &tc. and seeking solitude, quiet, exhaustion, challenge, performances, zen, &tc. are only partial explanations. Maybe the reason stated in the book that I relate to the most is this feeling of having an orderly structure one entirely controls (provided the body does not rebel!) at least once a day. Thus, I am not certain the book appeals to non-runners. And contrary to some reviews of the book, it certainly is not a training manual for novice runners. (Murakami clearly is a strong runner, so some of his training practice could be harmful to weaker runners…)


Filed under: Books, Running Tagged: Boston, Central Park, Charles river, depression, Haruki Murakami, Hawai, Japan, marathon, New York City Marathon, running, training, ultra-marathon

by xi'an at April 18, 2014 10:14 PM

The n-Category Cafe

Elementary Observations on 2-Categorical Limits

Guest post by Christina Vasilakopoulou

In the eighth installment of the Kan Extension Seminar, we discuss the paper “Elementary Observations on 2-Categorical Limits” by G.M. Kelly, published in 1989. Even though Kelly’s classic book Basic Concepts of Enriched Category Theory, which contains the abstract theory related to indexed (or weighted) limits for arbitrary $\mathcal{V}$-categories, had been available since 1982, the existence of the present article is well justified.

On the one hand, it constitutes an independent account of the fundamental case $\mathcal{V}=\mathbf{Cat}$, thus motivating and exemplifying the more general framework through a gentler, yet meaningful, exposition of 2-categorical limits. The explicit construction of specific notable finite limits such as inserters, equifiers etc. promotes the comprehension of the definitions via a hands-on description. Moreover, these finite limits and particular results concerning 2-categories rather than general enriched categories, such as the construction of the cotensor as a PIE limit, are central for the theory of 2-categories. Lastly, by introducing indexed lax and pseudo limits along with Street’s bilimits, and providing appropriate lax/pseudo/bicategorical completeness results, the paper also serves as an indispensable reference for the later “2-Dimensional Monad Theory” by Blackwell, Kelly and Power.

I would like to take this opportunity to thank Emily as well as all the other participants of the Kan Extension Seminar. This has been a unique experience of constant motivation and inspiration for me!

Basic Machinery

Presently, our base of enrichment is the cartesian monoidal closed category $\mathbf{Cat}$ of (small) categories, with the usual adjunction $-\times\mathcal{A}\dashv[\mathcal{A},-]$. The very definition of an indexed limit requires a good command of the basic $\mathbf{Cat}$-categorical notions, as seen for example in “Review of the Elements of 2-categories” by Kelly and Street. In particular, a 2-natural transformation $\alpha:G\Rightarrow H$ between 2-functors consists of components which not only satisfy the usual naturality condition, but also the 2-naturality one expressing compatibility with 2-cells. Moreover, a modification between 2-natural transformations $m:\alpha\Rrightarrow\beta$ has as components families of 2-cells $m_A:\alpha_A\Rightarrow\beta_A:GA\to HA$ compatible with the mapped 1-cells of the domain 2-category, i.e. $m_B\cdot Gf=Hf\cdot m_A$ (where $\cdot$ is whiskering).

A 2-functor $F:\mathcal{K}\to\mathbf{Cat}$ is called representable when there exists a 2-natural isomorphism $$\alpha:\mathcal{K}(K,-)\xrightarrow{\;\sim\;}F.$$ The components of this isomorphism are $\alpha_A:\mathcal{K}(K,A)\cong FA$ in $\mathbf{Cat}$, and the unit of the representation is the corresponding `element’ $\mathbf{1}\to FK$ via Yoneda.

For a general complete symmetric monoidal closed category $\mathcal{V}$, the usual functor category $[\mathcal{A},\mathcal{B}]$ for two $\mathcal{V}$-categories is endowed with the structure of a $\mathcal{V}$-category itself, with hom-objects the ends $$[\mathcal{A},\mathcal{B}](T,S)=\int_{A\in\mathcal{A}}\mathcal{B}(TA,SA)$$ (which exist at least when $\mathcal{A}$ is small). In our context of $\mathcal{V}=\mathbf{Cat}$ it is not necessary to employ ends and coends at all, and the hom-category $[\mathcal{K},\mathcal{L}](G,H)$ of the functor 2-category is evidently the category of 2-natural transformations and modifications. However, we note that computations via (co)ends simplify and are essential for constructions and (co)completeness results for enrichment in general monoidal categories.

The definition of weighted limits for 2-categories

To briefly motivate the definition of a weighted limit, recall that an ordinary limit of a ($\mathbf{Set}$-)functor $G:\mathcal{P}\to\mathcal{C}$ is characterized by an isomorphism $$\mathcal{C}(C,\mathrm{lim}G)\cong[\mathcal{P},\mathcal{C}](\Delta C,G)$$ natural in $C$, where $\Delta C:\mathcal{P}\to\mathcal{C}$ is the constant functor on the object $C$. In other words, the limit is the representing object of the presheaf $$[\mathcal{P},\mathcal{C}](\Delta -,G):\mathcal{C}^{op}\to\mathbf{Set}.$$ Since a natural transformation $\Delta C\Rightarrow G$ (i.e. a cone) can equivalently be viewed as a natural transformation $\Delta\mathbf{1}\Rightarrow\mathcal{C}(C,G-):\mathcal{P}\to\mathbf{Set}$, the above defining isomorphism can be written as $$\mathcal{C}(C,\mathrm{lim}G)\cong[\mathcal{P},\mathbf{Set}](\Delta\mathbf{1},\mathcal{C}(C,G-)).$$ In this form, ordinary limits can easily be seen as particular examples of conical indexed limits for $\mathcal{V}=\mathbf{Set}$, and we are able to generalize the concept of a limit by replacing the functor $\Delta\mathbf{1}$ by an arbitrary functor (weight) $\mathcal{P}\to\mathbf{Set}$.
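
For a concrete instance of this reformulation (my own spelled-out example, not one taken from the paper): take $\mathcal{P}$ to be the parallel-pair category $\cdot\rightrightarrows\cdot$, so that a diagram $G$ is a pair of arrows $f,g:X\to Y$ in $\mathcal{C}$. A natural transformation $\Delta\mathbf{1}\Rightarrow\mathcal{C}(C,G-)$ is precisely a map $c:C\to X$ with $fc=gc$, so $$\mathcal{C}(C,\mathrm{lim}G)\cong[\mathcal{P},\mathbf{Set}](\Delta\mathbf{1},\mathcal{C}(C,G-))\cong\{c\in\mathcal{C}(C,X)\mid fc=gc\},$$ exhibiting the limit weighted by $\Delta\mathbf{1}$ as the ordinary equalizer of $f$ and $g$.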

We may thus think of a 2-functor $F:\mathcal{P}\to\mathbf{Cat}$ as a (small) indexing type or weight, and of a 2-functor $G:\mathcal{P}\to\mathcal{K}$ as a diagram in $\mathcal{K}$ of shape $\mathcal{P}$: $$\begin{matrix} & \mathbf{Cat} & \\ {}^{weight}\nearrow_{F} & & \\ \mathcal{P} & \underset{diagram}{\overset{G}{\longrightarrow}} & \mathcal{K}. \end{matrix}$$ The 2-functor $G$ gives rise to a 2-functor $$\int_p [Fp,\mathcal{K}(-,Gp)]=[\mathcal{P},\mathbf{Cat}](F,\mathcal{K}(-,G)):\;\mathcal{K}^{op}\longrightarrow\mathbf{Cat}$$ which maps a 0-cell $A$ to the category $[\mathcal{P},\mathbf{Cat}](F,\mathcal{K}(A,G-))$. A representation of this contravariant 2-functor is an object $\{F,G\}\in\mathcal{K}$ together with a 2-natural isomorphism $$\mathcal{K}(-,\{F,G\})\xrightarrow{\;\sim\;}[\mathcal{P},\mathbf{Cat}](F,\mathcal{K}(-,G))$$ whose components are isomorphisms of categories $$\mathcal{K}(A,\{F,G\})\cong[\mathcal{P},\mathbf{Cat}](F,\mathcal{K}(A,G-)).$$ The unit of this representation is a functor $\mathbf{1}\to[\mathcal{P},\mathbf{Cat}](F,\mathcal{K}(\{F,G\},G))$, which corresponds uniquely to a 2-natural transformation $\xi:F\Rightarrow\mathcal{K}(\{F,G\},G)$.

Via this 2-natural isomorphism, the object $\{F,G\}$ of $\mathcal{K}$ satisfies a universal property which can be expressed at two levels:

  • The 1-dimensional aspect of the universal property states that every 2-natural transformation $\rho:F\Rightarrow\mathcal{K}(A,G-)$ factorizes as $$\begin{matrix} F\xrightarrow{\;\rho\;} & \mathcal{K}(A,G) \\ {}_{\xi}\searrow & \uparrow_{\mathcal{K}(h,1)} \\ & \mathcal{K}(\{F,G\},G) \end{matrix}$$ for a unique 1-cell $h:A\to\{F,G\}$ in $\mathcal{K}$, where the vertical arrow is just pre-composition with $h$.

  • The 2-dimensional aspect of the universal property states that every modification $\theta:\rho\Rrightarrow\rho'$ factorizes as $\mathcal{K}(\alpha,1)\cdot\xi$ for a unique 2-cell $\alpha:h\Rightarrow h'$ in $\mathcal{K}$.

The fact that the 2-dimensional aspect (which asserts an isomorphism of categories) does not in general follow from the 1-dimensional aspect (which asserts a bijection between the hom-sets of the underlying categories) is a recurring issue in the paper. In fact, things would be different if the underlying category functor $\mathcal{V}(I,-)=(\;)_0:\mathcal{V}\text{-}\mathbf{Cat}\to\mathbf{Cat}$ were conservative, in which case the 1-dimensional universal property would always imply the 2-dimensional one. Certainly though, this is not the case for $\mathcal{V}=\mathbf{Cat}$: the respective functor discards all the 2-cells and is not even faithful. However, if we know that a weighted limit exists, then the first level of the universal property suffices to detect it up to isomorphism.

Completeness of 2-categories

A 2-category $\mathcal{K}$ is complete when all limits $\{F,G\}$ exist. The defining 2-natural isomorphism extends the mapping $(F,G)\mapsto\{F,G\}$ to a functor of two variables (the weighted limit functor) $$\{-,-\}:[\mathcal{P},\mathbf{Cat}]^{op}\times[\mathcal{P},\mathcal{K}]\longrightarrow\mathcal{K}$$ as the left parametrized adjoint (actually its opposite) of the functor $$\mathcal{K}(-,?):\mathcal{K}^{op}\times[\mathcal{P},\mathcal{K}]\to[\mathcal{P},\mathbf{Cat}]$$ mapping an object $A$ and a functor $G$ to $\mathcal{K}(A,G-)$. A colimit in $\mathcal{K}$ is a limit in $\mathcal{K}^{op}$, and the weighted colimit functor is $$-\ast-:[\mathcal{P}^{op},\mathbf{Cat}]\times[\mathcal{P},\mathcal{K}]\longrightarrow\mathcal{K}.$$ Apart from the evident duality, we observe that colimits are often harder to compute than limits. This may partially be due to the fact that $\{F,G\}$ is determined by the representable $\mathcal{K}(-,\{F,G\})$, which gives generalized elements of $\{F,G\}$, whereas the description of $\mathcal{K}(F\ast G,-)$ gives us arrows out of $F\ast G$. For example, limits in $\mathbf{Cat}$ are easy to compute via $$[\mathcal{A},\{F,G\}]\cong[\mathcal{P},\mathbf{Cat}](F,[\mathcal{A},G-])\cong[\mathcal{A},[\mathcal{P},\mathbf{Cat}](F,G)]$$ and in particular, taking $\mathcal{A}=\mathbf{1}$ gives us the objects of the category $\{F,G\}$ and $\mathcal{A}=\mathbf{2}$ gives us the morphisms. On the contrary, colimits in $\mathbf{Cat}$ are not straightforward (other than the property $F\ast G\cong G\ast F$).
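
As a small worked consequence (spelled out here rather than quoted from the paper), the last chain of isomorphisms holds 2-naturally in $\mathcal{A}$, so the Yoneda lemma identifies the weighted limit of a $\mathbf{Cat}$-valued diagram with a hom-category of the functor 2-category: $$\{F,G\}\cong[\mathcal{P},\mathbf{Cat}](F,G)\qquad\text{for }F,G:\mathcal{P}\to\mathbf{Cat};$$ its objects are the 2-natural transformations $F\Rightarrow G$ and its morphisms are the modifications between them.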

Notice that, just as ordinary limits are defined via representability in terms of limits in $\mathbf{Set}$, weighted 2-limits can be defined in terms of limits of representables in $\mathbf{Cat}$: $$\mathcal{K}(A,\{F,G\})\cong\{F,\mathcal{K}(A,G-)\},\qquad\mathcal{K}(F\ast G,A)\cong\{F,\mathcal{K}(G-,A)\}.$$ On the other hand, if the weights are representables, the Yoneda lemma gives $$\{\mathcal{P}(P,-),G\}\cong GP,\qquad\mathcal{P}(-,P)\ast G\cong GP.$$

The main result for general $\mathcal{V}$-completeness in Kelly’s book says that a $\mathcal{V}$-enriched category is complete if and only if it admits all conical limits (equivalently, products and equalizers) and cotensor products. Explicitly, conical limits are those whose weight is the constant $\mathcal{V}$-functor $\Delta I$, whereas cotensors are those where the domain enriched category $\mathcal{P}$ is the unit category $\mathbf{1}$, hence the weight and the diagram are determined by objects in $\mathcal{V}$ and $\mathcal{K}$ respectively. Once again, for $\mathcal{V}=\mathbf{Cat}$ an elementary description of both kinds of limits is possible.
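
To make that elementary description explicit (this is just the defining isomorphism specialized, not an additional claim of the paper): for a category $\mathcal{X}\in\mathbf{Cat}$ and an object $B\in\mathcal{K}$, the cotensor $\{\mathcal{X},B\}$ is characterized 2-naturally in $A$ by $$\mathcal{K}(A,\{\mathcal{X},B\})\cong[\mathcal{X},\mathcal{K}(A,B)],$$ while the conical limit of $G:\mathcal{P}\to\mathcal{K}$ is characterized by $$\mathcal{K}(A,\mathrm{lim}G)\cong[\mathcal{P},\mathbf{Cat}](\Delta\mathbf{1},\mathcal{K}(A,G-)),$$ i.e. by 2-cones over $G$ with vertex $A$ and the modifications between them, exactly as in the ordinary $\mathbf{Set}$-based case.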

Notice that when a 2-category admits tensor products of the form $\mathbf{2}\ast A$, the 2-dimensional universal property follows from the 1-dimensional one for every limit, because of the conservativity of the functor $\mathbf{Cat}_0(\mathbf{2},-)$ and the definition of tensors. Moreover, the former also implies that the category $\mathbf{2}$ is a strong generator in $\mathbf{Cat}$, hence the existence of only the cotensor $\{\mathbf{2},B\}$ along with conical limits in a 2-category $\mathcal{K}$ is enough to deduce 2-completeness.

$\mathbf{Cat}$ itself has cotensor and tensor products, given by $\{\mathcal{A},\mathcal{B}\}=[\mathcal{A},\mathcal{B}]$ and $\mathcal{A}\ast\mathcal{B}=\mathcal{A}\times\mathcal{B}$. It is also cocomplete, all colimits being constructed from tensors and ordinary colimits in $\mathbf{Cat}_0$ (which give the conical colimits in $\mathbf{Cat}$ by the existence of the cotensor $[\mathbf{2},B]$).

If we were to make use of ends and coends, the explicit construction of an arbitrary 2-(co)limit in $\mathcal{K}$ as the (co)equalizer of a pair of arrows between (co)products of (co)tensors coincides with $$\{F,G\}=\int_K \{FK,GK\},\qquad F\ast G=\int^K FK\ast GK.$$ Such an approach simplifies the proofs of many useful properties of limits and colimits, such as $$\{F,\{G,H\}\}\cong\{F\ast G,H\},\qquad (F\ast G)\ast H\cong F\ast(G\ast H)$$ for appropriate 2-functors.

Famous finite 2-limits

The paper provides descriptions of some important classes of limits in 2-categories, essentially by exhibiting the unit of the defining representation for each particular case. The main examples included are summarized in the following table:

[Table: weights and diagram shapes for the main finite 2-limits discussed in the paper (inserters, equifiers, comma objects, etc.); not reproduced here.]

Let’s briefly go through the explicit construction of an inserter in a 2-category $\mathcal{K}$. The weight and diagram shape are as in the first line of the above table; denote by $B\underset{g}{\overset{f}{\rightrightarrows}}C$ the image of the diagram in $\mathcal{K}$. The standard technique is to identify the form of the objects and morphisms of the category $[\mathcal{P},\mathbf{Cat}](F,\mathcal{K}(A,G-))$, and then state both aspects of the universal property.

An object is a 2-natural transformation $\alpha:F\Rightarrow\mathcal{K}(A,G-)$ with components $\alpha_\bullet:\mathbf{1}\to\mathcal{K}(A,B)$ and $\alpha_\star:\mathbf{2}\to\mathcal{K}(A,C)$ satisfying the usual naturality condition (2-naturality follows trivially, since $\mathcal{P}$ only has identity 2-cells). This amounts to the following data:

  • a 1-cell $A\xrightarrow{\alpha_\bullet}B$, i.e. the object of $\mathcal{K}(A,B)$ determined by the functor $\alpha_\bullet$;

  • a 2-cell $\alpha_\star 0\overset{\alpha_\star}{\Rightarrow}\alpha_\star 1$, i.e. the morphism of $\mathcal{K}(A,C)$ determined by the functor $\alpha_\star$;

  • properties, which make the 1-cells $\alpha_\star 0,\alpha_\star 1$ factorize as $\alpha_\star 0=A\xrightarrow{\alpha_\bullet}B\xrightarrow{f}C$ and $\alpha_\star 1=A\xrightarrow{\alpha_\bullet}B\xrightarrow{g}C$.

We can encode the above data in a diagram $$\begin{matrix} & B & \\ {}^{\alpha_\bullet}\nearrow & & \searrow^{f} \\ A\quad & \Downarrow\alpha_\star & \quad C. \\ {}_{\alpha_\bullet}\searrow & & \nearrow_{g} \\ & B & \end{matrix}$$ Now a morphism is a modification $m:\alpha\Rrightarrow\beta$ between two objects as above. This has components

  • $m_\bullet:\alpha_\bullet\Rightarrow\beta_\bullet$ in $\mathcal{K}(A,B)$;

  • $m_\star:\alpha_\star\Rightarrow\beta_\star$, given by 2-cells $m_\star^0:\alpha_\star 0\Rightarrow\beta_\star 0$ and $m_\star^1:\alpha_\star 1\Rightarrow\beta_\star 1$ in $\mathcal{K}(A,C)$ satisfying the naturality condition $m^1_\star\circ\alpha_\star=\beta_\star\circ m^0_\star$.

The modification condition gives $m^0_\star=f\cdot m_\bullet$ and $m^1_\star=g\cdot m_\bullet$, i.e. the components of $m_\star$ are whiskered composites of $m_\bullet$. We can thus express such a modification as a single 2-cell $m_\bullet$ satisfying $g m_\bullet\circ\alpha_\star=\beta_\star\circ f m_\bullet$ (graphically expressed by pasting $m_\bullet$ accordingly onto the sides of $\alpha_\star,\beta_\star$).

This encoding simplifies the statement of the universal property for $\{F,G\}$, as the object of $\mathcal{K}$ through which any such natural transformation and modification factorize uniquely in an appropriate way (in fact, through the unit $\xi$). A very similar process can be followed for the identification of the other classes of limits. As an illustration, let’s consider some of these limits in the 2-category $\mathbf{Cat}$.

  • The inserter of two functors $F,G:\mathcal{B}\to\mathcal{C}$ is a category $\mathcal{A}$ with objects pairs $(B,h)$ where $B\in\mathcal{B}$ and $h:FB\to GB$ in $\mathcal{C}$. A morphism $(B,h)\to(B',h')$ is an arrow $f:B\to B'$ in $\mathcal{B}$ such that the following diagram commutes: $$\begin{matrix} FB & \overset{Ff}{\longrightarrow} & FB' \\ {}_{h}\downarrow & & \downarrow_{h'} \\ GB & \underset{Gf}{\longrightarrow} & GB'. \end{matrix}$$ The functor $\alpha_\bullet=P:\mathcal{A}\to\mathcal{B}$ is just the forgetful functor, and the natural transformation is given by $(\alpha_\star)_{(B,h)}=h$. (A naive computational sketch of this construction for finite categories appears after this list.)

  • The comma-object of two functors $F,G$ is precisely the comma category. If the functors also have the same domain, their inserter is a subcategory of the comma category.

  • The equifier of two natural transformations $\phi^1,\phi^2:F\Rightarrow G:\mathcal{B}\to\mathcal{C}$ is the full subcategory $\mathcal{A}$ of $\mathcal{B}$ on all objects $B$ such that $\phi^1_B=\phi^2_B$ in $\mathcal{C}$.
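
As promised above, here is a minimal computational sketch of the inserter construction in $\mathbf{Cat}$, for finite categories encoded naively as dictionaries of hom-sets. The encoding and all names (`hom_C`, `compose_C`, and so on) are hypothetical illustration, not an existing library API.

```python
# A naive sketch of the inserter of two functors F, G : B -> C between finite
# categories, following the explicit description above.  Categories are given
# by object lists and dicts of hom-sets; functors by object/morphism maps.

def inserter_objects(objects_B, hom_C, F_obj, G_obj):
    """Objects of the inserter: pairs (B, h) with h an arrow F(B) -> G(B) in C."""
    return [(B, h)
            for B in objects_B
            for h in hom_C.get((F_obj[B], G_obj[B]), [])]

def inserter_morphisms(insr_objs, hom_B, F_mor, G_mor, compose_C):
    """Morphisms (B, h) -> (B2, h2): arrows f : B -> B2 in B making the square
    commute, i.e. h2 . F(f) == G(f) . h in C, where compose_C(g, f) means
    'g after f'."""
    arrows = []
    for (B, h) in insr_objs:
        for (B2, h2) in insr_objs:
            for f in hom_B.get((B, B2), []):
                if compose_C(h2, F_mor[f]) == compose_C(G_mor[f], h):
                    arrows.append(((B, h), f, (B2, h2)))
    return arrows
```

Projecting each pair $(B,h)$ to $B$ recovers the forgetful functor $P:\mathcal{A}\to\mathcal{B}$, and sending $(B,h)$ to $h$ gives the universal 2-cell, exactly as described above.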

There is a variety of constructions of new classes of limits from given ones, coming down to the construction of endo-identifiers, inverters, iso-inserters, comma-objects, iso-comma-objects, lax/oplax/pseudo limits of arrows and the cotensors $\{\mathbf{2},K\}$, $\{\mathbf{I},K\}$ out of inserters, equifiers and binary products in the 2-category $\mathcal{K}$. Along with the substantial construction of arbitrary cotensors out of these three classes, P(roducts)I(nserters)E(quifiers) limits are established as essential tools, also in relation to categories of algebras for 2-monads. Notice that equalizers are `too tight’ to fit in certain 2-categories of importance such as $\mathbf{Lex}$.

Weaker notions of limits in 2-categories

The concept of a weighted 2-limit strongly depends on the specific structure of the 2-category $[\mathcal{P},\mathbf{Cat}]$ of 2-functors, 2-natural transformations and modifications, for 2-categories $\mathcal{P}$ and $\mathbf{Cat}$. If we alter this structure by considering lax natural transformations or pseudonatural transformations, we obtain the definitions of the lax limit $\{F,G\}_l$ and the pseudo limit $\{F,G\}_p$ as the representing objects for the 2-functors $$Lax[\mathcal{P},\mathbf{Cat}](F,\mathcal{K}(-,G)):\mathcal{K}^{op}\to\mathbf{Cat},\qquad Psd[\mathcal{P},\mathbf{Cat}](F,\mathcal{K}(-,G)):\mathcal{K}^{op}\to\mathbf{Cat}.$$ Notice that the functor categories $Lax[\mathcal{P},\mathcal{L}]$ and $Psd[\mathcal{P},\mathcal{L}]$ are 2-categories whenever $\mathcal{L}$ is a 2-category, hence the defining isomorphisms are again between categories, as before.

An important remark is that any lax or pseudo limit in $\mathcal{K}$ can in fact be expressed as a `strict’ weighted 2-limit. This is done by replacing the original weight with its image under the left adjoint of the inclusion functors $[\mathcal{P},\mathbf{Cat}]\hookrightarrow Lax[\mathcal{P},\mathbf{Cat}]$ and $[\mathcal{P},\mathbf{Cat}]\hookrightarrow Psd[\mathcal{P},\mathbf{Cat}]$. The converse does not hold: for example, inserters and equifiers are neither lax nor pseudo limits.

We can relax the notion of limit in a 2-category even further, and define the bilimit $\{F,G\}_b$ of 2-functors $F$ and $G$ as the representing object up to equivalence: $$\mathcal{K}(A,\{F,G\}_b)\simeq Psd[\mathcal{P},\mathbf{Cat}](F,\mathcal{K}(A,G-)).$$ This is of course a particular case of general bilimits in bicategories, for which $\mathcal{P}$ and $\mathcal{K}$ are required to be bicategories and $F$ and $G$ homomorphisms of bicategories. The above equivalence of categories expresses a birepresentation of the homomorphism $Hom[\mathcal{P},\mathbf{Cat}](F,\mathcal{K}(-,G)):\mathcal{K}^{op}\to\mathbf{Cat}$.

Evidently, bilimits (first introduced by Ross Street) may exist even when pseudo limits do not, since they only require an equivalence rather than an isomorphism of hom-categories. The following two results sum up the conditions ensuring that a 2-category has all lax, pseudo and bilimits.

  • A 2-category with products, inserters and equifiers has all lax and pseudo limits (whereas it may not have all strict 2-limits).

  • A 2-category with biproducts, biequalizers and bicotensors is bicategorically complete. Equivalently, it admits all bilimits if and only if, for all 2-functors $F:\mathcal{P}\to\mathbf{Cat}$ and $G:\mathcal{P}\to\mathcal{K}$ from a small ordinary category $\mathcal{P}$, the above-mentioned birepresentation exists.

Street’s construction of an arbitrary bilimit requires a descent object of a 3-truncated bicosimplicial object in $\mathcal{K}$. An appropriate modification of the arguments exhibits lax and pseudo limits as PIE limits.

These weaker forms of limits in 2-categories are fundamental for the theory of 2-categories and bicategories. Many important constructions, such as the Eilenberg-Moore object and the Grothendieck construction on a fibration, arise as lax/oplax limits. They are also crucial in 2-monad theory, for example when studying categories of (strict) algebras with non-strict (pseudo or even lax/oplax) morphisms, which are more common in nature.

by riehl (eriehl@math.harvard.edu) at April 18, 2014 07:56 PM

The Great Beyond - Nature blog

Moon dust probe crashes

The LADEE mission has ended in a controlled crash.

NASA

A NASA spacecraft that studied lunar dust vaporized into its own cloud of dust when it hit the Moon, as planned, in a mission-ending impact on 17 April. Launched last September, the Lunar Atmosphere and Dust Environment Explorer (LADEE) finished its primary mission in March. In early April, on an extended mission, it made close passes as low as 2 kilometres above the surface, gathering data on more than 100 low-elevation orbits. Mission controllers deliberately crashed it to avoid the chance that, left alone, it might crash and contaminate historic locations such as the Apollo landing sites.

During its lifetime, LADEE made the best measurements yet of the dust generated when tiny meteorites bombard the surface. It is still hunting the mystery of a horizon glow seen by Apollo astronauts. It also carried a test for future laser communications between spacecraft and Earth.

In its final days the probe unexpectedly survived the cold and dark of a total lunar eclipse on 15 April. Just before the eclipse, NASA had the spacecraft perform a final engine burn that determined the crash trajectory. LADEE normally coped with just one hour of darkness every time it looped behind the Moon. The eclipse put it into darkness for some four hours, potentially jeopardizing the ability of its battery-powered heaters to keep the spacecraft from freezing to death. But the spacecraft survived.

NASA has been running a contest to predict the exact date and time of the LADEE impact, and this morning predicted there may be multiple winners. When it hit, the probe was travelling about three times as fast as a rifle bullet. In the coming months the Lunar Reconnaissance Orbiter will take pictures of the crash site, which engineers are still determining.

by Alexandra Witze at April 18, 2014 02:35 PM

arXiv blog

Jupiter's Radio Emissions Could Reveal the Oceans on Its Icy Moons, Say Planetary Geologists

We should be able to use Jupiter’s radio emissions like ground-penetrating radar to study the oceans on Europa, Ganymede, and Callisto, say space scientists.

April 18, 2014 02:00 PM

Symmetrybreaking - Fermilab/SLAC

Is the universe balanced on a pinhead?

New precise measurements of the mass of the top quark bring back the question: Is our universe inherently unstable?

Scientists have known the mass of the heaviest fundamental particle, the top quark, since 1995.

But recent, more precise measurements of this mass have revived an old question: Why is it so huge?

No one is sure, but it might be a sign that our universe is inherently unstable. Or it might be a sign that some factor we don’t yet understand is keeping us in balance.

The top quark’s mass comes from its interaction with the Higgs field—which is responsible for the delicate balance of mass that allows matter to exist in its solid, stable form.

by Sarah Charley at April 18, 2014 01:00 PM

Christian P. Robert - xi'an's og

AI and Statistics 2014

Today, I am leaving Paris for an 8-day stay in Iceland! This is quite exciting, for many reasons: first, I missed AISTATS 2013 last year as I was still in the hospital; second, I am giving a short tutorial on ABC methods, which will be more like a long (two-hour) talk; third, it gives me the fantastic opportunity to visit Iceland for a few days, a place that was at the top of my wish list of countries to visit. The weather forecast is rather bleak but I am carrying enough waterproof layers to withstand a wee bit of snow and rain… The conference proper starts next Tuesday, April 22, with the tutorials taking place next Friday, April 25. This leaves me three completely free days for exploring the area near Reykjavik.


Filed under: Kids, Mountains, Statistics, Travel, University life Tagged: ABC, AISTATS 2014, Iceland, Reykjavik, tutorial, vacations

by xi'an at April 18, 2014 12:12 PM

astrobites - astro-ph reader's digest

Arecibo Detects a Fast Radio Burst

  • Title: Fast Radio Burst Discovered in the Arecibo Pulsar ALFA Survey
  • Authors: L. G. Spitler, J. M. Cordes, J. W. T. Hessels, D. R. Lorimer, M. A. McLaughlin, S. Chatterjee, F. Crawford, J. S. Deneva, V. M. Kaspi, R. S. Wharton, B. Allen, S. Bogdanov, A. Brazier, F. Camilo, P. C. C. Freire, F. A. Jenet, C. Karako–Argaman, B. Knispel, P. Lazarus, K. J. Lee, J. van Leeuwen, R. Lynch, A. G. Lyne, S. M. Ransom, P. Scholz, X. Siemens, I. H. Stairs, K. Stovall, J. K. Swiggum, A. Venkataraman, W. W. Zhu, C. Aulbert, H. Fehrmann
  • First Author’s Institution: Max Planck Institute for Radio Astronomy
  • Status: Accepted for Publication in Astronomy & Astrophysics

Fast radio bursts (FRBs) are no strangers to regular Astrobites readers: these mysterious radio signals are bright bursts of radiation which last a fraction of a second before disappearing, never to repeat again.  Not much is known about them, except that most (but not all) appear to originate from very far away, outside the galaxy.  Various theories have been proposed as to what may cause these signals.  Many astronomers pointed out that the signals may not be real at all: since the first was published in 2007 there have only been six FRBs recorded in the literature, all of which were detected at Parkes Observatory in Australia, while surveys at other radio telescopes came up empty.  Based on this, debate raged as to whether the FRB signals could have a more mundane origin than pulses from beyond the galaxy, such as instrumentation noise or some unique phenomenon at the Parkes site like unusual weather patterns.

Figure 1: The signal from FRB 121102. The top plot shows the signal in time and frequency (showing its dispersion measure), while the lower plots show the signal-to-noise ratio with respect to time (left) and frequency (right). The FRB’s properties are the same as those of the bursts detected previously at Parkes.

Today’s paper is a vindication for the radio astronomers who insisted FRBs are astronomical in nature: it reports the first-ever FRB detection by a telescope other than Parkes!  Specifically, this FRB was detected at Arecibo Observatory by the Pulsar ALFA Survey, which surveyed the galactic plane at 1.4 GHz in order to detect single pulses from pulsars.  Known as FRB 121102, the pulse was observed in November 2012 in one of the 13 beams of the receiver, lasted about 3 ms, and came from the direction of the galactic plane.  Despite this, astronomers think it’s likely the pulse originated from outside the galaxy based on its dispersion measure, or how much the signal is “smeared” in frequency due to traveling through space (as explained well in this Astrobite, signals arrive later at lower frequencies when traveling long distances, giving an estimate of how far away the signal originated).  This FRB’s dispersion measure was three times greater than the maximum galactic dispersion measure expected along the line of sight from which the FRB was observed, based on the distribution of matter in that part of the galaxy.  While the authors suggest the pulse might be from a rotating radio transient (a special kind of pulsar), no other pulses were detected in follow-up observations, so the signal is probably not just an unusually bright pulse from a pulsar.

Instead, the authors point to how FRB 121102’s high dispersion measure, combined with its similar properties to previously observed FRBs, suggests an extragalactic origin for the pulse.  Using the observed dispersion measure of the pulse and estimates of how it scales over intergalactic distances, the team concludes the FRB originated at a redshift of z = 0.26, or about 1 Gpc away.  Just what could be creating bright bursts so far away is a definite mystery!
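
For a rough sense of what a dispersion measure (DM) does to a burst, here is a back-of-envelope sketch using the standard cold-plasma dispersion delay formula; the DM value and band edges below are illustrative assumptions, not numbers taken from the paper.

```python
# Dispersion delay between two observing frequencies for a given DM.
# Standard approximation: dt [ms] ~ 4.15 * DM [pc cm^-3] * (f_lo^-2 - f_hi^-2), f in GHz.
def dispersion_delay_ms(dm_pc_cm3, f_lo_ghz, f_hi_ghz):
    """Extra arrival delay (ms) of the low-frequency edge relative to the high one."""
    return 4.15 * dm_pc_cm3 * (f_lo_ghz ** -2 - f_hi_ghz ** -2)

# An illustrative burst with DM ~ 500 pc cm^-3, observed across a ~1.2-1.5 GHz band,
# sweeps across the band in roughly half a second:
print(dispersion_delay_ms(500, 1.2, 1.5))  # ~ 520 ms
```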

Finally, based on their detection of this FRB, the authors estimate an event rate similar to the previous one set by Parkes Observatory, whereby there should be thousands of FRBs in the sky every day. (They are just very hard to detect because they are so brief.) And the fact that this is the first pulse detected by an independent observatory is very encouraging, as it certainly gives credence to the idea that Fast Radio Bursts are a real astronomical phenomenon.  With luck, we will soon hear about many more FRB detections from multiple observatories, which may unravel the secrets of where they come from.

by Yvette Cendes at April 18, 2014 09:54 AM

Tommaso Dorigo - Scientificblogging

Personal Information
Long-time readers of this blog (are there any left?) know me well, since I often used to write posts about personal matters here and on my previous sites. However, I am aware that readers come and go, and I also realize that lately I have not disclosed much of my personal life here; things like where I work, what my family is like, what I do in my spare time, and what my dreams and projects for the future are. So it is a good idea to write some personal details here.

read more

by Tommaso Dorigo at April 18, 2014 08:59 AM

John Baez - Azimuth

What Does the New IPCC Report Say About Climate Change? (Part 7)

guest post by Steve Easterbrook

(7) To stay below 2 °C of warming, the world must become carbon negative.

Only one of the four future scenarios (RCP2.6) shows us staying below the UN’s commitment to no more than 2 °C of warming. In RCP2.6, emissions peak soon (within the next decade or so), and then drop fast, under a stronger emissions reduction policy than anyone has ever proposed in international negotiations to date. For example, the post-Kyoto negotiations have looked at targets in the region of 80% reductions in emissions over, say, a 50-year period. In contrast, the chart below shows something far more ambitious: we need more than 100% emissions reductions. We need to become carbon negative:

(Figure 12.46) a) CO2 emissions for the RCP2.6 scenario (black) and three illustrative modified emission pathways leading to the same warming; b) global temperature change relative to preindustrial for the pathways shown in panel (a).

The graph on the left shows four possible CO2 emissions paths that would all deliver the RCP2.6 scenario, while the graph on the right shows the resulting temperature change for each. They all give similar results for temperature change, but differ in how we go about reducing emissions. For example, the black curve shows CO2 emissions peaking by 2020 at a level barely above today’s, and then dropping steadily until emissions fall below zero by about 2070. Two other curves show what happens if emissions peak higher and later: the eventual reduction has to happen much more steeply. The blue dashed curve offers an implausible scenario, so consider it a thought experiment: if we held emissions constant at today’s level, we would have exactly 30 years left before having to instantly reduce emissions to zero forever.
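
As a rough sanity check on that 30-year figure (my own back-of-envelope arithmetic with assumed round numbers, not values quoted from the report), dividing a remaining CO2 budget consistent with the RCP2.6 pathway by current annual emissions lands in the same ballpark:

```python
# Hypothetical round numbers for illustration only.
remaining_budget_gtco2 = 1000.0  # assumed RCP2.6-compatible CO2 budget from ~2013 onward
annual_emissions_gtco2 = 35.0    # assumed current global CO2 emissions per year

years_at_constant_emissions = remaining_budget_gtco2 / annual_emissions_gtco2
print(f"~{years_at_constant_emissions:.0f} years")  # ~29 years, consistent with the ~30-year figure
```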

Notice where the zero point is on the scale on that left-hand graph. Ignoring the unrealistic blue dashed curve, all of these pathways require the world to go net carbon negative sometime soon after mid-century. None of the emissions targets currently being discussed by any government anywhere in the world are sufficient to achieve this. We should be talking about how to become carbon negative.

One further detail. The graph above shows the temperature response staying well under 2°C for all four curves, although the uncertainty band reaches up to 2°C. But note that this analysis deals only with CO2. The other greenhouse gases have to be accounted for too, and together they push the temperature change right up to the 2°C threshold. There’s no margin for error.


You can download all of Climate Change 2013: The Physical Science Basis here. It’s also available chapter by chapter here:

  1. Front Matter
  2. Summary for Policymakers
  3. Technical Summary
    1. Supplementary Material

Chapters

  1. Introduction
  2. Observations: Atmosphere and Surface
    1. Supplementary Material
  3. Observations: Ocean
  4. Observations: Cryosphere
    1. Supplementary Material
  5. Information from Paleoclimate Archives
  6. Carbon and Other Biogeochemical Cycles
    1. Supplementary Material
  7. Clouds and Aerosols

    1. Supplementary Material
  8. Anthropogenic and Natural Radiative Forcing
    1. Supplementary Material
  9. Evaluation of Climate Models
  10. Detection and Attribution of Climate Change: from Global to Regional
    1. Supplementary Material
  11. Near-term Climate Change: Projections and Predictability
  12. Long-term Climate Change: Projections, Commitments and Irreversibility
  13. Sea Level Change
    1. Supplementary Material
  14. Climate Phenomena and their Relevance for Future Regional Climate Change
    1. Supplementary Material

Annexes

  1. Annex I: Atlas of Global and Regional Climate Projections
    1. Supplementary Material: RCP2.6, RCP4.5, RCP6.0, RCP8.5
  2. Annex II: Climate System Scenario Tables
  3. Annex III: Glossary
  4. Annex IV: Acronyms
  5. Annex V: Contributors to the WGI Fifth Assessment Report
  6. Annex VI: Expert Reviewers of the WGI Fifth Assessment Report

by John Baez at April 18, 2014 08:46 AM

April 17, 2014

Symmetrybreaking - Fermilab/SLAC

Not just old codgers

During a day of talks at Stanford University, theoretical physicist Leonard Susskind explained “Why I Teach Physics to Old Codgers, and How It Got to Be a YouTube Sensation.”

Stanford professor Leonard Susskind has a well-deserved reputation among his colleagues as one of the most imaginative theorists working in physics today. During his nearly five decades in the field, he’s taken leading roles in the study of quark confinement, technicolor, black hole complementarity, the holographic principle and string theory. Even now, at the age of 73, he’s still in the thick of it, batting around ideas with his colleagues about firewalls, the latest twist on black holes.

by Lori Ann White at April 17, 2014 10:51 PM

Christian P. Robert - xi'an's og

Dan Simpson’s seminar at CREST

Daniel Simpson gave a seminar at CREST yesterday on his recently arXived paper, “Penalising model component complexity: A principled, practical approach to constructing priors”, written with Thiago Martins, Andrea Riebler, Håvard Rue, and Sigrunn Sørbye. This is the paper he should also have presented in Banff last month, had he not lost his passport in København airport… I have already commented at length on this exciting paper, hopefully to become a discussion paper in a top journal! So I am just pointing out two things that came to my mind during the energetic talk delivered by Dan to our group. The first is that those penalised complexity (PC) priors of theirs rely on some choices in the ordering of the relevance, complexity, nuisance level, &tc. of the parameters, just like reference priors. While Dan already wrote a paper on Russian roulette, there is also a Russian doll principle at work behind (or within) PC priors. Each shell of the Russian doll corresponds to a further level of complexity whose order needs to be decided by the modeller… Not very realistic in a hierarchical model with several types of parameters having only local meaning.

My second point is that the construction of those “politically correct” (PC) priors reflects another Russian doll structure, namely one of embedded models, hence would and should lead to a natural multiple testing methodology. Except that Dan rejected this notion during his talk, by being opposed to testing per se. (A good topic for one of my summer projects, if nothing more, then!)


Filed under: Kids, Mountains, Statistics, Travel, University life Tagged: Banff, BiPS, CREST, hierarchical models, model complexity, Paris, penalisation, reference priors, Russian doll, Russian roulette

by xi'an at April 17, 2014 10:14 PM

Quantum Diaries

Searching for Dark Matter With the Large Underground Xenon Experiment

In December, a result from the Large Underground Xenon (LUX) experiment was featured in Nature’s Year In Review as one of the most important scientific results of 2013. As a student who has spent the past four years working on this experiment, I will do my best to provide an introduction to it and hopefully answer the question: why all the hype over what turned out to be a null result?

The LUX detector, deployed into its water tank shield 4850 feet underground.

Direct Dark Matter Detection

Weakly Interacting Massive Particles (WIMPs), or particles that interact only through the weak nuclear force and gravity, are a particularly compelling solution to the dark matter problem because they arise naturally in many extensions to the Standard Model. Quantum Diaries did a wonderful series last summer on dark matter, located here, so I won’t get into too many details about dark matter or the WIMP “miracle”, but I would however like to spend a bit of time talking about direct dark matter detection.

The Earth experiences a dark matter “wind”, or flux of dark matter passing through it, due to our motion through the dark matter halo of our galaxy. Using standard models for the density and velocity distribution of the dark matter halo, we can calculate that there are nearly 1 billion WIMPs per square meter per second passing through the Earth. In order to match observed relic abundances in the universe, we expect these WIMPs to have a small yet measurable interaction cross-section with ordinary nuclei.

In other words, there must be a small-but-finite probability of an incoming WIMP scattering off a target in a laboratory in such a way that we can detect it. The goal of direct detection experiments is therefore to look for these scattering events. These events are characterized by recoil energies of a few to tens of keV, which is quite small, but it is large enough to produce an observable signal.
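To put a rough number behind the flux quoted above, here is a minimal back-of-the-envelope sketch in Python; the halo density of 0.3 GeV/cm³, the mean speed of 230 km/s, and the 100 GeV WIMP mass are assumed benchmark values chosen for illustration, not LUX inputs:

import math

# Back-of-the-envelope WIMP flux through the Earth.
# Assumed benchmark values (standard halo model), not LUX-specific numbers.
rho = 0.3e9      # local dark matter density in eV per cm^3 (~0.3 GeV/cm^3)
m_wimp = 100e9   # assumed WIMP mass in eV (~100 GeV)
v = 230e5        # typical WIMP speed in cm/s (~230 km/s)

n = rho / m_wimp          # number density in WIMPs per cm^3
flux_cm2 = n * v          # flux in WIMPs per cm^2 per second
flux_m2 = flux_cm2 * 1e4  # convert to WIMPs per m^2 per second

print(f"number density ~ {n:.1e} WIMPs/cm^3")
print(f"flux ~ {flux_m2:.1e} WIMPs/m^2/s")   # ~7e8, i.e. "nearly 1 billion"

With these inputs the flux comes out to roughly 7 x 10⁸ WIMPs per square meter per second, in line with the "nearly 1 billion" figure quoted above; a lighter WIMP would give a correspondingly larger number flux.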

So here’s the challenge: How do you build an experiment that can measure an extremely small, extremely rare signal with very high precision amid large amounts of background?

Why Xenon?

The signal from a recoil event inside a direct detection target typically takes one of three forms: scintillation light, ionization of an atom inside the target, or heat energy (phonons). Most direct detection experiments focus on one (or two) of these channels.

Xenon is a natural choice for a direct detection medium because it is easy to read out signals from two of these channels. Energy deposited in the scintillation channel is easily detectable because xenon is transparent to its own characteristic 175-nm scintillation. Energy deposited in the ionization channel is likewise easily detectable, since ionization electrons under the influence of an applied electric field can drift through xenon for distances up to several meters. These electrons can then be read out by any one of a couple different charge readout schemes.

Furthermore, the ratio of the energy deposited in these two channels is a powerful tool for discriminating between nuclear recoils such as WIMPs and neutrons, which are our signal of interest, and electronic recoils such as gamma rays, which are a major source of background.

Xenon is also particularly good for low-background science because of its self-shielding properties. That is, because liquid xenon is so dense, gammas and neutrons tend to attenuate within just a few cm of entering the target. Any particle that does happen to be energetic enough to reach the center of the target has a high probability of undergoing multiple scatters, which are easy to pick out and reject in software. This makes xenon ideal not just for dark matter searches, but also for other rare event searches such as neutrinoless double-beta decay.

The LUX Detector

The LUX experiment is located nearly a mile underground at the Sanford Underground Research Facility (SURF) in Lead, South Dakota. LUX rests on the 4850-foot level of the old Homestake gold mine, which was turned into a dedicated science facility in 2006.

Besides being a mining town and a center of Old West culture (The neighboring town, Deadwood, is famed as the location where Wild Bill Hickok met his demise in a poker game), Lead has a long legacy of physics. The same cavern where LUX resides once held Ray Davis’s famous solar neutrino experiment, which provided some of the first evidence for neutrino flavor oscillations and later won him the Nobel Prize.

A schematic of the LUX detector.

The detector itself is what is called a two-phase time projection chamber (TPC). It essentially consists of a 370-kg xenon target in a large titanium can. This xenon is cooled down to its condensation point (~165 K), so that the bulk of the xenon target is liquid, and there is a thin layer of gaseous xenon on top. LUX has 122 photomultiplier tubes (PMTs) in two different arrays, one array on the bottom looking up into the main volume of the detector, and one array on the top looking down. Just inside those arrays are a set of parallel wire grids that supply an electric field throughout the detector. A gate grid, located between the cathode and anode grids and lying close to the liquid surface, allows the electric fields in the liquid and gas regions to be tuned separately.

When an incident particle interacts with a xenon atom inside the target, it excites or ionizes the atom. In a mechanism common to all noble elements, that atom will briefly bond with another nearby xenon atom. The subsequent decay of this “dimer” back into its two constituent atoms causes a photon to be emitted in the UV. In LUX, this flash of scintillation light, called primary scintillation light or S1, is immediately detected by the PMTs. Next, any ionization charge that is produced is drifted upwards by a strong electric field (~200 V/cm) before it can recombine. This charge cloud, once it reaches the liquid surface, is pulled into the gas phase and accelerated very rapidly by an even stronger electric field (several kV/cm), causing a secondary flash of scintillation called S2, which is also detected by the PMTs. A typical signal read out from an event in LUX therefore consists of a PMT trace with two tell-tale pulses. 

A typical event in LUX. The bottom plot shows the primary (S1) and secondary (S2) signals from each of the individual PMTs. The top two plots show the total size of the S1 and the S2 pulses.

As in any rare event search, controlling the backgrounds is of utmost importance. LUX employs a number of techniques to do so. By situating the detector nearly a mile underground, we reduce cosmic muon flux by a factor of 10⁷. Next, LUX is deployed into a 300-tonne water tank, which reduces gamma backgrounds by another factor of 10⁷ and neutrons by a factor of between 10³ and 10⁹, depending on their energy. Third, by carefully choosing a fiducial volume in the center of the detector, i.e., by cutting out events that happen near the edge of the target, we can reduce background by another factor of 10⁴. And finally, electronic recoils produce much more ionization than do the nuclear recoils that we are interested in, so by looking at the ratio S2/S1 we can achieve over 99% discrimination between gammas and potential WIMPs. All this taken into account, the estimated background for LUX is less than 1 WIMP-like event throughout 300 days of running, making it essentially a zero-background experiment. The center of LUX is in fact the quietest place in the world, radioactively speaking.
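Purely to illustrate how the quoted suppression factors compound for the gamma-ray background, here is a tiny sketch multiplying the round numbers above; the assignment of each factor to the gamma channel is a simplification, and the real background model is of course far more detailed:

# Rough cumulative suppression of the gamma-ray background,
# multiplying the order-of-magnitude factors quoted above (illustration only).
water_tank    = 1e-7   # attenuation in the 300-tonne water shield
fiducial_cut  = 1e-4   # events surviving the inner fiducial-volume cut
s2_s1_discrim = 1e-2   # electronic recoils passing the >99% S2/S1 discrimination

total = water_tank * fiducial_cut * s2_s1_discrim
print(f"combined gamma suppression ~ {total:.0e}")   # ~1e-13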

Results From the First Science Run

From April to August 2013, LUX ran continuously, collecting 85.3 livedays of WIMP search data with a 118-kg fiducial mass, resulting in over ten thousand kg-days of data. A total of 83 million events were collected. Of these, only 6.5 million were single scatter events. After applying fiducial cuts and cutting on the energy region of interest, only 160 events were left. All of these 160 events were consistent with electronic recoils. Not a single WIMP was seen – the WIMP remains as elusive as the unicorn that has become the unofficial LUX mascot.

So why is this exciting? The LUX limit is the lowest yet – it represents a factor of 2-3 increase in sensitivity over the previous best limit at high WIMP masses, and it is over 20 times more sensitive than the next best limit for low-mass WIMPs.

The 90% confidence upper limit on the spin independent WIMP-nucleon interaction cross section: LUX compared to previous experiments.

Over the past few years, experiments such as DAMA/LIBRA, CoGeNT, CRESST, and CDMS-II Si have each reported signals that are consistent with WIMPs of mass 5-10 GeV/c². This is in direct conflict with the null results from ZEPLIN, COUPP, and XENON100, to name a few, and was the source of a fair amount of controversy in the direct detection community.

The LUX result was able to fairly definitively close the door on this question.

If the low-mass WIMPs favored by DAMA/LIBRA, CoGeNT, CRESST, and CDMS-II Si do indeed exist, then statistically speaking LUX should have seen 1500 of them!

What’s Next?

Despite the conclusion of the 85-day science run, work on LUX carries on.

Just recently, there was a LUX talk presenting results from a calibration using low-energy neutrons as a proxy for WIMPs interacting within the detector, confirming the initial results from last autumn. Currently, LUX is gearing up for its next run, with the ultimate goal of collecting 300 livedays of WIMP-search data, which will extend the 2013 limit by a factor of five. And finally, a new detector called LZ is in the design stages, with a mass twenty times that of LUX and a sensitivity far greater.

***

For more details, the full LUX press release from October 2013 is located here:

http://www.youtube.com/watch?v=SMzAuhRFNQ0

by Nicole Larsen at April 17, 2014 07:57 PM

ZapperZ - Physics and Physicists

Dark Energy
In case you want an entertaining lesson or information on Dark Energy and why we think it is there, here's a nice video on it.



This video, in conjunction with the earlier video on Dark Matter, should give you some idea of what these "dark" entities are, based on what we currently know.

Zz.

by ZapperZ (noreply@blogger.com) at April 17, 2014 03:19 PM

astrobites - astro-ph reader's digest

Crowd-Sourcing Crater Identification

Title: The Variability of Crater Identification Among Expert and Community Crater Analysts
Authors: Stuart J. Robbins and others
First Author’s institution: University of Colorado at Boulder
Status: Published in Icarus

“Citizen scientist” projects have popped up all over the Internet in recent years. Here’s Wikipedia’s list, and here’s our astro-specific list. These projects usually tackle complex visual tasks like mapping neurons, or classifying galaxies (a project we’ve discussed before).

Fig. 1: The near side of the moon, a mosaic of images captured by the Lunar Reconnaissance Orbiter. Several mare and highlands are marked. The maria (Latin for “seas”, which is what early astronomers actually thought they were) were wiped as clean as a first-period chalkboard by lava flows some 3 billion years ago. (source: NASA/GFSC/ASU)

This is hard work. Not with all the professional scientists in the world could we achieve some of these tasks, not even with their grad students! But by asking for help from an army of untrained volunteers, scientists get much more data, and volunteers get to contribute to fundamental research and explore the beautiful patterns and eccentricities of nature.

The Moon Mappers project asks volunteers to identify craters on the Moon. One use for this work is to determine the relative ages of nearby surfaces. Newer surfaces, recently leveled by lava flows or tectonic activity, have had less time to accumulate craters. For example, the crater-saturated highlands on the Moon are older than the less-cratered maria. Another use for this work is to calibrate models used to determine the bombardment history of the Moon. For this task, scientists need a distribution of crater sizes on the real lunar surface.

So how good are the volunteer Moon Mappers at characterizing crater densities and size distributions? For that matter how good are the experts?

Today’s study attempts to answer these questions by having a group of experts analyze images of the Moon from the Lunar Reconnaissance Orbiter Camera. Eight experts participated in the study, analyzing two images. The first image captured a variety of terrain types (both mare and highlands). The second image had already been scoured by Moon Mappers volunteers.

Results

Fig. 2: One of the two images of the lunar surface used in this study. The top panel on the left shows the experts’ clusters, a different color for each expert. The bottom panel on the left shows volunteers’ clusters, all in red. The zoomed-in images to the right show a handful of craters of varying degrees of degradation. As expected, there is a larger spread visible for the volunteers’ clusters. (source: Robbins et al.)

The authors find a 10%-35% disagreement between experts on the number of craters of a given size. The lunar highlands yield the greatest dispersion: they are old and have many degraded features. The mare regions, where the craters are young and well-preserved, yield more consistent counts.

To examine how well analysts agree on the size and location of a given crater, the authors employ a clustering algorithm. To find a cluster the algorithm searches the datasets for craters within some distance threshold of others. The distance threshold is scaled by crater diameter so that, for example, if two analysts marked craters with diameters of ~10 px, and centers 15 px apart, these are considered unique. But if they both marked craters with diameters of ~100 px, and centers 15 px apart, these are considered the same. A final catalog is compiled by excluding the ‘craters’ that only a few analysts found. See Fig. 2 to the right for an example of crater clusters from the experts (top panel) and the volunteers (bottom panel).
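To make the diameter-scaled matching rule concrete, here is a minimal Python sketch; the 25% scale factor and the simple pairwise comparison are my own illustrative assumptions, not the actual clustering code of Robbins et al.:

import math

def same_crater(c1, c2, scale=0.25):
    """Decide whether two marked craters (x, y, diameter) refer to the same feature.

    Two marks count as the same crater when their centres are closer than
    `scale` times the mean marked diameter, so the threshold grows with
    crater size, as described above. The 0.25 scale factor is an assumed
    illustrative value, not the one used by Robbins et al.
    """
    x1, y1, d1 = c1
    x2, y2, d2 = c2
    dist = math.hypot(x1 - x2, y1 - y2)
    return dist < scale * 0.5 * (d1 + d2)

# Example from the text: centres 15 px apart.
print(same_crater((0, 0, 10), (15, 0, 10)))    # False: ~10 px craters, treated as unique
print(same_crater((0, 0, 100), (15, 0, 100)))  # True: ~100 px craters, treated as the same crater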

Fig. 3: The top panel shows the number of craters larger than a given diameter (horizontal axis), as determined by different analysts. A different color represents each analyst, and in some cases the same analyst using several different crater-counting techniques. The light gray line shows the catalog generated by clustering the volunteers’ datasets. It falls well within the variations between experts. The bottom panel shows relative deviations from a power-law distribution. (source: Robbins et al.)

The authors find that the experts are in better agreement than the volunteers for any given crater’s diameter and location. This isn’t surprising. The experts have seen many more craters, in many different lighting conditions. And the experts used their own software tools, allowing them to zoom in and change contrast in the image. The Moon Mappers web-based interface is much less powerful.

Finally, the authors find that the size distribution computed from the volunteers’ clustered dataset falls well within the range of the expert analysts’ size distributions. Fig. 3 demonstrates this.

In conclusion, the analysis of crater size distributions on a given surface can be done as accurately by a handful of volunteers as by a handful of experts. Furthermore, ages based on counting craters are almost always reported with underestimated errors: they don’t take into account the inherent variation amongst analysts. Properly accounting for errors of this type gives uncertainties of a few hundred million years for surface ages of a few billion years. However, this study shows that the uncertainty is smaller when a group of analysts contributes to the count.

Consider becoming a Moon Mapper, Vesta Mapper, or Mercury Mapper yourself!

by Brett Deaton at April 17, 2014 05:05 AM

April 16, 2014

Symmetrybreaking - Fermilab/SLAC

Letter to the editor: Oldest light?

Reader Bill Principe raises an interesting question about the headline of a recent symmetry article.

Dear symmetry,

I am not a physicist, so forgive me if I get my physics wrong.

The most recent issue has an article called “The oldest light in the universe.”

April 16, 2014 11:34 PM

Christian P. Robert - xi'an's og

MCMC for sampling from mixture models

Randal Douc, Florian Maire, and Jimmy Olsson recently arXived a paper on the use of Markov chain Monte Carlo methods for the sampling of mixture models, which contains the recourse to Carlin and Chib (1995) pseudo-priors to simulate from a mixture distribution (and not from the posterior distribution associated with a mixture sampling model). As reported earlier, I was in the thesis defence of Florian Maire and this approach had already puzzled me at the time. In short, a mixture structure

$$\pi(z)\propto\sum_{m=1}^k \tilde\pi(m,z)$$

gives rise to as many auxiliary variables as there are components, minus one: namely, if a simulation z is generated from a given component i of the mixture, one can create pseudo-simulations u from all the other components, using pseudo-priors à la Carlin and Chib. A Gibbs sampler based on this augmented state-space can then be implemented: (a) simulate a new component index m given (z,u); (b) simulate a new value of (z,u) given m. One version (MCC) of the algorithm simulates z given m from the proper conditional posterior by a Metropolis step, while another one (FCC) only simulates the u’s. The paper shows that MCC has a smaller asymptotic variance than FCC. I however fail to understand why a Carlin and Chib construction is necessary in a mixture context: it seems (from the introduction) that the motivation is that a regular Gibbs sampler [simulating z by a Metropolis-Hastings proposal then m] has difficulties moving between components when those components are well-separated. This is correct but slightly moot, as each component of the mixture can be simulated separately and in advance in z, which leads to a natural construction of (a) the pseudo-priors used in the paper, (b) approximations to the weights of the mixture, and (c) a global mixture independent proposal, which can be used in an independent Metropolis-Hastings mixture proposal that [seems to me to] alleviate(s) the need to simulate the component index m. Both examples used in the paper, a toy two-component two-dimensional Gaussian mixture and another toy two-component one-dimensional Gaussian mixture observed with noise (and in absolute value), do not help in perceiving the definitive need for this Carlin and Chib version. Especially when considering the construction of the pseudo-priors.
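For concreteness, here is a minimal sketch of a Carlin and Chib style augmentation for a toy two-component Gaussian mixture target; the Gaussian pseudo-priors centred at the component means, the random-walk Metropolis step, and all tuning values are my own illustrative choices, not the MCC or FCC algorithms exactly as specified in the paper:

import numpy as np

rng = np.random.default_rng(0)

# Toy target: pi(z) proportional to sum_m w_m N(z; mu_m, 1), with well-separated components.
w = np.array([0.3, 0.7])
mu = np.array([-5.0, 5.0])

def tilde_pi(m, z):
    # Unnormalised joint pi~(m, z) = w_m exp(-(z - mu_m)^2 / 2).
    return w[m] * np.exp(-0.5 * (z - mu[m]) ** 2)

def pseudo_logpdf(m, u):
    # Log pseudo-prior for component m (assumed here to be N(mu_m, 1), up to a constant).
    return -0.5 * (u - mu[m]) ** 2

n_iter = 5000
m = 0
vals = mu.copy()            # vals[j] is the value currently attached to component j
samples = np.empty(n_iter)

for t in range(n_iter):
    # (b) update the value of the active component by random-walk Metropolis targeting pi~(m, .),
    #     and refresh the inactive component from its pseudo-prior.
    prop = vals[m] + rng.normal(scale=1.0)
    if rng.random() < tilde_pi(m, prop) / tilde_pi(m, vals[m]):
        vals[m] = prop
    other = 1 - m
    vals[other] = mu[other] + rng.normal()

    # (a) update the component index: p(m = j | vals) proportional to pi~(j, vals[j]) / pseudo_j(vals[j]).
    log_p = np.array([np.log(tilde_pi(j, vals[j])) - pseudo_logpdf(j, vals[j]) for j in (0, 1)])
    p = np.exp(log_p - log_p.max())
    p /= p.sum()
    m = rng.choice(2, p=p)

    samples[t] = vals[m]    # z is read off as the value attached to the active component

print("fraction of samples in the +5 mode:", np.mean(samples > 0))   # close to 0.7

Because the pseudo-priors here coincide with the component shapes, the index update reduces (on average) to sampling m with probabilities equal to the mixture weights, which is what lets the chain hop freely between the two well-separated modes; this is essentially the behaviour the pseudo-prior construction is meant to buy.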


Filed under: Kids, Statistics, University life Tagged: Carlin, Gaussian mixture, mixtures

by xi'an at April 16, 2014 10:14 PM

Sean Carroll - Preposterous Universe

Twenty-First Century Science Writers

I was very flattered to find myself on someone’s list of Top Ten 21st Century Science Non-Fiction Writers. (Unless they meant my evil twin. Grrr.)

However, as flattered as I am — and as much as I want to celebrate rather than stomp on someone’s enthusiasm for reading about science — the list is on the wrong track. One way of seeing this is that there are no women on the list at all. That would be one thing if it were a list of Top Ten 19th Century Physicists or something — back in the day, the barriers of sexism were (even) higher than they are now, and women were systematically excluded from endeavors such as science with a ruthless efficiency. And such barriers are still around. But in science writing, here in the 21st century, the ladies are totally taking over, and creating an all-dudes list of this form is pretty blatantly wrong.

I would love to propose a counter-list, but there’s something inherently subjective and unsatisfying about ranking people. So instead, I hereby offer this:

List of Ten or More Twenty-First Century Science Communicators of Various Forms Who Are Really Good, All of Whom Happen to be Women, Pulled Randomly From My Twitter Feed and Presented in No Particular Order.

I’m sure it wouldn’t take someone else very long to come up with a list of female science communicators that was equally long and equally distinguished. Heck, I’m sure I could if I put a bit of thought into it. Heartfelt apologies for the many great people I left out.

by Sean Carroll at April 16, 2014 10:02 PM

Emily Lakdawalla - The Planetary Society Blog

The Birth of the Wanderers
How did planets originate? This is a question that has puzzled scientists for centuries, but one which they have been able to tackle directly only in the last few decades, thanks to two major developments: breakthroughs in telescope technology and ever-increasing computing power.

April 16, 2014 07:50 PM

John Baez - Azimuth

What Does the New IPCC Report Say About Climate Change? (Part 6)

guest post by Steve Easterbrook

(6) We have to choose which future we want very soon.

In the previous IPCC reports, projections of future climate change were based on a set of scenarios that mapped out different ways in which human society might develop over the rest of this century, taking account of likely changes in population, economic development and technological innovation. However, none of the old scenarios took into account the impact of strong global efforts at climate mitigation. In other words, they all represented futures in which we don’t take serious action on climate change. For this report, the new ‘RCPs’ have been chosen to allow us to explore the choice we face.

This chart sums it up nicely. If we do nothing about climate change, we’re choosing a path that will look most like RCP8.5. Recall that this is the one where emissions keep rising just as they have done throughout the 20th century. On the other hand, if we get serious about curbing emissions, we’ll end up in a future that’s probably somewhere between RCP2.6 and RCP4.5 (the two blue lines). All of these futures give us a much warmer planet. All of these futures will involve many challenges as we adapt to life on a warmer planet. But by curbing emissions soon, we can minimize this future warming.

(Fig 12.5) Time series of global annual mean surface air temperature anomalies (relative to 1986–2005) from CMIP5 concentration-driven experiments. Projections are shown for each RCP for the multi model mean (solid lines) and the 5–95% range (±1.64 standard deviation) across the distribution of individual models (shading). Discontinuities at 2100 are due to different numbers of models performing the extension runs beyond the 21st century and have no physical meaning. Only one ensemble member is used from each model and numbers in the figure indicate the number of different models contributing to the different time periods. No ranges are given for the RCP6.0 projections beyond 2100 as only two models are available.

Note also that the uncertainty range (the shaded region) is much bigger for RCP8.5 than it is for the other scenarios. The more the climate changes beyond what we’ve experienced in the recent past, the harder it is to predict what will happen. We tend to use the difference across different models as an indication of uncertainty (the coloured numbers show how many different models participated in each experiment). But there’s also the possibility of ‘unknown unknowns’—surprises that aren’t in the models, so the uncertainty range is likely to be even bigger than this graph shows.


You can download all of Climate Change 2013: The Physical Science Basis here. It’s also available chapter by chapter here:

  1. Front Matter
  2. Summary for Policymakers
  3. Technical Summary
    1. Supplementary Material

Chapters

  1. Introduction
  2. Observations: Atmosphere and Surface
    1. Supplementary Material
  3. Observations: Ocean
  4. Observations: Cryosphere
    1. Supplementary Material
  5. Information from Paleoclimate Archives
  6. Carbon and Other Biogeochemical Cycles
    1. Supplementary Material
  7. Clouds and Aerosols

    1. Supplementary Material
  8. Anthropogenic and Natural Radiative Forcing
    1. Supplementary Material
  9. Evaluation of Climate Models
  10. Detection and Attribution of Climate Change: from Global to Regional
    1. Supplementary Material
  11. Near-term Climate Change: Projections and Predictability
  12. Long-term Climate Change: Projections, Commitments and Irreversibility
  13. Sea Level Change
    1. Supplementary Material
  14. Climate Phenomena and their Relevance for Future Regional Climate Change
    1. Supplementary Material

Annexes

  1. Annex I: Atlas of Global and Regional Climate Projections
    1. Supplementary Material: RCP2.6, RCP4.5, RCP6.0, RCP8.5
  2. Annex II: Climate System Scenario Tables
  3. Annex III: Glossary
  4. Annex IV: Acronyms
  5. Annex V: Contributors to the WGI Fifth Assessment Report
  6. Annex VI: Expert Reviewers of the WGI Fifth Assessment Report

by John Baez at April 16, 2014 02:11 PM

The n-Category Cafe

Enrichment and the Legendre-Fenchel Transform I

The Legendre-Fenchel transform, or Fenchel transform, or convex conjugation, is, in its naivest form, a duality between convex functions on a vector space and convex functions on the dual space. It is of central importance in convex optimization theory and in physics it is used to switch between Hamiltonian and Lagrangian perspectives.

graphs

Suppose that $V$ is a real vector space and that $f\colon V\to [-\infty ,+\infty ]$ is a function; then the Fenchel transform is the function $f^{\ast }\colon V^{\#}\to [-\infty ,+\infty ]$ defined on the dual vector space $V^{\#}$ by
$$f^{\ast }(k)\coloneqq \sup _{x\in V}\big\{ \langle k,x\rangle -f(x)\big\}.$$

If you’re a regular reader then you will be unsurprised when I say that I want to show how it naturally arises from enriched category theory constructions. I’ll show that in the next post. In this post I’ll give a little introduction to the Legendre-Fenchel transform.

There is probably no best way to introduce the Legendre-Fenchel transform. The only treatment that I knew for many years was in Arnold’s book Mathematical Methods of Classical Mechanics, but I have recently come across the convex optimization literature and would highly recommend Touchette’s The Fenchel Transform in a Nutshell — my treatment here is heavily influenced by this paper. I will talk mainly about the one-dimensional case as I think that gives a lot of the intuition.

We will start, as Legendre did, with the special case of a strictly convex differentiable function $f\colon \mathbb{R}\to \mathbb{R}$; for instance, the function $x^{2}+1/2$ pictured on the left hand side above. The derivative of $f$ is strictly increasing and so the function $f$ can be parametrized by the derivative $k=df/dx$ instead of the parameter $x$. Indeed we can write the parameter $x$ in terms of the slope $k$, $x=x(k)$. The Legendre-Fenchel transform $f^{\ast }$ can then be defined to satisfy
$$\langle k,x \rangle = f(x) + f^{\ast }(k),$$
where the angle brackets mean the pairing between a vector space and its dual. In this one-dimensional case, where $x$ and $k$ are thought of as real numbers, we just have $\langle k,x \rangle = kx$.

As $x$ is a function of $k$ we can rewrite this as
$$f^{\ast }(k)\coloneqq \langle k,x(k) \rangle - f(x(k)).$$
So the Legendre-Fenchel transform encodes the function in a different way. By differentiating this equation you can see that $df^{\ast }/dk = x(k)$; thus we have interchanged the abscissa (the horizontal co-ordinate) and the slope. So if $f$ has derivative $k_{0}$ at $x_{0}$ then $f^{\ast }$ has derivative $x_{0}$ at $k_{0}$. This is illustrated in the above picture.
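As a concrete check, take the pictured example $f(x)=x^{2}+1/2$; working through the definition (my own computation, not from the original post) gives

$$k=\frac{df}{dx}=2x \quad\Longrightarrow\quad x(k)=\frac{k}{2}, \qquad f^{\ast }(k)=k\,x(k)-f(x(k))=\frac{k^{2}}{2}-\Big(\frac{k^{2}}{4}+\frac{1}{2}\Big)=\frac{k^{2}}{4}-\frac{1}{2}.$$

Differentiating, $df^{\ast }/dk = k/2 = x(k)$, exactly the interchange of abscissa and slope described above.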

I believe this is what Legendre did and then that what Fenchel did was to generalize this to non-differentiable functions.

For non-differentiable functions, we can’t talk about tangent lines and derivatives, but instead can talk about supporting lines. A supporting line is one which touches the graph at at least one point and never goes above the graph. (The fact that we’re singling out lines not going above the graph means that we have convex functions in mind.)

For instance, at the point $(x_{0},f(x_{0}))$ the graph pictured below has no tangent line, but has supporting lines with gradient from $k_{1}$ to $k_{2}$. A convex function will have at least one supporting line at each point.

graphs

It transpires that the right way to generalize the transform to this non-differentiable case is to define it as follows:
$$f^{\ast }(k)\coloneqq \sup _{x\in \mathbb{R}}\big\{ \langle k,x\rangle -f(x)\big\}.$$
In this case, if $f$ has a supporting line of slope $k_{0}$ at $x_{0}$ then $f^{\ast }$ has a supporting line of slope $x_{0}$ at $k_{0}$. In the picture above, at $x_{0}$, the function $f$ has supporting lines with slope from $k_{1}$ to $k_{2}$: correspondingly, the function $f^{\ast }$ has supporting lines with slope $x_{0}$ all the way from $k_{1}$ to $k_{2}$.

If we allow the function $f$ to be not strictly convex then the transform will not always be finite. For example, if $f(x)\coloneqq ax+b$ then we have $f^{\ast }(a)=-b$ and $f^{\ast }(k)=+\infty$ for $k\ne a$. So we will allow functions taking values in the extended real numbers: $\overline{\mathbb{R}}\coloneqq [-\infty ,+\infty ]$.

We can use the above definition to get the transform of any function $f\colon \mathbb{R}\to \overline{\mathbb{R}}$, whether convex or not, but the resulting transform $f^{\ast }$ is always convex. (When there are infinite values involved we can also say that $f^{\ast }$ is lower semi-continuous, but I’ll absorb that into my definition of convex for functions taking infinite values.)

Everything we’ve done for one-dimensional $\mathbb{R}$ easily generalizes to any finite dimensional real vector space $V$, where we should say ‘supporting hyperplane’ instead of ‘supporting line’. From that we get a transform between sets of functions
$$(\text{--})^{\ast }\colon \mathrm{Fun}(V,\overline{\mathbb{R}})\to \mathrm{Fun}(V^{\#},\overline{\mathbb{R}}),$$
where $V^{\#}$ is the vector space dual of $V$. Similarly, we have a reverse transform going the other way, which is traditionally also denoted with a star,
$$(\text{--})^{\ast }\colon \mathrm{Fun}(V^{\#},\overline{\mathbb{R}})\to \mathrm{Fun}(V,\overline{\mathbb{R}});$$
for $g\colon V^{\#}\to \overline{\mathbb{R}}$ we define
$$g^{\ast }(x)\coloneqq \sup _{k\in V^{\#}}\big\{ \langle k,x\rangle -g(k)\big\}.$$
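Because the transform is just a pointwise supremum, it is easy to approximate numerically. Below is a minimal Python sketch (my own illustration, not from the post), which discretizes the sup over a finite grid and recovers $f^{\ast }(k)=k^{2}/4-1/2$ for the running example $f(x)=x^{2}+1/2$:

import numpy as np

xs = np.linspace(-10, 10, 4001)   # finite grid standing in for the real line
f = xs**2 + 0.5                   # the running example f(x) = x^2 + 1/2

def fenchel(values, grid, duals):
    """Discrete Legendre-Fenchel transform: f*(k) = max over the grid of (k*x - f(x))."""
    return np.array([np.max(k * grid - values) for k in duals])

ks = np.linspace(-5, 5, 11)
f_star = fenchel(f, xs, ks)
print(np.allclose(f_star, ks**2 / 4 - 0.5, atol=1e-3))   # True: matches k^2/4 - 1/2

The only subtlety is that a finite grid can only see maximizers that lie inside it, so the dual variable has to be kept small enough that the true supremum is attained on the grid.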

This pair of transforms has some rather nice properties; for instance, both are order reversing. We can put a partial order on any set of functions to $\overline{\mathbb{R}}$ by defining $f_{1}\ge f_{2}$ if $f_{1}(x)\ge f_{2}(x)$ for all $x$. Then
$$f_{1}\ge f_{2} \quad \Rightarrow \quad f_{2}^{\ast }\ge f_{1}^{\ast }.$$
Also for any function $f$ we have
$$f^{\ast }=f^{\ast \ast \ast },$$
which implies that the operator $f\mapsto f^{\ast \ast }$ is idempotent:
$$f^{\ast \ast }=f^{\ast \ast \ast \ast }.$$
This means that $f\mapsto f^{\ast \ast }$ is a closure operation. What it actually does is take the convex envelope of $f$, which is the largest convex function less than or equal to $f$. Here’s an example.

graphs

This gives that if $f$ is already a convex function then $f^{\ast \ast }=f$. And as a consequence the Legendre-Fenchel transform and the reverse transform restrict to an order reversing bijection between convex functions on $V$ and convex functions on its dual $V^{\#}$:
$$\mathrm{Cvx}(V,\overline{\mathbb{R}})\cong \mathrm{Cvx}(V^{\#},\overline{\mathbb{R}}).$$
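To see the closure operation $f\mapsto f^{\ast \ast }$ at work, the same discrete transform can be applied twice to a non-convex function; the double-well example below is my own illustrative choice (reusing the fenchel function and the grid xs from the sketch above):

g = np.minimum((xs - 1)**2, (xs + 1)**2)   # a non-convex double well with minima at -1 and +1
g_star = fenchel(g, xs, xs)                # first transform, dual grid taken equal to xs
g_star_star = fenchel(g_star, xs, xs)      # the biconjugate g**

print(np.all(g_star_star <= g + 1e-9))                            # True: g** never exceeds g
print(np.allclose(g_star_star[np.abs(xs) <= 1], 0.0, atol=1e-2))  # True: flat between the wells

Between the two wells the biconjugate flattens out to zero, which is exactly the convex envelope; away from the dip it agrees with $g$ up to discretization and edge effects from the finite grid.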

There are many other things that can be said about the transform, such as Fenchel duality and the role it plays in optimization, but I don’t understand such things to my own satisfaction yet.

Next time I’ll explain how most of the above structure drops out of the nucleus construction in enriched category theory.

by willerton (S.Willerton@sheffield.ac.uk) at April 16, 2014 02:04 PM

arXiv blog

Hidden Vulnerability Discovered in the World's Airline Network

The global network of links between the world’s airports looks robust but contains a hidden weakness that could lead to entire regions of the planet being cut off.

April 16, 2014 02:00 PM

CERN Bulletin

Voice over IP phone calls from your smartphone
All CERN users have a Lync account (see here) and can use Instant Messaging, presence and other features. In addition, if your number is activated on the Lync IP Phone(1) system, then you can make standard phone calls from your computer (Windows/Mac).

Recently, we upgraded the infrastructure to Lync 2013. One of the major features is the possibility to make Voice over IP phone calls from a smartphone using your CERN standard phone number (not your mobile number!). Install Lync 2013 on iPhone/iPad, Android or Windows Phone, connect to a WiFi network and make phone calls as if you were in your office. There will be no roaming charges because you will be using WiFi to connect to the CERN phone system(2).

Register here for the presentation on Tuesday 29 April at 11 a.m. in the Technical Training Center and see the most exciting features of Lync 2013. Looking forward to seeing you! The Lync team

(1) How to register on the Lync IP Phone system: http://information-technology.web.cern.ch/book/lync-ip-phone-service/how-register

(2) People activated on the Lync IP Phone system can make Voice over IP phone calls from the Lync application.

April 16, 2014 09:39 AM

Lubos Motl - string vacua and pheno

Another anti-physics issue of SciAm
High energy physics is undoubtedly the queen and the ultimate reductionist root of all natural sciences. Nevertheless, during the last decade, it has become immensely fashionable for many people to boast that they're physics haters.

The cover of the upcoming May 2014 issue of Scientific American looks doubly scary for every physicist who has been harassed by the communist regime. It resembles a Soviet flag with some deeply misleading propaganda written over it:
A crisis in physics?

If supersymmetry doesn't pan out, scientists need a new way to explain the universe. [In between the lines]
Every part of this claim is pure bullshit, of course. First of all, there is no "crisis in physics". Second of all, chances are high that we won't be any more certain whether SUSY is realized in Nature. Either SUSY will be found at the LHC in 2015 or soon afterwards, or it won't be. In the latter case, the status of SUSY will remain qualitatively the same as it is now. Top-down theorists will continue to be pretty much certain that SUSY exists in Nature in one form or another, one scale or another; bottom-up phenomenologists and experimenters will increasingly notice the absence of evidence – which is something else than the evidence for absence, however.

But aside from this delusion, the second part of the second sentence is totally misguided, too. Supersymmetry isn't a "new way to explain the universe". It is another symmetry, one that differs from some other well-known symmetries such as the rotational or Lorentz symmetry by its having fermionic generators but one that doesn't differ when it comes to its being just one aspect of theories. Supersymmetry isn't a theory of the universe by itself (in the same sense as the Standard Model or string theory); supersymmetry is a feature of some candidate theories of the universe.




To be sure that the hostility is repetitive (a lie repeated 100 times becomes the truth, she learned from Mr Goebbels), editor-in-chief Ms Mariette DiChristina introduces the May 2014 issue under the following title:
Does Physics Have a Problem?
What does it even mean for a scientific discipline to have a problem? Claims in science are either right or wrong. Some theories turn out to be right (at least temporarily), others turn out to be wrong. Some theories are viable and compatible with the evidence, others have been falsified. Some scientists are authors of right and/or important and valuable theories, others are authors of wrong ones or no theories at all.

Some classes of questions are considered settled so they are not being researched as "hot topics" anymore; others are behind the frontier where the scientists don't know the answers (and sometimes the questions): they are increasingly confused by the questions behind the frontier. This separation of the realm of questions by a fuzzy frontier of ignorance is a feature of science that applies to every scientific discipline and every moment of its history. One could argue that there can't be "crises in physics" at all but it's doubly bizarre to use this weird word for the current era, which is as ordinary an era of normal science as one can get.




The main article about popular physics was written by experimenter Maria Spiropulu (CMS, Caltech) and phenomenologist Joseph Lykken (a self-described very smart guy at Fermilab). They're very interesting and sensible folks but I would have objections to many things they wrote down and I think that the same thing holds for most high energy physicists.

They say that most HEP physicists believe that SUSY is true but add:
Indeed, results from the first run of the LHC have ruled out almost all the best-studied versions of supersymmetry. The negative results are beginning to produce if not a full-blown crisis in particle physics, then at least a widespread panic. The LHC will be starting its next run in early 2015, at the highest energies it was designed for, allowing researchers at the ATLAS and CMS experiments to uncover (or rule out) even more massive superpartners. If at the end of that run nothing new shows up, fundamental physics will face a crossroads: either abandon the work of a generation for want of evidence that nature plays by our rules, or press on and hope that an even larger collider will someday, somewhere, find evidence that we were right all along…
I don't have – and I have never had – any strong preference concerning the masses of superpartners i.e. the accessibility of SUSY by the collider experiments. All of them could have been below \(100\GeV\) but they may be at \(100\TeV\) or near the GUT scale, too. Naturalness suggests that they (especially the top squarks, higgsinos, and perhaps gluinos) are closer to the Higgs mass but it is just a vague argument based on Bayesian reasoning that is moreover tied to some specific enough models. Any modification of the SUSY model changes the quantification of the fine-tuning.

But even if it doesn't, the word "natural" is a flexible adjective. If the amount of fine-tuning increases, the model doesn't become unnatural instantly. It is a gradual change. What I find preposterous is the idea presented by the authors that "if the 2015 LHC run finds no proof of SUSY, fundamental physics will face a crossroads; it will either abandon the work altogether or press for a bigger collider".

You can make a 2016 New Year's resolution and say that you will stop thinking about SUSY if there is no evidence from the LHC for SUSY by that time. You may even establish a sect within high energy physics that will share this New Year's resolution with you. But it is just a New Year's resolution, not a science or a decision "implied" by the evidence. There will be other people who will consider your group's New Year's resolution to be premature and just downright stupid. Physics isn't organized by deadlines or five-year plans.

Other people will keep on working on some SUSY models because these models will be attractive and compatible with all the evidence available at that moment. Even if SUSY were experimentally proven to require a 1-in-1,000 fine-tuning – and it really can't be due to the model-dependence of the fine-tuning scores – most people will still rationally think that a 1-in-1,000 fine-tuning is better than the 1-in-1,000,000,000,000,000,000,000,000,000,000 fine-tuning apparently required by the Standard Model. Maria and Joseph know that it is so. In fact, they explicitly mention the "prepared reaction" by Nima Arkani-Hamed that Nima presented in Santa Barbara:
What if supersymmetry is not found at the LHC, [Nima] asked, before answering his own question: then we will make new supersymmetry models that put the superpartners just beyond the reach of the experiments. But wouldn’t that mean that we would be changing our story? That’s okay; theorists don’t need to be consistent—only their theories do.
If SUSY looks attractive enough, of course phenomenologists will ignore the previous fashionable beliefs about the lightness of the superpartners and (invent and) focus on new models that are compatible with all the evidence at that moment. The relative fraction of hep-ph papers that are dedicated to SUSY model building may decrease in the case of the continuing absence of evidence, but only gradually, simply because there are no sufficiently major alternatives that could completely squeeze out SUSY research. There can't really be any paradigm shift if the status quo continues. You either need some new experimental discoveries or some new theoretical discoveries for a paradigm shift.
This unshakable fidelity to supersymmetry is widely shared. Particle theorists do admit, however, that the idea of natural supersymmetry is already in trouble and is headed for the dustbin of history unless superpartners are discovered soon…
The word "natural" has several meanings and the important differences between these meanings are being (deliberately?) obfuscated by this sentence. It is almost a tautology that any theory that ultimately describes Nature accurately is "natural". But as long as we are ignorant about all the details of the final theory and how it describes Nature, we must be satisfied with approximate and potentially treacherous but operationally applicable definitions of "naturalness". In effective field theory, we assume that the parameters (at the high energy scale) are more or less uniformly distributed in a set and classify very special, unlikely (by this probability distribution) regions as "unnatural" (typically very small values of some dimensionless parameters that could be of order one).

But the ultimate theory has different rules how to calculate the "probability distribution for the parameters". After all, string theory implies discrete values of all the parameters, so with some discrete information, we may sharpen the probability distribution for low-energy parameters to a higher-dimensional delta-function. We can just calculate the values of all the parameters. The values may be generic or natural according to some sensible enough smooth probability distribution (e.g. in an effective field theory). But if the effective field theory description overlooks some important new particles, interactions, patterns, or symmetries, it may be unnatural, too.

It's important to realize that our ways to estimate whether some values of parameters in some theories are natural are model-dependent and therefore bound to evolve. It is just completely wrong for Maria and Joseph to impose some ideas about physics from some year – 2000 or whatever is the "paradigm" they want everyone to be stuck at – and ban any progress of the thinking. Scientists' thinking inevitably evolves. That's why the scientific research is being done in the first place. So new evidence – including null results – is constantly being taken into account as physicists are adjusting their subjective probabilities of various theories and models, and of various values of parameters within these models.

This process will undoubtedly continue in 2015 and 2016 and later, too. At least, sensible people will continue to adjust their beliefs. If you allow me to say a similar thing to what Nima did: theorists are not only allowed to present theories that are incompatible with some of their previous theories or beliefs. They are really obliged to adjust their beliefs – and even at one moment, a sensible enough theorist may (and perhaps should) really be thinking about many possible theories, models, and paradigms. Someone whose expectations turn out to be more accurate and nontrivially agreeing with the later observations should become more famous than others. But it is not a shame to update the probabilities of theories according to the new evidence. It's one of the basic duties that a scientist has to do!

I also feel that the article hasn't taken the BICEP2 results into account and for those reasons, it will already be heavily obsolete when the issue of Scientific American is out. They try to interpret the null results from the LHC as an argument against grand unification or similar physics at the GUT scale. But nothing like that follows from the null results at the LHC and in fact, the BICEP2's primordial gravitational waves bring us quite powerful evidence – if not a proof – that new interesting physics is taking place near the usual GUT scale i.e. not so far from the standard four-dimensional Planck scale.

So in the absence of the SM-violating collider data, the status quo will pretty much continue and the only other way to change it is to propose some so far overlooked alternative paradigm to SUSY that will clarify similar puzzles – or at least a comparable number of puzzles – as SUSY. It is totally plausible that bottom-up particle model builders will have to work with the absence of new collider discoveries – top-down theorists have worked without them for decades, anyway. It works and one can find – and string theorists have found – groundbreaking things in this way, too.

What I really dislike about the article is that – much like articles by many non-physicists – it tries to irrationally single out SUSY as a scapegoat. Even if one should panic about the null results from the LHC, and one shouldn't, these results would be putting pressure on every model or theory or paradigm of bottom-up physics that goes beyond the Standard Model. In fact, SUSY theories are still among the "least constrained ones" among all paradigms that try to postulate some (motivated by something) new physics at low enough energy scales. That's the actual reason why the events cannot rationally justify the elimination or severe reduction of SUSY research as a percentage of hep-ph research.

If someone thinks that it's pointless to do physics without new guaranteed enough experimental discoveries and this kind of physics looks like a "problem" or "crisis" to him or her, he or she should probably better leave physics. Those who would be left are looking for more than just the superficial gloss and low-hanging fruits. The number of HEP experimenters and phenomenologists building their work on a wishful thinking of many collider discoveries in the near future is arguably too high, anyway. But there are other, more emotion-independent approaches to physics that are doing very well.

by Luboš Motl (noreply@blogger.com) at April 16, 2014 09:37 AM

Peter Coles - In the Dark

Interlude

I’m taking a  short holiday over the Easter break and probably won’t be blogging until I get back, primarily because I won’t have an internet connection where I’m going. That’s a deliberate decision, by the way. So, as the saying goes, there will now follow a short intermission….

 

 


by telescoper at April 16, 2014 07:17 AM

Emily Lakdawalla - The Planetary Society Blog

The End of Opportunity and the Burden of Success
The Opportunity rover and the Lunar Reconnaissance Orbiter are both zeroed out in NASA's 2015 budget. Learn why these missions face the axe and why the White House is forcing NASA to choose between existing missions and starting new ones.

April 16, 2014 01:19 AM

April 15, 2014

Quantum Diaries

Ten things you might not know about particle accelerators

A version of this article appeared in symmetry on April 14, 2014.

From accelerators unexpectedly beneath your feet to a ferret that once cleaned accelerator components, symmetry shares some lesser-known facts about particle accelerators. Image: Sandbox Studio, Chicago

The Large Hadron Collider at CERN laboratory has made its way into popular culture: Comedian Jon Stewart jokes about it on The Daily Show, character Sheldon Cooper dreams about it on The Big Bang Theory and fictional villains steal fictional antimatter from it in Angels & Demons.

Despite their uptick in popularity, particle accelerators still have secrets to share. With input from scientists at laboratories and institutions worldwide, symmetry has compiled a list of 10 things you might not know about particle accelerators.

There are more than 30,000 accelerators in operation around the world.

Accelerators are all over the place, doing a variety of jobs. They may be best known for their role in particle physics research, but their other talents include: creating tumor-destroying beams to fight cancer; killing bacteria to prevent food-borne illnesses; developing better materials to produce more effective diapers and shrink wrap; and helping scientists improve fuel injection to make more efficient vehicles.

One of the longest modern buildings in the world was built for a particle accelerator.

Linear accelerators, or linacs for short, are designed to hurl a beam of particles in a straight line. In general, the longer the linac, the more powerful the particle punch. The linear accelerator at SLAC National Accelerator Laboratory, near San Francisco, is the largest on the planet.

SLAC’s klystron gallery, a building that houses components that power the accelerator, sits atop the accelerator. It’s one of the world’s longest modern buildings. Overall, it’s a little less than 2 miles long, a feature that prompts laboratory employees to hold an annual footrace around its perimeter.

Particle accelerators are the closest things we have to time machines, according to Stephen Hawking.

In 2010, physicist Stephen Hawking wrote an article for the UK paper the Daily Mail explaining how it might be possible to travel through time. We would just need a particle accelerator large enough to accelerate humans the way we accelerate particles, he said.

A person-accelerator with the capabilities of the Large Hadron Collider would move its passengers at close to the speed of light. Because of the effects of special relativity, a period of time that would appear to someone outside the machine to last several years would seem to the accelerating passengers to last only a few days. By the time they stepped off the LHC ride, they would be younger than the rest of us.

Hawking wasn’t actually proposing we try to build such a machine. But he was pointing out a way that time travel already happens today. For example, particles called pi mesons are normally short-lived; they disintegrate after mere millionths of a second. But when they are accelerated to nearly the speed of light, their lifetimes expand dramatically. It seems that these particles are traveling in time, or at least experiencing time more slowly relative to other particles.

The highest temperature recorded by a manmade device was achieved in a particle accelerator.

In 2012, Brookhaven National Laboratory’s Relativistic Heavy Ion Collider achieved a Guinness World Record for producing the world’s hottest manmade temperature, a blazing 7.2 trillion degrees Fahrenheit. But the Long Island-based lab did more than heat things up. It created a small amount of quark-gluon plasma, a state of matter thought to have dominated the universe’s earliest moments. This plasma is so hot that it causes elementary particles called quarks, which generally exist in nature only bound to other quarks, to break apart from one another.

Scientists at CERN have since also created quark-gluon plasma, at an even higher temperature, in the Large Hadron Collider.

The inside of the Large Hadron Collider is colder than outer space.

In order to conduct electricity without resistance, the Large Hadron Collider’s electromagnets are cooled down to cryogenic temperatures. The LHC is the largest cryogenic system in the world, and it operates at a frosty minus 456.3 degrees Fahrenheit. It is one of the coldest places on Earth, and it’s even a few degrees colder than outer space, which tends to rest at about minus 454.9 degrees Fahrenheit.
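
As a quick check of those Fahrenheit figures, using the standard values of about 1.9 kelvin for the LHC magnets and roughly 2.7 kelvin for empty space (numbers assumed here, not stated in the article), the conversion works out as follows.

    def kelvin_to_fahrenheit(t_kelvin):
        # Standard conversion: T[F] = T[K] * 9/5 - 459.67
        return t_kelvin * 9.0 / 5.0 - 459.67

    print(kelvin_to_fahrenheit(1.9))  # about -456.3 F: LHC magnet operating temperature
    print(kelvin_to_fahrenheit(2.7))  # about -454.8 F: the temperature of deep space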

Nature produces particle accelerators much more powerful than anything made on Earth.

We can build some pretty impressive particle accelerators on Earth, but when it comes to achieving high energies, we’ve got nothing on particle accelerators that exist naturally in space.

The most energetic cosmic ray ever observed was a proton accelerated to an energy of 300 million trillion electronvolts. No known source within our galaxy is powerful enough to have caused such an acceleration. Even the shockwave from the explosion of a star, which can send particles flying much more forcefully than a manmade accelerator, doesn’t quite have enough oomph. Scientists are still investigating the source of such ultra-high-energy cosmic rays.
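
To put that number in everyday units, here is a rough conversion (my own, not from the article): 300 million trillion electronvolts is about 48 joules, roughly the kinetic energy of a hard-thrown baseball, carried by a single proton.

    EV_TO_JOULE = 1.602176634e-19       # joules per electronvolt

    cosmic_ray_energy_ev = 3e20         # "300 million trillion electronvolts"
    lhc_proton_energy_ev = 6.5e12       # 6.5 TeV per proton after the LHC upgrade

    print(cosmic_ray_energy_ev * EV_TO_JOULE)            # ~48 J in one particle
    print(cosmic_ray_energy_ev / lhc_proton_energy_ev)   # ~5e7 times an LHC proton's energy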

Particle accelerators don’t just accelerate particles; they also make them more massive.

As Einstein predicted in his theory of relativity, no particle that has mass can travel as fast as the speed of light—about 186,000 miles per second. No matter how much energy one adds to an object with mass, its speed cannot reach that limit.

In modern accelerators, particles are sped up to very nearly the speed of light. For example, the main injector at Fermi National Accelerator Laboratory accelerates protons to 0.99997 times the speed of light. As the speed of a particle gets closer and closer to the speed of light, an accelerator gives more and more of its boost to the particle’s kinetic energy.

Since, as Einstein told us, an object’s energy is equal to its mass times the speed of light squared (E = mc²), adding energy is, in effect, also increasing the particles’ mass. Said another way: Where there is more “E,” there must be more “m.” As an object with mass approaches, but never reaches, the speed of light, its effective mass gets larger and larger.
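
As a rough illustration of the numbers involved (my own estimate, not a statement from the article), the quoted Main Injector speed corresponds to a Lorentz factor of

    \gamma = \frac{1}{\sqrt{1 - v^2/c^2}} = \frac{1}{\sqrt{1 - (0.99997)^2}} \approx 129 ,

so each proton carries a total energy E = \gamma m_p c^2 \approx 129 \times 938\ \mathrm{MeV} \approx 120\ \mathrm{GeV}; in the loose language above, its effective mass is about 129 times its rest mass.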

The diameter of the first circular accelerator was shorter than 5 inches; the diameter of the Large Hadron Collider is more than 5 miles.

In 1930, inspired by the ideas of Norwegian engineer Rolf Widerøe, 27-year-old physicist Ernest Lawrence created the first circular particle accelerator at the University of California, Berkeley, with graduate student M. Stanley Livingston. It accelerated hydrogen ions up to energies of 80,000 electronvolts within a chamber less than 5 inches across.

In 1931, Lawrence and Livingston set to work on an 11-inch accelerator. The machine managed to accelerate protons to just over 1 million electronvolts, a fact that Livingston reported to Lawrence by telegram with the added comment, “Whoopee!” Lawrence went on to build even larger accelerators—and to found Lawrence Berkeley and Lawrence Livermore laboratories.

Particle accelerators have come a long way since then, creating brighter beams of particles with greater energies than previously imagined possible. The Large Hadron Collider at CERN is more than 5 miles in diameter (17 miles in circumference). After this year’s upgrades, the LHC will be able to accelerate protons to 6.5 trillion electronvolts.

In the 1970s, scientists at Fermi National Accelerator Laboratory employed a ferret named Felicia to clean accelerator parts.

From 1971 until 1999, Fermilab’s Meson Laboratory was a key part of high-energy physics experiments at the laboratory. To learn more about the forces that hold our universe together, scientists there studied subatomic particles called mesons and protons. Operators would send beams of particles from an accelerator to the Meson Lab via a miles-long underground beam line.

To ensure hundreds of feet of vacuum piping were clear of debris before connecting them and turning on the particle beam, the laboratory enlisted the help of one Felicia the ferret.

Ferrets have an affinity for burrowing and clambering through holes, making them the perfect species for this job. Felicia’s task was to pull a rag dipped in cleaning solution on a string through long sections of pipe.

Although Felicia’s work was eventually taken over by a specially designed robot, she played a unique and vital role in the construction process—and in return asked only for a steady diet of chicken livers, fish heads and hamburger meat.

Particle accelerators show up in unlikely places.

Scientists tend to construct large particle accelerators underground. This protects them from being bumped and destabilized, but can also make them a little harder to find.

For example, motorists driving down Interstate 280 in northern California may not notice it, but the main accelerator at SLAC National Accelerator Laboratory runs underground just beneath their wheels.

Residents in villages in the Swiss-French countryside live atop the highest-energy particle collider in the world, the Large Hadron Collider.

And for decades, teams at Cornell University have played soccer, football and lacrosse on Robison Alumni Fields 40 feet above the Cornell Electron Storage Ring, or CESR. Scientists use the circular particle accelerator to study compact particle beams and to produce X-ray light for experiments in biology, materials science and physics.

Sarah Witman

by Fermilab at April 15, 2014 09:34 PM

Symmetrybreaking - Fermilab/SLAC

Ten things you might not know about particle accelerators

From accelerators unexpectedly beneath your feet to a ferret that once cleaned accelerator components, symmetry shares some lesser-known facts about particle accelerators.

The Large Hadron Collider at CERN laboratory has made its way into popular culture: Comedian Jon Stewart jokes about it on The Daily Show, character Sheldon Cooper dreams about it on The Big Bang Theory and fictional villains steal fictional antimatter from it in Angels & Demons.

by Sarah Witman at April 15, 2014 07:59 PM

astrobites - astro-ph reader's digest

A Faint Black Hole?

  • Title: Swift J1357.2-0933: the faintest black hole?
  • Authors: M. Armas Padilla, R. Wijnands, N. Degenaar, T. Munoz-Darias, J. Casares, & R.P. Fender
  • First Author’s Institution: University of Amsterdam
  • Paper Status: Submitted to MNRAS

X-ray binaries are stellar systems that are luminous in the x-ray portion of the spectrum. Matter from one star (typically on the Main Sequence) is transferred onto a more massive white dwarf, neutron star, or stellar-mass black hole. This accretion gives rise to the high x-ray luminosities because the infalling matter converts gravitational potential energy to kinetic energy, which is then radiated away as x-rays. Astronomers subdivide these systems into low-mass x-ray binaries (LMXB) and high-mass x-ray binaries (HMXB). This distinction depends on the mass of the star that is donating mass to the other component.
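
A standard back-of-the-envelope relation (not spelled out in the post, but useful for orientation) captures why accretion onto compact objects is such an efficient x-ray source: gas falling from far away onto an object of mass M and radius R releases gravitational energy at a rate

    L_{\mathrm{acc}} \sim \frac{G M \dot{M}}{R} ,

where \dot{M} is the mass accretion rate. For a neutron star or black hole this corresponds to an efficiency of order ten percent of \dot{M} c^2, far more energy per unit of accreted mass than nuclear fusion can provide.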

Occasionally these systems can begin to transfer relatively large amounts of mass, changing their mass transfer rate by up to a few orders of magnitude. However, they are typically found in a lower mass-transferring state referred to as quiescence. Since these systems appear to spend most of their time in these quiescent states, it is common to compare various systems using parameters determined when they were in this state.

The authors of today’s paper investigate the nature of one particular black hole LMXB, called Swift J1357.2-0933. They observed the source with the XMM-Newton satellite, which can observe between 0.1 and 15 keV. Concurrent observations were taken in the optical to confirm the source was in a quiescent state by comparing the magnitude to previously known values.

Figure 1: The x-ray luminosity plotted against the orbital period for neutron star (red stars) and black hole (black circles) x-ray binaries. The source studied in this paper is shown as the white circle. The grey box shows the orbital period and luminosity uncertainty given the uncertainty in the distance. The crossed black circle represents the only other currently known black hole binary around the theoretical switching point for the luminosity-period relation. (From Armas Padilla 2014)

The x-ray spectrum can be used to determine the x-ray luminosity of the source, if you know the distance, through the flux-luminosity relation. Determining the luminosity is complicated by the large uncertainty in the distance. Previous studies have placed this object between 0.5 and 6 kpc, though 1.5 kpc is the commonly quoted distance. Assuming a distance of 1.5 kpc, the authors determine this source to be the faintest BH LMXB known, based on x-ray luminosity. If it is further away, its luminosity would be comparable to that of the other faintest BH sources. Figure 1 shows how the distance uncertainty affects the luminosity (grey box).
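
The relation in question is just the inverse-square law, L = 4\pi d^2 F. A short sketch (illustrative only; the flux value below is made up, not the measured one) shows how strongly the quoted distance range feeds into the luminosity:

    import math

    KPC_IN_CM = 3.086e21  # centimetres per kiloparsec

    def luminosity(flux_cgs, distance_kpc):
        """L = 4 * pi * d^2 * F, with the distance converted from kpc to cm."""
        d_cm = distance_kpc * KPC_IN_CM
        return 4.0 * math.pi * d_cm ** 2 * flux_cgs

    flux = 1e-14  # erg / s / cm^2 -- a made-up illustrative flux
    for d_kpc in (0.5, 1.5, 6.0):  # the distance range quoted for Swift J1357.2-0933
        print(d_kpc, luminosity(flux, d_kpc))

    # Because L scales as d^2, the factor-of-12 spread in distance (0.5 to 6 kpc)
    # corresponds to a factor of 12**2 = 144 spread in the inferred luminosity.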

A more precise distance will help constrain theories about the behavior of BH binary sources. Binary sources lose angular momentum due to gravitational radiation, causing the orbital period to decrease. It has been predicted that as the orbital period decreases, the luminosity likewise decreases, but only down to a certain period, below which the luminosity should increase again. There are not yet enough known and measured BH binary sources to fully test this theoretical prediction. The object studied in this paper sits near the period at which the luminosity is thought to switch. There is only one other black hole binary at a similar period, shown as the crossed black circle in Figure 1. Accurately determining the distance will allow a more accurate luminosity and help establish whether, and at what period, this switch occurs.

This system has the potential to provide an important constraint on our understanding of low-mass x-ray binaries and their evolution over time. The most important task to reach this potential is to determine the distance more accurately, which can be done by obtaining more optical photometry and spectroscopy of the companion main-sequence star.

by Josh Fuchs at April 15, 2014 06:39 PM

Peter Coles - In the Dark

Magnetic River

I stumbled across this yesterday as a result of an email from a friend who shall remain nameless (i.e. Anton). I remember seeing Prof. Eric Laithwaite on the television quite a few times when I was a kid. What I found so interesting about watching this so many years later is that it’s still so watchable and compelling. No frills, no gimmicks, just very clear explanation and demonstrations, reinforced by an aura of authoritativeness that makes you want to listen to him. If only more modern science communication were as direct as this.  I suppose part of the appeal is that he speaks with an immediately identifiable no-nonsense accent, from the part of the Midlands known as Lancashire….

 

 


by telescoper at April 15, 2014 04:18 PM

Sean Carroll - Preposterous Universe

Talks on God and Cosmology

Hey, remember the debate I had with William Lane Craig, on God and Cosmology? (Full video here, my reflections here.) That was on a Friday night, and on Saturday morning the event continued with talks from four other speakers, along with responses by WLC and me. At long last these Saturday talks have appeared on YouTube, so here they are!

First up was Tim Maudlin, who usually focuses on philosophy of physics but took the opportunity to talk about the implications of God’s existence for morality. (Namely, he thinks there aren’t any.)

Then we had Robin Collins, who argued for a new spin on the fine-tuning argument, saying that the universe is constructed to allow for it to be discoverable.

Back to Team Naturalism, Alex Rosenberg explains how the appearance of “design” in nature is well-explained by impersonal laws of physics.

Finally, James Sinclair offered thoughts on the origin of time and the universe.

To wrap everything up, the five of us participated in a post-debate Q&A session.

Enough debating for me for a while! Oh no, wait: on May 7 I’ll be in New York, debating whether there is life after death. (Spoiler alert: no.)

by Sean Carroll at April 15, 2014 03:16 PM

Lubos Motl - string vacua and pheno

Podcast with Lisa Randall on inflation, Higgs, LHC, DM, awe
I want to offer you yesterday's 30-minute podcast by Huffington Post's David Freeman with Lisa Randall of Harvard
Podcast with Randall (audio over there)
The audio format is thanks to RobinHoodRadio.COM.

They talk about inflation, the BICEP2 discovery, the Higgs boson vs the Higgs field, the LHC, its tunnels, and the risk that the collider would create deadly black holes.




I think her comments are great, I agree with virtually everything, including the tone.




Well, I am not sure what she means by the early inflationary models' looking contrived but that's just half a sentence of a minor disagreement – which may become a major one, of course, if some people focus on this topic.

She is asked about the difference between the Big Bang and inflation, and about the Higgs boson vs. the Higgs field (which gives masses to other particles). The host asks about the size of the LHC; it is sort of bizarre because photographs of the LHC have been everywhere in the media and are very accessible, so why would one ask about the size of the tunnel again?

The host also said that there would be "concerns" that the LHC would have created a hungry black hole that would devour our blue, not green planet. I liked Lisa's combative reply: the comment had to be corrected. There were concerns but only among the people who didn't have a clue. The actual calculations of safety – something that scientists are sort of obliged to perform before they do an experiment – end up with the result that we're safe as the rate of such accidents is lower than "one per the age of the universe". It's actually much lower than that but even that should be enough.

They also talk about the multiverse. Lisa says that she's not among those who are greatly interested in multiverse ideas – she's more focused on things we can measure – but of course there may be other universes. Just because we haven't seen them doesn't mean that they don't exist. (She likes to make the same point about dark matter.)

What is at the edge of the universe? She explains that a compact space – a balloon – is free of such troubles. The host says the usual thing that laymen always do: the balloon is expanding into something, some preexisting space. But in the case of the universe, there is simply nothing outside it, Lisa warns. The balloon is the whole story. I have some sympathy for this problem of the laymen because when I was 8, I also had the inclination to imagine that the curved spacetime of general relativity (from popular articles and TV shows) had to be embedded into some larger, flat one. But this temptation went away a year or so later. Riemannian geometry is meant to describe "all of space" and it allows curvature. Embedding the space into a higher-dimensional flat one is a way (and not the only way) to visualize the curvature, but these extra "crutches" are not necessarily physical. And in fact, they are not physical in our real universe.

Now, is dark matter the same thing as antimatter? Based on the frequency at which I have heard this question, I believe that every third layman must be asking the very same question. So Lisa has to say that antimatter is charged and qualitatively behaves just like ordinary matter – and they annihilate – while dark matter has to be new. Is dark matter made of black holes? Every 10th layman has this idea. It's actually an a priori viable one that needs some discussion. One has to look for "small astrophysical objects as dark matter". They would cause some gravitational lensing which is not seen.

So what is dark energy? It's not localizable "stuff"; dark energy is smoothly spread everywhere. Absolute energy matters, as Einstein found out, and the cosmological constant accelerates the expansion of the universe. Can the experiments find dark energy and dark matter? Not dark energy, but possibly dark matter. It could be a bigger deal than the Higgs boson.

The LHC is being upgraded and will reopen for collision business in about a year. No one believes that the Higgs boson is everything there is, but it is not clear that the other things are within the LHC's reach.

Lisa is now working on dark matter. Lots of theoretical ideas. Dark matter with a more strongly interacting component.

What is it good for? The electron seemed to be useless, too. So there may be unexpected applications. But applications are not the main motivation. She is also asked about being religious. She is not religious and for her, science isn't about the "sense of awe". So she is not religious even in the most general sense. Ultimately, science wants to understand things that clarify the "awe", that make the magnificent things look accessible. It is about solving puzzles and the satisfaction arises from the understanding, from the feeling that things fit together.

The host says that because she writes popular books, she must present the "sense of wonder". Lisa protests again. My books are about science, not the awe! :-) There is clearly a widespread feeling among the laymen that scientists are obliged to lick the buttocks of the stupid laymen in some particular ways. To constantly "admit" (more precisely, to constantly lie) that science knows nothing and spread religious feelings. But scientists are not obliged to do any of these things and in fact, they shouldn't do these things. A good popular book is one that attracts the reader into genuine science – the organized process of learning the truth about Nature – and that communicates some correct science (principles, methods, or results) to the readers. If science implies that the people who are afraid of the destruction of the world by the LHC are imbeciles, and be sure that science does imply that, a good popular scientific book must nicely articulate this point. A good popular scientific book is not one that reinforces the reader's spiritual or even anti-scientific preconceptions (although the book that does reinforce them may be generously praised by the stupid readers and critics).

Is it possible to convey the science without maths? Lisa tends to answer Yes because she appreciates classical music although she has never studied it. But she could still learn something about it from the books, although less than the professional musicians. So it doesn't have to be "all or nothing". People still learn some science even if they don't learn everything. And readers of her book, she believes, may come from many layers and learn the content to various degrees of depth and detail.

There's lots of talk about America's falling behind in STEM fields. LOL, exactly, there is a lot of talk, Lisa replies. 50 years ago, people were inspired by the space research. But the host tries to suggest that there is nothing inspiring in physics or science now or something like that. Lisa says that there are tons of awe-inspiring things – perhaps too many.

What is the most awe-inspiring fact, Lisa is asked? She answers that it's the sheer body and scope of all the things we have understood in the last century or so. What used to be called nebulae turned out to be whole galaxies, the host marvels. Lisa talks about such cosmological insights for a while.



Incidentally, on Sunday, we finally went to Pilsner Techmania's 3D planetarium. We watched the Astronaut 3D program (trailed above: a movie about all the training that astronauts undergo and dangers awaiting them during the spaceflight) plus a Czech program on the spring sky above Pilsen (constellations and some ancient stories about them: I was never into it much and I am still shaking my head whenever someone looks at 9/15 stars/dots and not only determines that it is a human but also that its gender is female and even that she has never had sex before – that was the Virgo constellation, if you couldn't tell). Technically, I was totally impressed how Techmania has tripled or quadrupled (with the planetarium) in the last 6 months. The 3D glasses look robust and cool although they're based on a passive color system only. Things suddenly look very clean and modern (a year ago, Techmania would still slightly resemble the collapsing Škoda construction halls in Jules Verne's Steel City after a global nuclear war LOL).

On the other hand, I am not quite sure whether the richness of the spiritual charge of the content fully matches the generous superficial appearance (which can't hide that lots of money has clearly gone into it). There were many touch-sensitive tabletop displays in Techmania (e.g. one where you could move photographs of the Milky Way, a woman, and a few more from one side – X-ray spectrum – to the other side – radio waves – and see what it looks like), the "science on sphere" projection system, and a few other things (like a model of a rocket which can shoot something up; a gyroscope with many degrees of freedom for young astronauts to learn how to vomit; scales where you can see how much you weigh on the Moon and all the planets of the Solar System, including fake models of steel weights with apparently varying weights). I haven't seen the interiors of the expanded Techmania proper yet (there is a cool simple sundial before you enter the reception). Also, I think that the projectors in the 3D fulldome could be much stronger (more intense), the pictures were pretty dark relatively to how I remember cinemas. The 3D cosmos-oriented science movies will never be just like Titanic – one can't invest billions into things with limited audiences – but I still hope that they will make some progress because to some extent, these short programs looked like a "proof of a concept" rather than a full-fledged complete experience that should compete with regular movie theaters, among other sources of (less scientific) entertainment. I suppose that many more 3D fulldomes have to be built before the market with the truly impressive programs becomes significant.

by Luboš Motl (noreply@blogger.com) at April 15, 2014 01:12 PM

CERN Bulletin

CERN Bulletin Issue No. 16-17/2014
Link to e-Bulletin Issue No. 16-17/2014. Link to all articles in this issue.

April 15, 2014 09:32 AM

Peter Coles - In the Dark

Bearded Bishop Brentwood welcomed but too late for Beard of Spring poll

telescoper:

I’m still way behind John Brayford (who he?), but there’s definitely signs of a bounce! The Deadline is 19th April. Vote for me!

 

Originally posted on Kmflett's Blog:

Beard Liberation Front
PRESS RELEASE 14th April
Contact Keith Flett 07803 167266
Bearded Bishop Brentwood welcomed but too late for Spring Beard poll

The Beard Liberation Front, the informal network of beard wearers that campaigns against beardism, has welcomed the news that the Pope on Monday appointed Fr Alan Williams FM as the Bishop of Brentwood, but says that his appointment is too late for inclusion in the Beard of Spring 2014 poll, which concludes on Friday.

The campaigners say that they are certain that the distinguished Bishop will feature in future

The big issue in the days left for voting is whether current leader Sheffield United footballer John Brayford did enough in his team’s defeat to Hull in Sunday’s FA Cup semi-final to take the title or whether challengers such as cosmologist Peter Coles and Editor of the I Paper Olly Duff can catch him

The Beard of Spring…

View original 136 more words


by telescoper at April 15, 2014 07:57 AM

Clifford V. Johnson - Asymptotia

Beautiful Randomness
Spotted in the hills while out walking. Three chairs left out to be taken, making for an enigmatic gathering at the end of a warm Los Angeles Spring day... I love this city. -cvj Click to continue reading this post

by Clifford at April 15, 2014 04:30 AM

astrobites - astro-ph reader's digest

Forming Stars in the Stream

Title: Recent Star Formation in the Leading Arm of the Magellanic Stream
Authors: Dana I. Casetti-Dinescu, Christian Moni Bidin, Terrence M. Girard, Rène A. Mèndez, Katherine Vieira, Vladimir I. Korchagin, and William F. van Altena
First Author’s Institution: Dept. of Physics, Southern Connecticut State University, New Haven, CT; Astronomy Dept., Yale University, New Haven, CT.
Paper Status: Accepted for publication in ApJ Letters

Our galaxy is not alone. I don’t just mean that there are other galaxies in the Universe, but that there are other galaxies sitting right at our doorstep. The Milky Way is but one of 54 galaxies in our local group (Andromeda is one of these). Many of these galaxies are smaller, dwarf galaxies, and two of the closest and largest of these galaxies are the Small and Large Magellanic Clouds (visible to the naked eye if you happen to live in the Southern Hemisphere). These two clouds are thought to be in their first orbit around the Milky Way galaxy, but have been interacting with each other for quite some time. Through tidal interactions, these two clouds have produced large streams of gas that can be seen in radio emission as shown in Fig. 1. The long tail behind the two Magellanic Clouds is called the Magellanic Stream (MS), the gas connecting the two is known as the bridge, and the gas above and to the right is the leading arm (LA).

Fig. 1: Shown here is a composite image of our Milky Way (optical, center) and the large stream of gas associated with the Magellanic Clouds (radio, red). The Small and Large Magellanic clouds can be seen as the small and large bright spots towards the lower right, connected to each other by a bridge of gas. Behind these galaxies is a long tail known as the Magellanic Stream. Ahead is the branched, leading arm. (Credit: Nidever et al., NRAO/AUI/NSF and Meilinger, Leiden-Argentine-Bonn Survey, Parkes Observatory, Westerbork Observatory, Arecibo Observatory)

Stars in the Stream

Astronomers expect the interactions between these two clouds and each other, and their interactions with our Milky Way, to induce star formation within the gas streams. The authors investigate this possibility by looking for young, hot, massive stars in the leading arm (again, to the right in Fig. 1). The authors find 42 stars that could be young stars associated with the leading arm. However, determining exactly what these objects are, and whether or not they are associated with the leading arm itself is not an easy task. The authors obtain spectra for each of these objects  in order to identify the stellar type and their physical location.

Fig. 2 shows a color map of the neutral hydrogen in the leading arm (whose different sections are labelled LAI to LAIV) overlaid with the candidate stars as crosses; the stars are spread between three separate groups labelled A, B, and C. Not all of the candidate stars will be young stars associated with the leading arm. The authors identify contaminating foreground stars by examining the composition of the stars obtained through the spectra. Many stars (22, the white circles in Fig. 2) are in fact foreground stars that turned out to either be small dwarf stars (subdwarf B stars to be exact) or white dwarfs. The rest of the stars (green circles) are the young, hot stars the authors sought. Of these, the authors confirmed that 6 were kinematically (based upon their positions and velocities) associated with the leading arm; these are denoted by red boxes in Fig. 2. These stars have temperatures ranging from 16,000 K to 17,200 K. 5 of these stars are young B stars, and 1 is a subdwarf B star. Given the kinematics of these stars, the authors rule out the possibility that this group of stars are “runaway stars” that actually came from the Milky Way disc.

Fig. 2: The leading arm shown in neutral hydrogen along with the 42 candidate stars (crosses). The four segments of the leading arm are marked by LA I – LA IV, and the three groups of stars by A, B, and C. Each star identified as a foreground star is marked with a white circle, while the green circles denote young, hot O stars. Red boxes indicated definite association with the leading arm, while the black star marks the youngest, hottest star in the sample. (Credit: Casetti-Dinescu et. al. 2014)

One of these things is not like the other…

Of their entire sample, the authors find a very special star, marked by the black star in Fig. 2. This is an O-type star (O6V to be exact), with a temperature of 43,700 K and a mass of around 40 solar masses. It is far younger and far hotter than anything else in the sample. At this temperature and mass, the star has a lifetime on the order of 1-2 million years. Given this very short lifetime, the authors rule out the possibility that this star came from the Milky Way (it would have to be at least 385 million years old if it did). In addition, they rule out the possibility that it came from the Large Magellanic Cloud, as it would have to have an unrealistically large velocity to move from the Large Magellanic Cloud to where it is currently located.
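
A rough way to see where such a short lifetime comes from (my own estimate, using a common textbook scaling rather than anything in the paper): a star's main-sequence lifetime scales roughly as

    t_{\mathrm{MS}} \sim 10\ \mathrm{Gyr} \left( \frac{M}{M_\odot} \right)^{-2.5} ,

so for M \approx 40\, M_\odot one gets t \approx 10\ \mathrm{Gyr} / 40^{2.5} \approx 1\ \mathrm{Myr}, consistent with the 1-2 million year figure quoted above.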

Because of this, the authors conclude that they have discovered, for the first time, a star that formed very recently within the leading arm of the Magellanic Stream. This young, hot star was born out of the interactions between the Milky Way galaxy and the two Magellanic Clouds, but exists completely independently of them.

 

by Andrew Emerick at April 15, 2014 03:33 AM

April 14, 2014

Clifford V. Johnson - Asymptotia

Total Lunar Eclipse!
There is a total eclipse of the moon tonight! It is also at not too inconvenient a time (relatively speaking) if you're on the West Coast. The eclipse begins at 10:58pm (Pacific) and gets to totality by 12:46am. This is good timing for me since I'd been meaning to set up the telescope and look at the moon recently anyway, and a full moon can be rather bright. Now there'll be a natural filter in the way, indirectly - the earth! There's a special event up at the Griffith Observatory if you are interested in making a party out of it. It starts at 7:00pm and you can see more about the [...] Click to continue reading this post

by Clifford at April 14, 2014 09:23 PM

Peter Coles - In the Dark

White in the moon the long road lies

White in the moon the long road lies,
The moon stands blank above;
White in the moon the long road lies
That leads me from my love.

Still hangs the hedge without a gust,
Still, still the shadows stay:
My feet upon the moonlit dust
Pursue the ceaseless way.

The world is round, so travellers tell,
And straight though reach the track,
Trudge on, trudge on, ’twill all be well,
The way will guide one back.

But ere the circle homeward hies
Far, far must it remove:
White in the moon the long road lies
That leads me from my love.

by A.E. Housman (1859-1936)

 


by telescoper at April 14, 2014 07:40 PM

Andrew Jaffe - Leaves on the Line

&ldquo;Public Service Review&rdquo;?

A few months ago, I received a call from someone at the “Public Service Review”, supposedly a glossy magazine distributed to UK policymakers and influencers of various stripes. The gentleman on the line said that he was looking for someone to write an article for his magazine giving an example of what sort of space-related research was going on at a prominent UK institution, to appear opposite an opinion piece written by Martin Rees, president of the Royal Society.

This seemed harmless enough, although it wasn’t completely clear what I (or the Physics Department, or Imperial College) would get out of it. But I figured I could probably knock something out fairly quickly. However, he told me there was a catch: it would cost me £6000 to publish the article. And he had just ducked out of his editorial meeting in order to find someone to agree to write the article that very afternoon. Needless to say, in this economic climate, I didn’t have an account with an unused £6000 in it, especially for something of dubious benefit. (On the other hand, astrophysicists regularly publish in journals with substantial page charges.) It occurred to me that this could be a scam, although the website itself seems legitimate (though no one I spoke to knew anything about it).

I had completely forgotten about this until this week, when another colleague in our group at Imperial told me he had received the same phone call, from the same organization, with the same details: article to appear opposite Lord Rees’; short deadline; large fee.

So, this is beginning to sound fishy. Has anyone else had any similar dealings with this organization?

Update: It has come to my attention that one of the comments below was made under a false name, in particular the name of someone who actually works for the publication in question, so I have removed the name, and will likely remove the comment unless the original writer comes forward with more, and truthful, information (which I will not publish without permission). I have also been informed of the possibility that some of the other comments below may come from direct competitors of the publication. These, too, may be removed in the absence of further confirming information.

Update II: In the further interest of hearing both sides of the discussion, I would like to point out the two comments from staff at the organization giving further information as well as explicit testimonials in their favor.

by Andrew at April 14, 2014 06:41 PM

The n-Category Cafe

universo.math

A new Spanish language mathematical magazine has been launched: universo.math. Hispanophones should check out the first issue! There are some very interesting looking articles which cover areas from art through politics to research-level mathematics.

The editor-in-chief is my mathematical brother Jacob Mostovoy and he wants it to be a mix of Mathematical Intelligencer, Notices of the AMS and the New Yorker, together with less orthodox ingredients; the aim is to keep the quality high.

Besides Jacob, the contributors to the first issue that I recognise include Alberto Verjovsky, Ernesto Lupercio and Edward Witten, so universo.math seems to be off to a high quality start.

by willerton (S.Willerton@sheffield.ac.uk) at April 14, 2014 05:16 PM

Matt Strassler - Of Particular Significance

A Lunar Eclipse Overnight

Overnight, those of you in the Americas and well out into the Pacific Ocean, if graced with clear skies, will be able to observe what is known as “a total eclipse of the Moon” or a “lunar eclipse”. The Moon’s color will turn orange for about 80 minutes, with mid-eclipse occurring simultaneously in all the areas in which the eclipse is visible: 3:00-4:30 am for observers in New York, 12:00- 1:30 am for observers in Los Angeles, and so forth. [As a bonus, Mars will be quite near the Moon, and about as bright as it gets; you can't miss it, since it is red and much brighter than anything else near the Moon.]

Since the Moon is so bright, you will be able to see this eclipse from even the most light-polluted cities. You can read more details of what to look for, and when to look for it in your time zone, at many websites, such as http://www.space.com/25479-total-lunar-eclipse-2014-skywatching-guide.html  However, many of them don’t really explain what’s going on.

One striking thing that’s truly very strange about the term “eclipse of the Moon” is that the Moon is not eclipsed at all. The Moon isn’t blocked by anything; it just becomes less bright than usual. It’s the Sun that is eclipsed, from the Moon’s point of view. See Figure 1. To say this another way, the terms “eclipse of the Sun” and “eclipse of the Moon”, while natural from the human-centric perspective, hide the fact that they really are not analogous. That is, the role of the Sun in a “solar eclipse” is completely different from the role of the Moon in a “lunar eclipse”, and the experience on Earth is completely different. What’s happening is this:

  • a “total eclipse of the Sun” is an “eclipse of the Sun by the Moon that leaves a shadow on the Earth.”
  • a “total eclipse of the Moon” is an “eclipse of the Sun by the Earth that leaves a shadow on the Moon.”

In a total solar eclipse, lucky humans in the right place at the right time are themselves, in the midst of broad daylight, cast into shadow by the Moon blocking the Sun. In a total lunar eclipse, however, it is the entire Moon that is cast into shadow; we, rather than being participants, are simply observers at a distance, watching in our nighttime as the Moon experiences this shadow. For us, nothing is eclipsed, or blocked; we are simply watching the effect of our own home, the Earth, eclipsing the Sun for Moon-people.

Fig. 1: In a “total solar eclipse”, a small shadow is cast by the Moon upon the Earth; at that spot the Sun appears to be eclipsed by the Moon. In a “total lunar eclipse”, the Earth casts a huge shadow across the entire Moon; on the near side of the Moon, the Sun appears to be eclipsed by the Earth.   The Moon glows orange because sunlight bends around the Earth through the Earth’s atmosphere; see Figure 2.  Picture is not to scale; the Sun is 100 times the size of the Earth, and much further away than shown.

Simple geometry, shown in Figure 1, assures that the first type of eclipse always happens at “new Moon”, i.e., when the Moon would not be visible in the Earth’s sky at night. Meanwhile the second type of eclipse, also because of geometry, only occurs on the night of the “full Moon”, when the entire visible side of the Moon is (except during an eclipse) in sunlight. Only then can the Earth block the Sun, from the Moon’s point of view.

A total solar eclipse — an eclipse of the Sun by the Moon, as seen from the Earth — is one of nature’s most spectacular phenomena. [I am fortunate to speak from experience; put this on your bucket list.] That is both because we ourselves pass into darkness during broad daylight, creating an amazing light show, and even more so because, due to an accident of geometry, the Moon and Sun appear to be almost the same size in the sky: the Moon, though 400 times closer to the Earth than the Sun, happens to be just about 400 times smaller in radius than the Sun. What this means is that the Sun’s opaque bright disk, which is all we normally see, is almost exactly blocked by the Moon; but this allows the dimmer (but still bright!) silvery corona of the Sun, and the pink prominences that erupt off the Sun’s apparent “surface”, to become visible, in spectacular fashion, against a twilight sky. (See Figure 2.) This geometry also implies, however, that the length of time during which any part of the Earth sees the Sun as completely blocked is very short — not more than a few minutes — and that very little of the Earth’s surface actually goes into the Moon’s shadow (see Figure 1).

No such accident of geometry affects an “eclipse of the Moon”. If you were on the Moon, you would see the Earth in the sky as several times larger than the Sun, because the Earth, though about 400 times closer to the Moon than is the Sun, is only about 100 times smaller in radius than the Sun. Thus, the Earth in the Moon’s sky looks nearly four times as large, from side to side (and 16 times as large in apparent area) as does the Moon in the Earth’s sky.  (In short: Huge!) So when the Earth eclipses the Sun, from the Moon’s point of view, the Sun is thoroughly blocked, and remains so for as much as a couple of hours.
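
The factor of four follows from the small-angle rule that apparent size goes as radius divided by distance (a standard relation, not spelled out in the post):

    \frac{\theta_{\mathrm{Earth\ from\ Moon}}}{\theta_{\mathrm{Sun\ from\ Moon}}} \approx \frac{R_{\mathrm{Earth}}}{R_{\mathrm{Sun}}} \cdot \frac{d_{\mathrm{Sun}}}{d_{\mathrm{Moon}}} \approx \frac{1}{100} \times 400 = 4 ,

while the same exercise for the Moon and Sun as seen from Earth gives roughly (1/400) \times 400 \approx 1, which is why a total solar eclipse is such a near-perfect fit.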

But that’s not to say there’s no light show; it’s just a very different one. The Sun’s light refracts through the Earth’s atmosphere, bending around the earth, such that the Earth’s edge appears to glow bright orange or red (depending on the amount of dust and cloud above the Earth.) This ring of orange light amid the darkness of outer space must be quite something to behold! Thus the Moon, instead of being lit white by direct sunlight, is lit by the unmoonly orange glow of this refracted light. The orange light then reflects off the Moon’s surface, and some travels back to Earth — allowing us to see an orange Moon. And we can see this from any point on the Earth for which the Moon is in the sky — which, during a full Moon, is (essentially) anyplace where the Sun is down.  That’s why anyone in the Americas and eastern Pacific Ocean can see this eclipse, and why we all see it simultaneously [though, since we're in different time zones, our clocks don't show the same hour.]

Since lunar eclipses (i.e. watching the Moon move into the Earth’s shadow) can be seen simultaneously across any part of the Earth where it is dark during the eclipse, they are common. I have seen two lunar eclipses at dawn, one at sunset, and several in the dark of night; I’ve seen the moon orange, copper-colored, and, once, blood red. If you miss one total lunar eclipse due to clouds, don’t worry; there will be more. But a total solar eclipse (i.e. standing in the shadow of the Moon) can only be seen and appreciated if you’re actually in the Moon’s shadow, which affects, in each eclipse, only a tiny fraction of the Earth — and often a rather inaccessible fraction. If you want to see one, you’ll almost certainly have to plan, and travel. My advice: do it.  Meanwhile, good luck with the weather tonight!


Filed under: Astronomy Tagged: astronomy

by Matt Strassler at April 14, 2014 05:08 PM

Peter Coles - In the Dark

Matzo Balls

This evening sees the start of the Jewish Festival of the Passover (Pesach) which made me think of posting this piece of inspired silliness by the legendary Slim Gaillard to wish you all a Chag Sameach.

Slim Gaillard was a talented musician in his own right, but also a wonderful comedian and storyteller. He’s most famous for the novelty jazz acts he formed with musicians such as Slam Stewart and, later, Bam Brown; their stream of consciousness vocals ranged far afield from the original lyrics along with wild interpolations of nonsense syllables such as MacVoutie and O-reeney; one such performance figures in the 1957 novel On the Road by Jack Kerouac.

In later life Slim Gaillard travelled a lot in Europe – he could speak 8 languages in addition to English – and spent long periods living in London. He died there, in fact, in 1991, aged 75. I saw him a few times myself when I used to go regularly to Ronnie Scott’s Club. A tall, gangly man with a straggly white beard and wonderful gleam in his eye, he cut an unmistakeable figure in the bars and streets of Soho. He rarely had to buy himself a drink as he was so well known and such an entertaining fellow that a group always formed around him  in order to enjoy his company whenever he went into a pub. You never quite knew what he was going to do next, in fact. I once saw him sit down and play a piano with his palms facing upwards, striking the notes with the backs of his fingers. Other random things worth mentioning are that Slim Gaillard’s daughter was married to Marvin Gaye and it is generally accepted that the word “groovy” was coined by him (Slim). I know it’s a cliché, but he really was a larger-than-life character and a truly remarkable human being.

They don’t make ‘em like Slim any more, but you can get a good idea of what a blast he was by listening to this record, which is bound to bring a smile even to the  most crabbed of faces….

 

 

 

 

 


by telescoper at April 14, 2014 05:06 PM

Symmetrybreaking - Fermilab/SLAC

CERN's LHCb experiment sees exotic particle

An analysis using LHC data verifies the existence of an exotic four-quark hadron.

Last week, the Large Hadron Collider experiment LHCb published a result confirming the existence of a rare and exotic particle. This particle breaks the traditional quark model and is a “smoking gun” for a new class of hadrons.

The Belle experiment at the KEK laboratory in Japan had previously announced the observation of such a particle, but it came into question when data from sister experiment BaBar at SLAC laboratory in California did not back up the result.

Now scientists at both the Belle and BaBar experiments consider the discovery confirmed by LHCb.

by Sarah Charley at April 14, 2014 05:02 PM

Emily Lakdawalla - The Planetary Society Blog

Pretty picture: Sunset over Gale crater
Imagine yourself on a windswept landscape of rocks and red dust with mountains all around you. The temperature -- never warm on this planet -- suddenly plunges, as the small Sun sets behind the western range of mountains.

April 14, 2014 03:38 PM

arXiv blog

How to Detect Criminal Gangs Using Mobile Phone Data

Law enforcement agencies are turning to social network theory to better understand the behaviors and habits of criminal gangs.


The study of social networks is providing dramatic insights into the nature of our society and how we are connected to one another. So it’s no surprise that law enforcement agencies want to get in on the act.

April 14, 2014 02:00 PM

ZapperZ - Physics and Physicists

Learn Quantum Mechanics From Ellen DeGeneres
Hey, why not? :)



There isn't much "quantum mechanics" in here, though; it's more about black holes and general relativity. Oh well!

Zz.

by ZapperZ (noreply@blogger.com) at April 14, 2014 01:34 PM

ZapperZ - Physics and Physicists

Science Is Running Out Of Things To Discover?
John Horgan is spewing out the same garbage again in his latest opinion piece (and yes, I'm not mincing my words here). His latest lob into this controversy is the so-called evidence that, in physics, the time between the original work and the eventual Nobel prize is getting longer, and thus his claim that physics, especially "fundamental physics", is running out of things to discover.

In their brief Nature letter, Fortunato and co-authors do not speculate on the larger significance of their data, except to say that they are concerned about the future of the Nobel Prizes. But in an unpublished paper called "The Nobel delay: A sign of the decline of Physics?" they suggest that the Nobel time lag "seems to confirm the common feeling of an increasing time needed to achieve new discoveries in basic natural sciences—a somewhat worrisome trend."

This comment reminds me of an essay published in Nature a year ago, "After Einstein: Scientific genius is extinct." The author, psychologist Dean Keith Simonton, suggested that scientists have become victims of their own success. "Our theories and instruments now probe the earliest seconds and farthest reaches of the universe," he writes. Hence, scientists may produce no more "momentous leaps" but only "extensions of already-established, domain-specific expertise." Or, as I wrote in The End of Science, "further research may yield no more great revelations or revolutions, but only incremental, diminishing returns."
So, haven't we learned anything from the history of science? The last time people thought that we knew all there was to know about an area of physics, and that all we could do was make incremental additions to our understanding of it, was pre-1985, just before Mother Nature smacked us right in the face with the discovery of high-Tc superconductors.

There is a singular problem with this opinion piece. It equates "fundamental physics" with elementary particle/high energy/cosmology/string/etc. This neglects the fact that (i) the Higgs mechanism came out of condensed matter physics, (ii) "fundamental" understanding of various aspects of quantum field theory and other exotica such as Majorana fermions and magnetic monopole are coming out of condensed matter physics, (iii) the so-called "fundamental physics" doesn't have a monopoly on the physics Nobel prizes. It is interesting that Horgan pointed out the time lapse between the theory and Nobel prizes for superfluidity (of He3), but neglected the short time frame between discovery and the Nobel prize for graphene, or high-Tc superconductors.

As we learn more and more, the problems that remain and the new ones that pop up become more and more difficult to decipher and observe. Naturally, this makes confirmation and acceptance up to the level of a Nobel prize lengthier, both in terms of peer-reviewed evaluation and in time. But this metric does NOT tell us whether we are running out of things to discover. Anyone who has done scientific research can tell you that as you try to solve something, other puzzling things pop up! I can guarantee you that the act of trying to solve the Dark Energy and Dark Matter problems will provide us with MORE puzzling observations, even if we solve those two. That has always been the pattern in scientific discovery since human beings first began trying to decipher the world around us! In fact, I would say that we have far more open questions now than before, because we have so many amazing instruments that keep giving us puzzling and unexpected results.

Unfortunately, Horgan seems to dismiss whole areas of physics as being unimportant and not "fundamental".

Zz.

by ZapperZ (noreply@blogger.com) at April 14, 2014 01:26 PM

Quantum Diaries

Moriond 2014: new results, new explorations… but no new physics

Even before I left for La Thuile (Italy), results from the Rencontres de Moriond were already filling the news feeds. This year's electroweak session, from 15 to 22 March, opened with the first "world measurement" of the top quark mass, based on the combination of the measurements published so far by the Tevatron and LHC experiments. The week continued with a spectacular CMS result on the width of the Higgs.

Even though it is approaching its 50th anniversary, the Moriond conference has stayed at the cutting edge. Despite the growing number of must-attend conferences in high-energy physics, Moriond keeps a special place in the community, partly for historical reasons: the conference has existed since 1966 and has established itself as the place where theorists and experimentalists come to see and be seen. Let's look at what the LHC experiments had in store for us this year…

New results

This year, the highlight of the show at Moriond was of course the announcement of the best limit to date on the width of the Higgs, < 17 MeV at 95% confidence, presented at both Moriond sessions by the CMS experiment. The new measurement, obtained with a new analysis method based on Higgs decays to two Z particles, is about 200 times more precise than previous ones. Discussion of this limit focused mainly on the new analysis method. What assumptions were needed? Could the same technique be applied to a Higgs decaying to two W bosons? How would this new width influence theoretical models of new physics? We will no doubt find out at Moriond next year…

The announcement of the first joint world result for the top quark mass also generated great enthusiasm. This result, which pools data from the Tevatron and the LHC, is the best value to date worldwide, at 173.34 ± 0.76 GeV/c². Before the excitement had died down at the Moriond QCD session, CMS announced a new preliminary result based on the full dataset collected at 7 and 8 TeV. On its own, this result has a precision that rivals the world average, which clearly shows that we have not yet reached the ultimate precision on the top quark mass.

This plot shows the four measurements of the top quark mass published by the ATLAS, CDF, CMS and D0 collaborations, together with the most precise measurement to date obtained from the joint analysis.
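
For readers curious how such a combination works in principle, here is a deliberately simplified sketch of an inverse-variance weighted average. The real Tevatron+LHC combination also handles correlated systematic uncertainties, which this toy ignores, and the input numbers below are invented for illustration rather than the actual measurements.

    def combine(measurements):
        """Inverse-variance weighted average of (value, uncertainty) pairs,
        assuming the uncertainties are uncorrelated."""
        weights = [1.0 / sigma ** 2 for _, sigma in measurements]
        value = sum(w * m for (m, _), w in zip(measurements, weights)) / sum(weights)
        sigma = (1.0 / sum(weights)) ** 0.5
        return value, sigma

    # Four hypothetical top-mass inputs in GeV/c^2 (illustrative only):
    toy_inputs = [(173.2, 0.9), (172.9, 1.2), (173.6, 1.1), (174.0, 1.5)]
    print(combine(toy_inputs))  # one combined value, with a smaller uncertainty than any single input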

Other news on the top quark, including new precise LHC measurements of its spin and polarisation, as well as new ATLAS results on the single top quark cross-section in the t-channel, was presented by Kate Shaw on Tuesday 25 March. Run II of the LHC will further deepen our understanding of the subject.

A fundamental and delicate measurement that probes the nature of electroweak symmetry breaking, as realised by the Brout-Englert-Higgs mechanism, is the scattering of two massive vector bosons. This process is rare, but without the Higgs boson its rate would grow strongly with collision energy, to the point of violating the laws of physics. A first hint of the scattering of electroweak vector bosons was detected by ATLAS in events involving two leptons of the same charge and two jets with a large rapidity separation.

Relying on the growing data volume and improved analyses, the LHC experiments are tackling rare and difficult multi-particle final states involving the Higgs boson. ATLAS presented an excellent example, with a new result in the search for Higgs production in association with two top quarks, with the Higgs decaying to a pair of b quarks. With an expected limit of 2.6 times the Standard Model prediction for this channel alone and an observed relative signal strength of 1.7 ± 1.4, there are high hopes for the future high-energy running of the LHC, where the rate of this process will increase.

Meanwhile, in the world of heavy flavour, the LHCb experiment presented further analyses of the exotic state X(3872). The experiment unambiguously confirmed that its J^PC quantum numbers are 1++ and showed evidence for its decay to ψ(2S)γ.

The study of the quark-gluon plasma continues in the ALICE experiment, and discussions focused mostly on results from the LHC's proton-lead (p-Pb) run. In particular, the newly observed "double ridge" in p-Pb collisions is being studied in detail, and analyses of its jet peak, its mass distribution and its charge dependence were presented.

New explorations

Thanks to our new understanding of the Higgs boson, the LHC has entered the era of precision Higgs physics. Our knowledge of the Higgs properties – for example, measurements of its spin and width – has improved, and precise measurements of Higgs interactions and decays have also made good progress. Results on searches for physics beyond the Standard Model were presented as well, and the LHC experiments continue to invest heavily in the search for supersymmetry.

On the Higgs sector front, many researchers hope to find the supersymmetric cousins of the Higgs and the electroweak bosons, called neutralinos and charginos, via electroweak processes. ATLAS presented two new papers summarising multiple searches for these particles. The absence of a significant signal was used to set exclusion limits on charginos and neutralinos of 700 GeV – if they decay via intermediate supersymmetric lepton partners – and 420 GeV – when they decay only via Standard Model bosons.

Moreover, for the first time, ATLAS carried out a search for the electroweak mode that is hardest to observe, the production of a pair of charginos decaying into W bosons. This mode resembles Standard Model W-pair production, whose currently measured rate appears slightly higher than expected.

In this context, CMS presented new results in the search for the electroweak pair production of higgsinos, through their decay into a Higgs (at 125 GeV) and a nearly massless gravitino. The final state shows a characteristic signature of four b-quark jets, consistent with double Higgs decay kinematics. A slight excess in the number of candidate events means that the experiment cannot exclude a higgsino signal. Upper limits on the signal strength of about twice the theoretical prediction are set for higgsino masses between 350 and 450 GeV.

In several supersymmetry scenarios, charginos can be metastable and could potentially be detected as long-lived particles. CMS presented an innovative search for generic long-lived charged particles, carried out by mapping the detection efficiency as a function of the particle's kinematics and its energy loss in the tracker. This study not only sets tight limits on various supersymmetric models that predict a chargino lifetime (c·τ) greater than 50 cm, but also provides the theory community with a powerful tool for independently testing new models predicting long-lived charged particles.

In order to be as general as possible in the search for supersymmetry, CMS also presented the results of new searches in which a large subset of the supersymmetry parameters, such as the gluino and squark masses, is tested for statistical compatibility with different experimental measurements. This made it possible to produce a probability map in a 19-dimensional space. Notably, this map shows that models predicting gluino masses below 1.2 TeV and sbottom and stop masses below 700 GeV are strongly disfavoured.

but no new physics

Despite all these meticulous searches, the words heard most often at Moriond were "no excess observed" and "consistent with the Standard Model". All hopes now rest on the next LHC run, at 13 TeV. If you would like to know more about the prospects opened up by the LHC's second run, see the CERN Bulletin article "La vie est belle à 13 TeV".

In addition to the many LHC results presented, news also came to Moriond from the Tevatron experiments, BICEP, RHIC and other experiments. To find out more, see the conference websites: Moriond EW and Moriond QCD.

by CERN (Francais) at April 14, 2014 01:25 PM

Emily Lakdawalla - The Planetary Society Blog

Interview with a Mars Explorer
A conversation with Dr. Sarah Milkovich, HiRISE Investigation Scientist.

April 14, 2014 01:03 PM

Quantum Diaries

On the Shoulders of…

My first physics class wasn’t really a class at all. One of my 8th grade teachers noticed me carrying a copy of Kip Thorne’s Black Holes and Time Warps, and invited me to join a free-form book discussion group on physics and math that he was holding with a few older students. His name was Art — and we called him by his first name because I was attending, for want of a concise term that’s more precise, a “hippie” school. It had written evaluations instead of grades and as few tests as possible; it spent class time on student governance; and teachers could spend time on things like, well, discussing books with a few students without worrying about whether it was in the curriculum or on the tests. Art, who sadly passed some years ago, was perhaps best known for organizing the student cafe and its end-of-year trip, but he gave me a really great opportunity. I don’t remember learning anything too specific about physics from the book, or from the discussion group, but I remember being inspired by how wonderful and crazy the universe is.

My second physics class was combined physics and math, with Dan and Lewis. The idea was to put both subjects in context, and we spent a lot of time working through how to approach problems that we didn’t know an equation for. The price of this was less time to learn the full breadth of the subjects; I didn’t really learn any electromagnetism in high school, for example.

When I switched to a new high school in 11th grade, the pace changed. There were a lot more things to learn, and a lot more tests. I memorized elements and compounds and reactions for chemistry. I learned calculus and studied a bit more physics on the side. In college, where the physics classes were broad and in depth at the same time, I needed to learn things fast and solve tricky problems too. By now, of course, I’ve learned all the physics I need to know — which is largely knowing who to ask or which books to look in for the things I need but don’t remember.

There are a lot of ways to run schools and to run classes. I really value knowledge, and I think it’s crucial in certain parts of your education to really buckle down and learn the facts and details. I’ve also seen the tremendous worth of taking the time to think about how you solve problems and why they’re interesting to solve in the first place. I’m not a high school teacher, so I don’t think I can tell the professionals how to balance all of those goods, which do sometimes conflict. What I’m sure of, though, is that enthusiasm, attention, and hard work from teachers is a key to success no matter what is being taught. The success of every physicist you will ever see on Quantum Diaries is built on the shoulders of the many people who took the time to teach and inspire them when they were young.

by Seth Zenz at April 14, 2014 12:25 PM

Lubos Motl - string vacua and pheno

Andrei Linde: universe or multiverse?
Some time ago, before the BICEP2 discovery (in July 2012, weeks after the Higgs discovery), Andrei Linde gave an 82-minute talk at SETI, a center to search for ETs.



Because Linde and his theories – even some more specific theories – seem to be greatly vindicated by the BICEP2 announcement, it may be interesting to listen to his more general ideas about the subject. Linde is a pretty entertaining speaker – the audience is laughing often, too.




He starts with jokes about the word "principle" and comments about the cosmological principle, the uniformity principle, the big bang theory, possible global shapes of the universe and fates of the expanďing universe, and so on.




Linde employs plain English – with his cute sofťish Russian accent – to clarify many stupid questions. Why are so many people doing SEŤI? Why is the universe so large? Why is energy not conserved in cosmology?

But he ultimately gets to the multiverse and other controversial topics near the cutting edge. Amusingly enough, Linde mentions Hawking's old proposal to explain the uniformity of the universe anthropically. If it were non-uniform, it would become lethally non-uniform, and we couldn't live here and ask stupid questions. Except that Linde shows that Hawking's explanation doesn't really work and there is a more satisfying one, anyway.

Linde is surprised that the simple solutions for inflation etc. were only understood so recently, 30 years ago or so. He spends some time explaining why the young universe was red (he is from Russia) or black (Henry Ford: I didn't quite understand this remark on Ford, LOL, but the final point is that a largely expanded universe looks color uniform even if it is not). Linde prefers to believe in the multiverse (containing inequivalent vacua) because diversity is more generic.

At the end, he talked about the cosmological observations as a "time machine", the fractal nature of the universe, the cosmological mutation arising from the landscape etc. Some of his humor is childishly cute. The regions of the multiverse are separated not by border patrols but by domain walls and if you are young enough, energetic, and stupid, you go through the wall and die. ;-) Around 53:00, string theory is finally discussed, with the claim that there are 10^500 colors of the universe. KKLT. Users of iPhone are parts of the silicon life created in the Silicon Valley.

Guth made a comment about the free lunch, and the Soviet man Linde was deeply impressed by the free lunches. So he improved inflation into the eternal feast where all possible dishes are served. ;-)

During the talk, Linde says lots of philosophical things about verification of theories etc. He knew inflation was right but he didn't expect that proofs would be found. So he was amazed by the experimenters. Concerning the "unfalsifiability" claims, he debunks them by saying that not even the U.S. courts work in this way. For example, a murder (of his wife) suspect is not given a new wife and a knife to repeatedly try whether he would kill her again. ;-) They just eliminate options and release a verdict. But reasoning doesn't require repeatable experiments.

Around 1:10:00, he spends some time with funny musings about Einstein's "the most incomprehensible thing about the Universe is that it is comprehensible", Wigner's "incredibly efficient mathematics", and some comments about the unexpectedly inefficient biology. Those things are explained anthropically as tautologies, too. Physicists can't exist at places where physics doesn't work etc. That's nice except that millions of things we have already understood also have a better, less tautological, more unequivocal, and more nontrivial explanation, and the same may be true for many currently unexplained patterns in Nature, too.

Questions begin at 1:12:55. Someone is puzzled whether Linde is for or against the anthropic reasoning. He is against the non-inflationary anthropic arguments. In inflation, things are different. He says that 10^500 options is much better than the single 1 candidate on the Soviet ballots. ;-) In the second question, he explains that we know the theory of structure formation that produces the right filaments etc.; the small non-flatness of the spectrum is important in that, too. Someone with a seemingly similar Russian accent asks whether the initial wave function of the universe applies just to our universe or the whole multiverse. I think that Linde didn't understand the question so he talked about the many-world interpretation of quantum mechanics (just an interpretation, not a key insight etc.; MWI ignores the key role of conscious observers in QM, and so on; I completely agree with Linde here, even though he is answering a wrong question). The man asks the question whether entanglement between particles in 2 universes can exist. Linde says it can but he says it can exist on 2 islands. However, the entanglement behind the cosmic horizon may be unphysical due to the cosmic horizon complementarity principle, I would add.

At any rate, a fun talk.

by Luboš Motl (noreply@blogger.com) at April 14, 2014 11:44 AM

Tommaso Dorigo - Scientificblogging

Aldo Menzione And The Design Of The Silicon Vertex Detector
Below is a clip from a chapter of my book where I describe the story of the silicon microvertex detector of the CDF experiment. CDF collected proton-antiproton collisions from the Tevatron collider in 1985, 1987-88, 1992-96, and 2001-2011. Run 1A occurred in 1992, and it featured for the first time in a hadron collider a silicon strip detector, the SVX. The SVX would prove crucial for the discovery of the top quark.

read more

by Tommaso Dorigo at April 14, 2014 09:21 AM

John Baez - Azimuth

What Does the New IPCC Report Say About Climate Change? (Part 5)

guest post by Steve Easterbrook

(5) Current rates of ocean acidification are unprecedented.

The IPCC report says:

The pH of seawater has decreased by 0.1 since the beginning of the industrial era, corresponding to a 26% increase in hydrogen ion concentration. [...] It is virtually certain that the increased storage of carbon by the ocean will increase acidification in the future, continuing the observed trends of the past decades. [...] Estimates of future atmospheric and oceanic carbon dioxide concentrations indicate that, by the end of this century, the average surface ocean pH could be lower than it has been for more than 50 million years.

(Fig SPM.7c) CMIP5 multi-model simulated time series from 1950 to 2100 for global mean ocean surface pH. Time series of projections and a measure of uncertainty (shading) are shown for scenarios RCP2.6 (blue) and RCP8.5 (red). Black (grey shading) is the modelled historical evolution using historical reconstructed forcings. [The numbers indicate the number of models used in each ensemble.]

Ocean acidification has sometimes been ignored in discussions about climate change, but it is a much simpler process, and is much easier to calculate (notice the uncertainty range on the graph above is much smaller than most of the other graphs). This graph shows the projected acidification in the best and worst case scenarios (RCP2.6 and RCP8.5). Recall that RCP8.5 is the “business as usual” future.
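The “26% increase in hydrogen ion concentration” quoted above follows directly from the logarithmic definition of pH. Here is a minimal sketch of that arithmetic (an illustration only, not a calculation from the report):

```python
def hydrogen_ion_increase(delta_ph):
    """Fractional increase in [H+] for a given drop in pH.

    pH = -log10([H+]), so lowering the pH by delta_ph multiplies
    the hydrogen ion concentration by 10**delta_ph.
    """
    return 10 ** delta_ph - 1

# A 0.1 drop in pH since the start of the industrial era:
print(f"{hydrogen_ion_increase(0.1):.0%}")  # prints 26%, matching the SPM figure
```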

Note that this doesn’t mean the ocean will become acidic. The ocean has always been slightly alkaline—well above the neutral value of pH 7. So “acidification” refers to a drop in pH, rather than a drop below pH 7. As this continues, the ocean becomes steadily less alkaline. Unfortunately, as the pH drops, the ocean stops being supersaturated for calcium carbonate. If it’s no longer supersaturated, anything made of calcium carbonate starts dissolving. Corals and shellfish can no longer form their shells and skeletons. If you kill these off, the entire ocean food chain is affected. Here’s what the IPCC report says:

Surface waters are projected to become seasonally corrosive to aragonite in parts of the Arctic and in some coastal upwelling systems within a decade, and in parts of the Southern Ocean within 1–3 decades in most scenarios. Aragonite, a less stable form of calcium carbonate, undersaturation becomes widespread in these regions at atmospheric CO2 levels of 500–600 ppm.


You can download all of Climate Change 2013: The Physical Science Basis here. It’s also available chapter by chapter here:

  1. Front Matter
  2. Summary for Policymakers
  3. Technical Summary
    1. Supplementary Material

Chapters

  1. Introduction
  2. Observations: Atmosphere and Surface
    1. Supplementary Material
  3. Observations: Ocean
  4. Observations: Cryosphere
    1. Supplementary Material
  5. Information from Paleoclimate Archives
  6. Carbon and Other Biogeochemical Cycles
    1. Supplementary Material
  7. Clouds and Aerosols

    1. Supplementary Material
  8. Anthropogenic and Natural Radiative Forcing
    1. Supplementary Material
  9. Evaluation of Climate Models
  10. Detection and Attribution of Climate Change: from Global to Regional
    1. Supplementary Material
  11. Near-term Climate Change: Projections and Predictability
  12. Long-term Climate Change: Projections, Commitments and Irreversibility
  13. Sea Level Change
    1. Supplementary Material
  14. Climate Phenomena and their Relevance for Future Regional Climate Change
    1. Supplementary Material

Annexes

  1. Annex I: Atlas of Global and Regional Climate Projections
    1. Supplementary Material: RCP2.6, RCP4.5, RCP6.0, RCP8.5
  2. Annex II: Climate System Scenario Tables
  3. Annex III: Glossary
  4. Annex IV: Acronyms
  5. Annex V: Contributors to the WGI Fifth Assessment Report
  6. Annex VI: Expert Reviewers of the WGI Fifth Assessment Report

by John Baez at April 14, 2014 07:56 AM

Andrew Jaffe - Leaves on the Line

Academic Blogging Still Dangerous?

Nearly a decade ago, blogging was young, and its place in the academic world wasn’t clear. Back in 2005, I wrote about an anonymous article in the Chronicle of Higher Education, a so-called “advice” column admonishing academic job seekers to avoid blogging, mostly because it let the hiring committee find out things that had nothing whatever to do with their academic job, and reject them on those (inappropriate) grounds.

I thought things had changed. Many academics have blogs, and indeed many institutions encourage it (here at Imperial, there’s a College-wide list of blogs written by people at all levels, and I’ve helped teach a course on blogging for young academics). More generally, outreach has become an important component of academic life (that is, it’s at least necessary to pay it lip service when applying for funding or promotions) and blogging is usually seen as a useful way to reach a wide audience outside of one’s field.

So I was distressed to see the lament — from an academic blogger — “Want an academic job? Hold your tongue”. Things haven’t changed as much as I thought:

… [A senior academic said that] the blog, while it was to be commended for its forthright tone, was so informal and laced with profanity that the professor could not help but hold the blog against the potential faculty member…. It was the consensus that aspiring young scientists should steer clear of such activities.

Depending on the content of the blog in question, this seems somewhere between a disregard for academic freedom and a judgment of the candidate on completely irrelevant grounds. Of course, it is natural to want the personalities of our colleagues to mesh well with our own, and almost impossible to completely ignore supposedly extraneous information. But we are hiring for academic jobs, and what should matter are research and teaching ability.

Of course, I’ve been lucky: I already had a permanent job when I started blogging, and I work in the UK system which doesn’t have a tenure review process. And I admit this blog has steered clear of truly controversial topics (depending on what you think of Bayesian probability, at least).

by Andrew at April 14, 2014 06:26 AM

April 13, 2014

Clifford V. Johnson - Asymptotia

Participatory Art!
As you know I am a big fan of sketching, and tire easily of the remark people make that they "can't draw" - an almost meaningless thing society trains most people to think and say, with the result that they miss out on a most wonderful part of life. Sketching is a wonderful way of slowing down and really looking at the world around you and recording your impressions. If you can write you certainly can draw. It is a learned skill, and the most important part of it is learning to look*. But anyway, I was pleased to see this nice way of getting people of all ages involved in sketching for fun at the book festival! So I reached in and grabbed a photo for you. [...] Click to continue reading this post

by Clifford at April 13, 2014 09:14 PM

Clifford V. Johnson - Asymptotia

Young Author Meeting!
It is nice to see the variety of authors at a book fair event like this one, and it's great to see people's enthusiasm about meeting people who've written works they've spent a lot of time with. The long lines for signings are remarkable! As you might guess, I'm very much a supporter of the unsung authors doing good work in their own small way, not anywhere near the spotlight. An interesting booth caught my notice as I was wandering... The word "science" caught my eye. Seems that a mother and daughter team wrote a science book to engage children to become involved in science... Hurrah! So Jalen Langie (the daughter, amusingly wearing a lab coat) gets to be [...] Click to continue reading this post

by Clifford at April 13, 2014 06:08 PM

The Great Beyond - Nature blog

IPCC report calls for climate mitigation action now, not later

The world is heading towards possibly dangerous levels of global warming despite increasing efforts to promote the transition to a low-carbon economy, the Intergovernmental Panel on Climate Change (IPCC) warns in its latest report today.

As the concentration of greenhouse gases in the atmosphere continues to rise to unprecedented levels, the group says only major institutional and technological change will give the world a better than even chance of staying below 2C warming – the widely accepted threshold to dangerous climate change. Stabilizing greenhouse gas concentrations at 450 parts per million CO2 equivalent – a level which scientists think is needed to limit warming to 2C – will require a three to four-fold increase in the share of low-carbon energies, such as renewables and nuclear, in the global power mix. Improvements in energy efficiency and, possibly, the use of carbon capture and storage technology will be needed to assist the process, the IPCC says.
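To get a feel for what a three to four-fold increase means, here is a back-of-the-envelope sketch; the current low-carbon share it assumes is an illustrative value, not a figure from the report:

```python
# Purely illustrative: assume low-carbon sources (renewables, nuclear, ...)
# currently supply about 20% of the global power mix (assumed value, not from the report).
current_low_carbon_share = 0.20

for factor in (3, 4):
    projected = factor * current_low_carbon_share
    print(f"A {factor}-fold increase would take the low-carbon share to about {projected:.0%}")
# -> about 60% to 80% of the power mix
```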

The report was produced by the IPCC’s Working Group III, which has been tasked with looking into the mitigation of climate change. Its 33-page Summary for Policymakers was approved, line by line, by hundreds of IPCC authors and representatives of 195 governments over the past week in Berlin. Launching the report at a presentation in the city, Ottmar Edenhofer, the co-chair of the working group, admitted the discussions were at times nerve-rackingly tense.

To assess the options, costs and possible adverse side-effects of different pathways to stabilizing emissions at safe levels, the 235 lead authors of the report analysed close to 1,200 scenarios of socioeconomic development and cited almost 10,000 scientific papers. The resulting work, although phrased in rather technical language, is unambiguous in its message that the challenge of climate change is mounting as time proceeds.

“Global emissions have increased despite the recent economic crisis and remarkable mitigation efforts by some countries,” Edenhofer says. “Economic growth and population growth have outpaced improvements in energy efficiency – and since the turn of the century coal has become competitive again in many parts of the world.”

The report makes clear that it would be wise to act now rather than later. But, in line with the IPCC’s mandate to be policy-neutral, it includes no specific recommendations as to the energy and related policies that individual countries should follow.

“Substantial investment in clean energies is needed in all sectors of the global economy, including in some parts of the world in nuclear power,” says Edenhofer. “But it would be inappropriate for the IPCC to prescribe reduction targets or energy policies to specific countries.”

Doing nothing is not an option, he says. In a business-as-usual scenario run without meaningful mitigation policies, greenhouse gas concentrations double by the end of the century, the working group found. This would result in global warming of 4C to 5C above the pre-industrial (1750) level, with possibly dramatic consequences for natural systems and human welfare.

Mitigating climate change would lead to a roughly 5% reduction in global consumption, according to the report. But, says Edenhofer, this does not mean that the world has to sacrifice economic growth. In fact, the group found that action to keep temperature rises at bay would reduce global economic growth by no more than 0.06% per year. This figure excludes the benefits of climate mitigation, such as from better air quality and health, which are thought to lower the actual costs of mitigation.
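Those two numbers, a roughly 5% reduction in consumption and a growth penalty of no more than 0.06% per year, are two views of the same compounding effect. A quick illustrative calculation (the baseline growth rate below is an assumption for the sake of the example, not a number from the report):

```python
# Compound the ~0.06 percentage-point-per-year growth penalty out to 2100.
years = 2100 - 2014
baseline_growth = 0.02        # assumed 2% annual growth in global consumption
mitigation_drag = 0.0006      # the ~0.06% per year reduction quoted above

baseline = (1 + baseline_growth) ** years
mitigated = (1 + baseline_growth - mitigation_drag) ** years
loss = 1 - mitigated / baseline
print(f"Consumption in 2100 is about {loss:.0%} below the no-mitigation baseline")
# prints roughly 5%, in line with the overall consumption figure above
```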

The full report outlines in great detail over 16 chapters the emission reduction potential of sectors including energy production and use, industry, transport and building and land use, and describes how mitigation efforts in one sector determine the needs in others. The IPCC has also assessed the potential of carbon capture and storage technology, which it says would be essential for achieving low-stabilization targets. More ambitious geoengineering possibilities, such as proposals to deliberately reduce the amount of sunlight reaching the Earth’s surface, have not been assessed in the report.

“There is a whole portfolio of mitigation options that can be combined in ways that meet the political priorities of individual countries,” says Edenhofer. “The means to tackle the problem exist, but we need to use them.”

Effective climate mitigation, adds Rajendra Pachauri, the chairman of the IPCC, will not be achieved if individual nations and agents advance their own interests independently. Nations hope to agree on binding emission reduction targets at a United Nations climate meeting in 2015 in Paris.

Delaying action is getting increasingly risky and will only lead to tougher requirements and higher costs at a later stage, says Pachauri.

“We haven’t done nearly enough yet,” he says. “A high-speed mitigation train needs to leave the station soon and all of global society needs to get on board.”

 

by Quirin Schiermeier at April 13, 2014 03:42 PM

Geraint Lewis - Cosmic Horizons

Gravitational lensing in WDM cosmologies: The cross section for giant arcs
We've had a pretty cool paper accepted for publication in the Monthly Notices of the Royal Astronomical Society  which tackles a big question in astronomy, namely what is the temperature of dark matter. Huh, you might say "temperature", what do you mean by "temperature"? I will explain.

The paper is by Hareth Mahdi, a PhD student at the Sydney Institute for Astronomy. Hareth's expertise is in gravitational lensing, using the huge amounts of mass in galaxy clusters to magnify the view of the distant Universe. Gravitational lenses are amongst the most beautiful things in all of astronomy. For example:
Working out how strong the lensing effect is reveals the amount of mass in the cluster, showing that there is a lot of dark matter present.

Hareth's focus is not "real" clusters, but clusters in "synthetic" universes, universes we generate inside supercomputers. The synthetic universes look as nice as the real ones; here's one someone made earlier (than you Blue Peter).

 Of course, in a synthetic universe, we control everything, such as the laws of physics and the nature of dark matter.

Dark matter is typically treated as being cold, meaning that the particles that make up dark matter move at speeds much lower than the speed of light. But we can also consider hot dark matter, which travels at speeds close to the speed of light, or warm dark matter, which moves at speeds somewhere in between.

What's the effect of changing the temperature of dark matter? Here's an illustration
With cold at the top, warmer in the middle, and hottest at the bottom. And what you can see is that as we wind up the temperature, the small scale structure in the cluster gets washed out. Some think that warm dark matter might be the solution to missing satellite problem.

Hareth had two samples of clusters, some from cold dark matter universes and some from warm, and he calculated the strength of gravitational lensing in both. The goal is to see if changing to warm dark matter can help fix another problem in astronomy, namely that the clusters we observe seem to be more efficient at producing lensed images than the ones we have in our simulated universes.

We can get some pictures of the lensing strengths of these clusters, which looks like this
This shows the mass distributions in cold dark matter universes, with a corresponding cluster in the warm dark matter universe. Because the simulations were set up with similar initial conditions, these are the same clusters seen in the two universes.

You can already see that there are some differences, but what about lensing efficiency? There are a few ways to characterise this, but one way is the cross-section for lensing. When we compare the two cosmologies, we get the following:

There is a rough one-to-one relationship, but notice that the warm dark matter clusters sit mainly above the black line. This means that the warm dark matter clusters are more efficient at lensing than their cold dark matter colleagues.

This is actually an unexpected result. Naively, we would expect warm dark matter to remove structure and make clusters puffy, and hence less efficient at lensing. So what is happening?

It took a bit of detective work, but we tracked it down. Yes, in warm dark matter clusters, the small scale structure is wiped out, but where does the mass go? It actually goes into the larger mass haloes, making them more efficient at lensing. Slightly bizarre, but it does mean that, if we can measure enough real clusters, we could have a test of the temperature of dark matter!

But alas, even though the efficiency is stronger with warm dark matter, it is not strong enough to fix the lensing efficiency problem. As ever, there is more work to do, and I'll report it here.

Until then, well done Hareth!

Gravitational lensing in WDM cosmologies: The cross section for giant arcs

The nature of the dark sector of the Universe remains one of the outstanding problems in modern cosmology, with the search for new observational probes guiding the development of the next generation of observational facilities. Clues come from tension between the predictions from {\Lambda}CDM and observations of gravitationally lensed galaxies. Previous studies showed that galaxy clusters in the {\Lambda}CDM are not strong enough to reproduce the observed number of lensed arcs. This work aims to constrain the warm dark matter cosmologies by means of the lensing efficiency of galaxy clusters drawn from these alternative models. The lensing characteristics of two samples of simulated clusters in the warm dark matter ({\Lambda}WDM) and cold dark matter ({\Lambda}CDM) cosmologies have been studied. The results show that even though the CDM clusters are more centrally concentrated and contain more substructures, the WDM clusters have slightly higher lensing efficiency than their CDM counterparts. The key difference is that WDM clusters have more extended and more massive subhaloes than CDM analogues. These massive substructures significantly stretch the critical lines and caustics and hence they boost the lensing efficiency of the host halo. Despite the increase in the lensing efficiency due to the contribution of massive substructures in the WDM clusters, this is not enough to resolve the arc statistics problem.

by Cusp (noreply@blogger.com) at April 13, 2014 01:02 AM

April 11, 2014

John Baez - Azimuth

What Does the New IPCC Report Say About Climate Change? (Part 4)

guest post by Steve Easterbrook

(4) Most of the heat is going into the oceans

The oceans have a huge thermal mass compared to the atmosphere and land surface. They act as the planet’s heat storage and transportation system, as the ocean currents redistribute the heat. This is important because if we look at the global surface temperature as an indication of warming, we’re only getting some of the picture. The oceans act as a huge storage heater, and will continue to warm up the lower atmosphere (no matter what changes we make to the atmosphere in the future).

(Box 3.1 Fig 1) Plot of energy accumulation in zettajoules within distinct components of Earth’s climate system relative to 1971 and from 1971–2010 unless otherwise indicated. Ocean warming (heat content change) dominates, with the upper ocean (light blue, above 700 m) contributing more than the deep ocean (dark blue, below 700 m; including below 2000 m estimates starting from 1992). Ice melt (light grey; for glaciers and ice caps, Greenland and Antarctic ice sheet estimates starting from 1992, and Arctic sea ice estimate from 1979–2008); continental (land) warming (orange); and atmospheric warming (purple; estimate starting from 1979) make smaller contributions. Uncertainty in the ocean estimate also dominates the total uncertainty (dot-dashed lines about the error from all five components at 90% confidence intervals).

Note the relationship between this figure (which shows where the heat goes) and the figure from Part 2 that showed change in cumulative energy budget from different sources:

(Box 13.1 fig 1) The Earth’s energy budget from 1970 to 2011. Cumulative energy flux (in zettajoules) into the Earth system from well-mixed and short-lived greenhouse gases, solar forcing, changes in tropospheric aerosol forcing, volcanic forcing and surface albedo, (relative to 1860–1879) are shown by the coloured lines and these are added to give the cumulative energy inflow (black; including black carbon on snow and combined contrails and contrail induced cirrus, not shown separately).

Both graphs show zettajoules accumulating over about the same period (1970-2011). But the graph from Part 2 has a cumulative total just short of 800 zettajoules by the end of the period, while today’s new graph shows the earth storing “only” about 300 zettajoules of this. Where did the remaining energy go? Because the earth’s temperature rose during this period, it also lost increasingly more energy back into space. When greenhouse gases trap heat, the earth’s temperature keeps rising until outgoing energy and incoming energy are in balance again.
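A toy version of that bookkeeping, using the approximate totals read off the two graphs (a rough illustration, not figures computed by the IPCC):

```python
# Approximate cumulative totals, 1970-2011, read off the two IPCC figures above.
energy_added_by_forcings_zj = 800   # Box 13.1 Fig 1: cumulative energy inflow
energy_stored_by_earth_zj = 300     # Box 3.1 Fig 1: heat accumulated, mostly in the oceans

# The difference was radiated back to space: as the planet warmed it emitted
# more longwave radiation, pushing the system back towards energy balance.
extra_outgoing_zj = energy_added_by_forcings_zj - energy_stored_by_earth_zj
print(f"Radiated back to space: roughly {extra_outgoing_zj} zettajoules")
```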


You can download all of Climate Change 2013: The Physical Science Basis here. It’s also available chapter by chapter here:

  1. Front Matter
  2. Summary for Policymakers
  3. Technical Summary
    1. Supplementary Material

Chapters

  1. Introduction
  2. Observations: Atmosphere and Surface
    1. Supplementary Material
  3. Observations: Ocean
  4. Observations: Cryosphere
    1. Supplementary Material
  5. Information from Paleoclimate Archives
  6. Carbon and Other Biogeochemical Cycles
    1. Supplementary Material
  7. Clouds and Aerosols

    1. Supplementary Material
  8. Anthropogenic and Natural Radiative Forcing
    1. Supplementary Material
  9. Evaluation of Climate Models
  10. Detection and Attribution of Climate Change: from Global to Regional
    1. Supplementary Material
  11. Near-term Climate Change: Projections and Predictability
  12. Long-term Climate Change: Projections, Commitments and Irreversibility
  13. Sea Level Change
    1. Supplementary Material
  14. Climate Phenomena and their Relevance for Future Regional Climate Change
    1. Supplementary Material

Annexes

  1. Annex I: Atlas of Global and Regional Climate Projections
    1. Supplementary Material: RCP2.6, RCP4.5, RCP6.0, RCP8.5
  2. Annex II: Climate System Scenario Tables
  3. Annex III: Glossary
  4. Annex IV: Acronyms
  5. Annex V: Contributors to the WGI Fifth Assessment Report
  6. Annex VI: Expert Reviewers of the WGI Fifth Assessment Report

by John Baez at April 11, 2014 06:01 PM

The Great Beyond - Nature blog

Shorter list for gamma-ray telescope sites, but no home yet

Concept illustration of Cherenkov Telescope Array

Where will the world’s next generation ground-based γ-ray detector, the Cherenkov Telescope Array (CTA), be built? No one yet knows. But a panel of funders have narrowed the field slightly, following a meeting in Munich, Germany, this week.

Scientists had originally hoped to select two sites — a large one in the Southern Hemisphere and a smaller one in the North — by the end of 2013. But the selection process for the €200-million ($276-million) project has taken longer than originally foreseen.

At a meeting on 10 April, representatives from 12 government ministries narrowed the potential southern sites from five to two: Aar, a site in Southern Namibia; and Armazones, in Chile’s Atacama desert. They also picked a reserve site in Argentina.

The committee, a panel of representatives from Argentina, Austria, Brazil, France, Germany, Italy, Namibia, Poland, Spain, South Africa, Switzerland and the United Kingdom, decided that all four possible northern sites — in Mexico, Spain and the United States — needed further analysis. A statement from the board said that a final site decision will happen “as soon as possible”.

If the CTA is built, its two sites will contain around 120 telescopes, which will look for the faint blue light emitted when very-high-energy photons slam into Earth’s atmosphere and create cascades of particles. By triangulating the data from various detectors, astrophysicists hope to piece together the energy and path of such photons. This should help them not only identify the sources of the γ-rays — extreme environments such as supermassive black holes — but also answer fundamental questions about dark matter and quantum gravity.

As with many astronomy projects, the best site for the CTA would be a high-altitude, remote location with clear skies. But the site decision must also take into account environmental risks, such as earthquakes and high winds, and projected operational costs. How much each host country would be prepared to contribute is also a factor.

Last year, an evaluation by representatives of the CTA’s 1,000-strong consortium rated Aar in Southern Namibia as the best southern site, which would contain 99 telescopes spread out over 10 square kilometres. Two sites tied for second: another Namibian site, which already hosts the High Energy Stereoscopic System (HESS) γ-ray telescope; and Armazones, where the European Southern Observatory already has a base and plans to build the European Extremely Large Telescope. The group equally ranked the four contenders for the northern site, which would be a 19-telescope array spread out over one square kilometre. Mexico is already building the High-Altitude Water Cherenkov Observatory (HAWC), a γ-ray observatory of a different type.

Although the consortium’s ranking was based largely on the science case and observing conditions, the latest decisions follow the report of an external site selection committee, which also took into account political and financial factors. Further decisions will rest on detailed negotiations, including host country contributions and tax exemptions at the various sites.

The CTA now aims to pick a final southern site by the end of the year. Board chair Beatrix Vierkorn-Rudolph, of Germany’s Federal Ministry of Education and Research, told Nature it was not yet clear whether the same will be possible for the northern site.

by Elizabeth Gibney at April 11, 2014 03:46 PM

astrobites - astro-ph reader's digest

Mugged by a Passing Star

Fig 1: (Top) Face-on view of a simulated disk after an encounter with a mass ratio of 1 and a periastron distance of 2 r_init. The vertical lines show the measured size of the disk. (Bottom) Initial (thin) and final (thick) particle density (solid lines) and mass density (dashed lines) of the disk. The vertical line shows the point of steepest mass density decrease. (From Breslau et al. 2014)

Close Encounters of the Stellar Kind

Stars often form in clusters, where the chances of a close encounter with another star are high. “Close encounter” just means that the passing star comes close enough that its gravity significantly affects the motion of the other star. The distance between the two stars at their closest point during the encounter is called the periastron distance, and these “close” encounters can occur at periastron distances of tens or even hundreds of AU.

The protoplanetary disk of gas and dust that surrounds a young star can be affected by the gravity of the passing star and will often end up smaller than it was before the encounter. For example, the density of material in our own Solar System drops off greatly past about 30 AU. Since the Sun formed in a cluster, this truncation might have been caused by a stellar encounter after the Sun’s disk had formed but before the cluster dispersed. According to previous models, the other star needed to come within about 100 AU to truncate the Sun’s disk at 30 AU.

This disk truncation scenario has been studied by various theorists in the past, who modeled what happens to a disk during an encounter between equal-mass stars and found that disks are typically truncated to about 1/2-1/3 the periastron distance. The authors of this paper wanted to explore a much larger parameter space and determine what relationship the final disk size has to the mass ratio of the two stars and the periastron distance.

Simulations and Results

The authors ran over 100 N-body simulations of a disk of particles experiencing an encounter with another star. Each simulation had a different stellar mass ratio and periastron distance. The authors restricted their parameters to ensure their systems were realistic. For example, they chose from a range of mass ratios based on values estimated by observing young dense clusters in our solar neighborhood. They also chose periastron distances that were small enough that the disks were perturbed but not so close that the disks were destroyed, because unaffected or completely absent disks wouldn’t be very helpful to their investigation.

Disk edges are fuzzy and don’t have clear cutoffs, so the authors had to define what they meant by ‘disk size’ before they could measure it. For this paper, they defined the disk radius as the distance from the star at which they measured the steepest drop in mass surface density (see Fig 1). You can see the resulting measurements of final disk size vs. periastron distance for several different mass ratios and different initial disk sizes in Fig 2.
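As a rough illustration of that definition (a sketch of the idea, not the authors’ actual pipeline), the radius of steepest decline can be picked out of a surface-density profile numerically:

```python
import numpy as np

def disk_radius(r, sigma):
    """Radius at which the mass surface density drops most steeply.

    r     : 1-D array of radii (e.g. in AU), in increasing order
    sigma : surface density sampled at those radii
    Returns the radius where d(sigma)/dr is most negative, which is the
    working definition of 'disk size' used in the paper.
    """
    slope = np.gradient(sigma, r)   # numerical derivative d(sigma)/dr
    return r[np.argmin(slope)]      # location of the steepest decrease

# Toy profile: a disk whose density falls off sharply near 30 AU
r = np.linspace(1.0, 100.0, 500)
sigma = 1.0 / (1.0 + (r / 30.0) ** 8)
print(f"Measured disk radius: {disk_radius(r, sigma):.1f} AU")  # close to 30 AU
```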

From their simulation results, the authors derived the following simple relationship between final disk size (r_disk), stellar mass ratio (m_12), and periastron distance (r_peri):

r_disk = 0.28 m_12^(-0.32) r_peri.

Notice that according to these results, the final disk radius does not depend on the initial disk radius! The only caveat is that the disks in these simulations never grow in size. For example, the equation above might predict that the final disk size should be 150 AU, but if the initial disk size was 100 AU, it will still be 100 AU at the end of the simulation.
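Putting the fitted relation together with the "disks never grow" caveat gives a one-line recipe. The sketch below is one reading of the result, with the mass ratio m_12 assumed to be the perturber mass over the host mass:

```python
def truncated_disk_radius(r_init, m12, r_peri):
    """Final disk radius (AU) after a stellar fly-by, per the fitted relation.

    r_init : initial disk radius in AU
    m12    : mass ratio of the passing star to the host star (assumed convention)
    r_peri : periastron distance of the encounter in AU
    The fit gives r_disk = 0.28 * m12**-0.32 * r_peri, but an encounter never
    enlarges a disk, hence the cap at the initial radius.
    """
    return min(r_init, 0.28 * m12 ** -0.32 * r_peri)

# An equal-mass encounter at 100 AU truncates a large disk to about 28 AU,
# close to the ~30 AU edge of the Solar System mentioned earlier and roughly
# consistent with the older 1/2-1/3 rule of thumb for equal-mass encounters.
print(truncated_disk_radius(r_init=200.0, m12=1.0, r_peri=100.0))
```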

Fig 2: Final disk radius vs. periastron distance for simulated disks with initial radii of 100 AU (black squares) and 200 AU (white diamonds) for different mass ratios. (From Breslau et al. 2014)

What Does It All Mean?

Disks are truncated through gravitational perturbations in two ways. Some dust is removed by the passing star, but other dust  simply moves inwards as its angular momentum is transferred to the passing star. The authors show that this second effect is more significant than mass removal in truncating disks by noting that some of their simulated disks got smaller with no loss of mass. They also compared their results with the commonly-used approximations from previous equal-mass star studies and found that these approximations tend to overestimate the size of the truncated disk. However, the authors did confirm previous results that a disk like the Solar System could be truncated to 30 AU with a 100 AU periastron encounter with an equal-mass star, supporting the theory that the Solar System may have been affected by a stellar encounter. They also show that a larger or smaller mass star could be the culprit depending on its periastron distance.

The new ALMA observatory will let us measure disk sizes to much greater accuracy and allow us to study their dependence on mass ratio and periastron distance. This paper will prove very helpful in understanding those results and the encounter history of observed disks.

 

by Erika Nesvold at April 11, 2014 02:35 PM

CERN Bulletin

ELENA gets a roof over its head

Today, Friday 11 April, CERN inaugurated the ELENA building (393) after less than a year's construction work.

 

Tacked on to the side of the Antiproton Decelerator (AD), this building will soon house a clean room, workshops and generators for the kickers in order to free space in the AD hall, where the future Extra Low ENergy Antiproton ring, ELENA, will be installed.

“Today we’re celebrating the completion of a project which, I’m happy to say, has gone very well,” exclaims François Butin, technical coordinator of the ELENA project (EN-MEF Group). “The deadlines and budgets have been perfectly respected and the building fully complies with our specifications. A great vote of thanks to GS-SE and the outside contractors who have enabled us to complete this project.”

Some 10,000 tonnes of earth had to be moved by around 500 trucks. The presence of the TT2 transfer tunnel directly beneath the building posed a number of technical challenges. An 800-mm-thick shielding slab was installed to protect the building from radiation.

Christian Carli, ELENA Project Leader (BE-ABP Group), adds: "The installation of the ELENA machine is approaching fast. The project's Technical Design Report has just been published and the work is progressing well, including on the transfer-line side.” ELENA’s magnetic deceleration ring, 30 m in circumference, will be installed in the AD hall mid-2015 and its research programme should begin two years later.


For more information on the ELENA project, read our article in the Bulletin issue 26-27/2012.

April 11, 2014 01:04 PM

Symmetrybreaking - Fermilab/SLAC

Tufte’s Feynman sculptures come to Fermilab

Edward Tufte, celebrated statistician and master of informational graphics, transforms physics notations into works of art.

If you ask a physicist how particles interact and you have a drawing surface handy, the explanation will likely come in the form of a series of lines, arrows, squiggles and loops.

These drawings, called Feynman diagrams, help organize a calculation. They represent the mathematical formulas describing how particles interact, from beginning to end, and also the rate at which the interaction happens.

A new exhibit at Fermi National Accelerator Laboratory examines the beauty and simplicity of this shorthand. 

by Amanda Solliday at April 11, 2014 01:00 PM

Axel Maas - Looking Inside the Standard Model

News about the state of the art
Right now, I am at a workshop in Benasque, Spain. This workshop is called 'After the Discovery: Hunting for a non-standard Higgs Sector'. The topic is essentially this: We now have a Higgs. How can we find what else is out there? Or at least establish that it is currently out of our reach? That there is something more is beyond doubt. We know too many cases where our current knowledge is certainly limited.

I will not go on describing everything that is presented at this workshop. That would be too much, and there are certainly other places on the web where this is done. In this entry I will therefore just describe how what is discussed at the workshop relates to my own research.

One point is certainly what the experiments find. At such specialized workshops, you can get many more details of what they actually do. Since any theoretical investigation is to some extent approximate, it is always good to know what is known experimentally. Hence, if I get a result in disagreement with the experiment, I know that there is something wrong. Usually, it is the theory, or the calculations performed: some assumption being too optimistic, some approximation being too drastic.

Fortunately, so far nothing is at odds with what I have. That is encouraging. Though no reason for becoming overly confident.

The second aspect is to see what other people do. To see which other ideas still hold up against experiment, and which failed. Since different people do different things, combining the knowledge, successes and failures of the different approaches helps you. It helps not only in avoiding overly optimistic assumptions or other errors; other people's successes also provide new input.

One particular example at this workshop is for me the so-called 2-Higgs-Doublet models. Such models assume that there exists, besides the known Higgs, another set of Higgs particles. Though this is not obvious, the doublet in the name indicates that they have four more Higgs particles, one of them being just a heavier copy of the one we know. I have recently considered looking into such models as well, though for quite different reasons. Here, I learned how they can be motivated for entirely different reasons, and especially why they are so interesting for ongoing experiments. I also learned much about their properties, and what is known (and not known) about them. This gives me quite some new perspectives, and some new ideas.

Ideas I will certainly realize once I am back.

Finally, collecting all the talks together, they draw the big picture. They tell me where we are now. What we know about the Higgs, what we do not know, and where there is room (left) for much more than just the 'ordinary' Higgs. It is an update for my own knowledge about particle physics. And it finally delivers the list of what will be looked at in the next couple of months and years. I now know better where to look for the next result relevant for my research, and relevant for the big picture.

by Axel Maas (noreply@blogger.com) at April 11, 2014 11:57 AM

CERN Bulletin

CERN Library | Author Talk: Quinn Slobodian | 15 April
Quinn Slobodian from the Dahlem Humanities Center, Freie Universität Berlin will give a presentation at the CERN Library about "The Laboratory of the World Economy: Globalization Theory around 1900".

Quinn Slobodian researches the history of Germany in the world with a focus on international political economy and transnational social movements. He is really interested to hear from CERN physicists and engineers about their responses to his theories about the Global Economy at the turn of the century:

We may think of globalization theory as a recent phenomenon. Yet in the decades around 1900, German-speaking economists were already trying to make sense of an entity they called “the world economy,” coining a term that would not enter other languages until after the First World War. What was the nature of the world economy? How could one visualize and represent it? What was the status of national autonomy in an era of globalized communication and trade? My talk explores these questions through the work of German, Austrian, and Swiss economists around 1900. I follow a central debate. On one side were those who used maps and statistics to see the world economy as a globe-spanning “organism,” anticipating later sociological theories of the “network society.” On the other were those, including Joseph Schumpeter, who saw the stock exchange as a laboratory of the world economy. They believed that one could draw conclusions about the world economy at large by observing price movements on the major commodities markets. Serial snapshots of the world market, taken in the laboratory of the stock exchange, could be brought into motion to produce a vision of the world economy in movement along the path of a line graph, prefiguring the later optic of macroeconomics and finance. Their debate produced a primary division in the way we see the world economy that lasts until the present day.

About the author

Quinn Slobodian is assistant professor of modern European history at Wellesley College and currently Andrew W. Mellon Foundation and Volkswagen Stiftung Postdoctoral Fellow at the Dahlem Humanities Center at the Freie Universität Berlin.

Quinn Slobodian will give his presentation at the CERN Library on Tuesday 15 April at 4 p.m. *Coffee will be served at 3.30 p.m.* For more information, visit: https://indico.cern.ch/event/313789/

by CERN Library at April 11, 2014 09:09 AM

CERN Bulletin

Interfon
Find our news on our website www.interfon.fr – "News" – Interfon – "At the service of our members"

Interfon is an association of national and international civil servants (including retirees) in the French-Geneva border area, with more than 6,000 members. Set up as a cooperative in the 1960s, its mission is to inform its members and to recommend and provide them with a range of services and preferential prices from more than 80 businesses in the region. It is run by volunteer administrators. Interfon has also developed a "mutuelle" branch offering complementary health and surgery insurance (ADREA). The staff and administrators work closely with the members to encourage communication. We invite you to discover our services on our website www.interfon.fr.

Our offices at CERN and in St Genis (Technoparc) are open Monday to Friday (see our opening hours at the bottom of this page). Our staff are always available to welcome you and answer your questions, present our offers and suppliers (brochures available), register your membership if you wish to join us, and take your orders for heating oil, wood, wines and champagnes, Bresse poultry, marrons glacés, foie gras, etc. You can also drop off your correspondence for the Mutuelle (which we forward) and pay your supplier invoices.

VITAM: We also sell tickets for VITAM in Neydens (74) at a preferential price (spa, aquatic centre/climbing, child or adult).

Wine selection: In our offices you can choose from a number of wines from different suppliers. You can buy at the Saint Genis office, which has a dedicated wine and champagne cellar. At the CERN office, price lists are available and we can take your orders. Don't miss the chance to taste our selection at our two annual open days (end of March and beginning of October).

Come and meet us at one of our two offices:

CERN office (Bldg 504): Interfon: Monday to Friday (12.30 p.m. to 3.30 p.m.), tel. 73339, e-mail: interfon@cern.ch. Mutuelle: Thursdays of odd weeks (1.00 p.m. to 3.30 p.m.), tel. 73939, e-mail: interfon@adrea-paysdelain.fr.

Technoparc offices in Saint-Genis-Pouilly: Interfon and Mutuelle: Monday to Friday (1.30 p.m. to 5.30 p.m.). Cooperative: 04 50 42 28 93, interfon@cern.ch. Mutuelle: 04 50 42 74 57, interfon@adrea-paysdelain.fr.

by Interfon at April 11, 2014 09:01 AM

April 10, 2014

The Great Beyond - Nature blog

Former NIH stem-cell chief joins New York foundation

Stem-cell biologist Mahendra Rao, who resigned last week as director of the Center for Regenerative Medicine (CRM) at the US National Institutes of Health (NIH), has a new job. On 9 April, he was appointed vice-president for regenerative medicine at the New York Stem Cell Foundation (NYSCF), a non-profit organization that funds embryonic stem-cell research.

Rao left the NIH abruptly on 28 March, apparently because of disagreements about the number of clinical trials of stem-cell therapies that the NIH’s intramural CRM programme would conduct. The CRM was established in 2010 to shepherd therapies using induced pluripotent stem cells (iPS cells) — adult cells that have been reprogrammed to an embryonic state — into clinical translation. One of the CRM’s potential therapies, which will use iPS cells to treat macular degeneration of the retina, will continue moving towards clinical trials at the NIH, although several others were not funded. NIH officials say that the CRM will not continue in its current direction, but the fate of the centre’s remaining budget and resources is undecided.

Rao says that he wants to move more iPS cell therapies towards trials than the NIH had been willing to do. He has already joined the advisory boards of several stem-cell-therapy companies: Q Therapeutics, a Salt Lake City-based neural stem cell company he co-founded; and Cesca Therapeutics (formerly known as ThermoGenesis) of Rancho Cordova, California, and Stemedica of San Diego, California, both of which are developing cell-based therapies for cardiac and vascular disorders.

Rao says that his initial focus at the NYSCF will be developing iPS cell lines for screening, and formulating a process for making clinical-grade cell lines from a patient’s own cells.

by Sara Reardon at April 10, 2014 09:47 PM

ZapperZ - Physics and Physicists

Graphene Closer To Commercial Use
When an article related to physics makes it to the CNN website, you know it is major news.

This article covers the recent "breakthrough" in graphene that may make it even more viable for commercial use. I'm highlighting it here in case you or someone else needs more evidence of the "application" of physics, or for anyone who thinks that something that got awarded the Nobel Prize in Physics is usually esoteric and useless.

Zz.

by ZapperZ (noreply@blogger.com) at April 10, 2014 02:47 PM

Symmetrybreaking - Fermilab/SLAC

From quark soup to ordinary matter

Scientists have gained new insight into how matter can change from a hot soup of particles to the matter we know today.

The early universe was a trillion-degree soup of subatomic particles that eventually cooled into matter as it is today.

This process is called “freezing out.” In the early universe, it was a smooth transition. But a group of scientists at Brookhaven National Laboratory recently found that, under the right conditions, it can occur differently.

The new research may offer valuable insight into the strong force, which accounts for 99.9 percent of the mass of visible matter in today’s world.

by Karen McNulty Walsh, Brookhaven National Laboratory at April 10, 2014 01:00 PM

John Baez - Azimuth

What Does the New IPCC Report Say About Climate Change? (Part 3)

guest post by Steve Easterbrook

(3) The warming is largely irreversible

The summary for policymakers says:

A large fraction of anthropogenic climate change resulting from CO2 emissions is irreversible on a multi-century to millennial time scale, except in the case of a large net removal of CO2 from the atmosphere over a sustained period. Surface temperatures will remain approximately constant at elevated levels for many centuries after a complete cessation of net anthropogenic CO2 emissions.

(Fig 12.43) Results from 1,000 year simulations from EMICs on the 4 RCPs up to the year 2300, followed by constant composition until 3000.

The conclusions about irreversibility of climate change are greatly strengthened from the previous assessment report, as recent research has explored this in much more detail. The problem is that a significant fraction of our greenhouse gas emissions stay in the atmosphere for thousands of years, so even if we stop emitting them altogether, they hang around, contributing to more warming. In simple terms, whatever peak temperature we reach, we’re stuck at for millennia, unless we can figure out a way to artificially remove massive amounts of CO2 from the atmosphere.

The graph is the result of an experiment that runs (simplified) models for a thousand years into the future. The major climate models are generally too computationally expensive to be run for such a long simulation, so these experiments use simpler models, so-called EMICs (Earth system Models of Intermediate Complexity).

The four curves in this figure correspond to four “Representative Concentration Pathways”, which map out four ways in which the composition of the atmosphere is likely to change in the future. These four RCPs were picked to capture four possible futures: two in which there is little to no coordinated action on reducing global emissions (worst case—RCP8.5 and best case—RCP6) and two in which there is serious global action on climate change (worst case—RCP4.5 and best case—RCP2.6). A simple way to think about them is as follows. RCP8.5 represents ‘business as usual’—strong economic development for the rest of this century, driven primarily by dependence on fossil fuels. RCP6 represents a world with no global coordinated climate policy, but where lots of localized clean energy initiatives do manage to stabilize emissions by the latter half of the century. RCP4.5 represents a world that implements strong limits on fossil fuel emissions, such that greenhouse gas emissions peak by mid-century and then start to fall. RCP2.6 is a world in which emissions peak in the next few years, and then fall dramatically, so that the world becomes carbon neutral by about mid-century.

Note that in RCP2.6 the temperature does fall, after reaching a peak just below 2°C of warming over pre-industrial levels. That’s because RCP2.6 is a scenario in which concentrations of greenhouse gases in the atmosphere start to fall before the end of the century. This is only possible if we reduce global emissions so fast that we achieve carbon neutrality soon after mid-century, and then go carbon negative. By carbon negative, I mean that globally, each year, we remove more CO2 from the atmosphere than we add. Whether this is possible is an interesting question. But even if it is, the model results show there is no time within the next thousand years when it is anywhere near as cool as it is today.


You can download all of Climate Change 2013: The Physical Science Basis here. It’s also available chapter by chapter here:

  1. Front Matter
  2. Summary for Policymakers
  3. Technical Summary
    1. Supplementary Material

Chapters

  1. Introduction
  2. Observations: Atmosphere and Surface
    1. Supplementary Material
  3. Observations: Ocean
  4. Observations: Cryosphere
    1. Supplementary Material
  5. Information from Paleoclimate Archives
  6. Carbon and Other Biogeochemical Cycles
    1. Supplementary Material
  7. Clouds and Aerosols

    1. Supplementary Material
  8. Anthropogenic and Natural Radiative Forcing
    1. Supplementary Material
  9. Evaluation of Climate Models
  10. Detection and Attribution of Climate Change: from Global to Regional
    1. Supplementary Material
  11. Near-term Climate Change: Projections and Predictability
  12. Long-term Climate Change: Projections, Commitments and Irreversibility
  13. Sea Level Change
    1. Supplementary Material
  14. Climate Phenomena and their Relevance for Future Regional Climate Change
    1. Supplementary Material

Annexes

  1. Annex I: Atlas of Global and Regional Climate Projections
    1. Supplementary Material: RCP2.6, RCP4.5, RCP6.0, RCP8.5
  2. Annex II: Climate System Scenario Tables
  3. Annex III: Glossary
  4. Annex IV: Acronyms
  5. Annex V: Contributors to the WGI Fifth Assessment Report
  6. Annex VI: Expert Reviewers of the WGI Fifth Assessment Report

by John Baez at April 10, 2014 09:37 AM

The n-Category Cafe

The Modular Flow on the Space of Lattices

Guest post by Bruce Bartlett

The following is the greatest math talk I’ve ever watched!

  • Etienne Ghys (with pictures and videos by Jos Leys), Knots and Dynamics, ICM Madrid 2006. [See below the fold for some links.]

[Images: Etienne Ghys; a modular knot]

I wasn’t actually at the ICM; I watched the online version a few years ago, and the story has haunted me ever since. Simon and I have been playing around with some of this stuff, so let me share some of my enthusiasm for it!

The story I want to tell here is how, via modular flow of lattices in the plane, certain matrices in $\mathrm{SL}(2,\mathbb{Z})$ give rise to knots in the 3-sphere less a trefoil knot. Despite possibly sounding quite scary, this can be easily explained in an elementary yet elegant fashion.

As promised above, here are some links related to Ghys’ ICM talk.

I’m going to focus on the last third of the talk — the modular flow on the space of lattices. That’s what produced the beautiful picture above (credit for this and other similar pics below goes to Jos Leys; the animation is Simon’s).

Lattices in the plane

For us, a lattice is a discrete subgroup of $\mathbb{C}$. There are three types: the zero lattice, the degenerate lattices, and the nondegenerate lattices:

[Image: the three types of lattices]

Given a lattice $L$ and an integer $n \geq 4$ we can calculate a number — the Eisenstein series of the lattice:
$$G_{n}(L) = \sum_{\omega \in L,\ \omega \neq 0} \frac{1}{\omega^{n}}.$$
We need $n \geq 3$ for this sum to converge. For, roughly speaking, we can rearrange it as a sum over $r$ of the lattice points on the boundary of a square of radius $r$. The number of lattice points on this boundary scales with $r$, so we end up computing something like $\sum_{r \geq 0} \frac{r}{r^{n}}$ and so we need $n \geq 3$ to make the sum converge.
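
Because the Eisenstein series is just an absolutely convergent sum, it is easy to approximate numerically. Here is a minimal Python sketch (my own illustration, not part of the original post) that truncates the sum to lattice points $m\omega_1 + k\omega_2$ with $|m|, |k| \leq N$; the basis vectors and the cutoff $N$ are illustrative choices.

    def eisenstein(w1, w2, n, N=200):
        """Truncated Eisenstein series G_n of the lattice Z*w1 + Z*w2."""
        total = 0j
        for m in range(-N, N + 1):
            for k in range(-N, N + 1):
                if m == 0 and k == 0:
                    continue          # skip omega = 0
                total += 1 / (m * w1 + k * w2) ** n
        return total

    # Square lattice Z + iZ: G_6 vanishes by symmetry, G_4 does not.
    print(eisenstein(1, 1j, 4))   # roughly 3.15 (the truncation converges slowly)
    print(eisenstein(1, 1j, 6))   # roughly 0

As a sanity check, rotating the square lattice by $i$ maps it to itself while sending $G_{6} \mapsto i^{-6} G_{6} = -G_{6}$, which forces $G_{6}(\mathbb{Z} + i\mathbb{Z}) = 0$, and the truncated sum reflects this.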

Note that $G_{n}(L) = 0$ for $n$ odd, since every term $\omega$ is cancelled by the opposite term $-\omega$. So, the first two nontrivial Eisenstein series are $G_{4}$ and $G_{6}$. We can use them to put ‘Eisenstein coordinates’ on the space of lattices.

Theorem: The map
$$\begin{aligned} \{\text{lattices}\} &\rightarrow \mathbb{C}^{2} \\ L &\mapsto (G_{4}(L),\, G_{6}(L)) \end{aligned}$$
is a bijection.

The nicest proof is in Serre’s A Course in Arithmetic, p. 89. It is a beautiful application of the Cauchy residue theorem, using the fact that $G_{4}$ and $G_{6}$ define modular forms on the upper half plane $H$. (Usually, number theorists set up their lattices so that they have basis vectors $1$ and $\tau$ where $\tau \in H$. But I want to avoid this ‘upper half plane’ picture as far as possible, since it breaks symmetry and mystifies the geometry. The whole point of the Ghys picture is that not breaking the symmetry reveals a beautiful hidden geometry! Of course, sometimes you need the ‘upper half plane’ picture, like in the proof of the above result.)

Lemma: The degenerate lattices are the ones satisfying $20 G_{4}^{3} - 49 G_{6}^{2} = 0$.

Let’s prove one direction of this lemma — that the degenerate lattices do indeed satisfy this equation. To see this, we need to perform a computation. Let’s calculate $G_{4}$ and $G_{6}$ of the lattice $\mathbb{Z} \subset \mathbb{C}$. Well,
$$G_{4}(\mathbb{Z}) = \sum_{n \neq 0} \frac{1}{n^{4}} = 2\zeta(4) = 2\,\frac{\pi^{4}}{90}$$
where we have cheated and looked up the answer on Wikipedia! Similarly, $G_{6}(\mathbb{Z}) = 2\,\frac{\pi^{6}}{945}$.

So we see that $20 G_{4}(\mathbb{Z})^{3} - 49 G_{6}(\mathbb{Z})^{2} = 0$. Now, every degenerate lattice is of the form $t\mathbb{Z}$ where $t \in \mathbb{C}$. Also, if we transform the lattice via $L \mapsto t L$, then $G_{4} \mapsto t^{-4} G_{4}$ and $G_{6} \mapsto t^{-6} G_{6}$. So the equation remains true for all the degenerate lattices, and we are done.
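
As a quick numerical sanity check (my own, not from the post), one can verify the relation for $\mathbb{Z}$ directly from the zeta values above, and also check that rescaling $L \mapsto tL$ preserves it, since both $20 G_{4}^{3}$ and $49 G_{6}^{2}$ pick up the same factor $t^{-12}$:

    import math

    G4 = 2 * math.pi ** 4 / 90     # G_4(Z) = 2*zeta(4)
    G6 = 2 * math.pi ** 6 / 945    # G_6(Z) = 2*zeta(6)
    print(20 * G4 ** 3 - 49 * G6 ** 2)    # ~0, up to floating-point error

    t = 0.7 + 1.3j                 # an arbitrary nonzero scale factor
    print(20 * (G4 / t ** 4) ** 3 - 49 * (G6 / t ** 6) ** 2)   # still ~0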

Corollary: The space of nondegenerate lattices in the plane of unit area is homeomorphic to the complement of the trefoil in $S^{3}$.

The point is that given a lattice $L$ of unit area, we can scale it $L \mapsto \lambda L$, $\lambda \in \mathbb{R}^{+}$, until $(G_{4}(L), G_{6}(L))$ lies on the 3-sphere $S^{3} = \{(z,w) : |z|^{2} + |w|^{2} = 1\} \subset \mathbb{C}^{2}$. And the equation $20 z^{3} - 49 w^{2} = 0$ intersected with $S^{3}$ cuts out a trefoil knot… because it is “something cubed plus something squared equals zero”. And the lemma above says that the nondegenerate lattices are precisely the ones which do not satisfy this equation, i.e. they represent the complement of this trefoil.
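
To make the rescaling step concrete, here is a small sketch (mine again, reusing the eisenstein() helper from the earlier snippet): since $G_{4} \mapsto \lambda^{-4} G_{4}$ and $G_{6} \mapsto \lambda^{-6} G_{6}$, one just solves $|G_{4}|^{2}\lambda^{-8} + |G_{6}|^{2}\lambda^{-12} = 1$ for $\lambda > 0$, for instance by bisection.

    def project_to_sphere(G4, G6):
        """Rescale (G4, G6) of a nondegenerate lattice onto the unit 3-sphere in C^2."""
        f = lambda lam: abs(G4) ** 2 / lam ** 8 + abs(G6) ** 2 / lam ** 12 - 1.0
        lo, hi = 1e-6, 1e6            # f is strictly decreasing in lam on (0, infinity)
        for _ in range(200):          # plain bisection
            mid = (lo + hi) / 2
            if f(mid) > 0:
                lo = mid
            else:
                hi = mid
        lam = (lo + hi) / 2
        return G4 / lam ** 4, G6 / lam ** 6

    z, w = project_to_sphere(eisenstein(1, 1j, 4), eisenstein(1, 1j, 6))
    print(abs(z) ** 2 + abs(w) ** 2)  # ~1, so the point lands on S^3

Testing whether $20 z^{3} - 49 w^{2}$ vanishes at such a point is then exactly the trefoil test from the lemma.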

Since we have not divided out by rotations, but only by scaling, we have arrived at a 3-dimensional picture which is very different to the 2-dimensional moduli space (upper half-plane divided by $\mathrm{SL}(2,\mathbb{Z})$) picture familiar to a number theorist.

The modular flow

There is an intriguing flow on the space of lattices of unit area, called the modular flow. Think of $L$ as sitting in $\mathbb{R}^{2}$, and then act on $\mathbb{R}^{2}$ via the transformation
$$\left( \begin{array}{cc} e^{t} & 0 \\ 0 & e^{-t} \end{array} \right),$$
dragging the lattice $L$ along for the ride. (This isn’t just some formula we pulled out of the blue — geometrically this is the ‘geodesic flow on the unit tangent bundle of the modular orbifold’.)

We are looking for periodic orbits of this flow.

“Impossible!” you say. “The points of the lattice go off to infinity!” Indeed they do… but disregard the individual points. The lattice itself can ‘click’ back into its original position:

[Animation: a lattice flowing under the modular flow and clicking back into its original position]

How are we to find such periodic orbits? Start with an integer matrix
$$A = \left( \begin{array}{cc} a & b \\ c & d \end{array} \right) \in \mathrm{SL}(2,\mathbb{Z})$$
and assume $A$ is hyperbolic, which simply means $|a+d| > 2$. Under these conditions, we can diagonalize $A$ over the reals, so we can find a real matrix $P$ such that
$$P A P^{-1} = \pm \left( \begin{array}{cc} e^{t} & 0 \\ 0 & e^{-t} \end{array} \right)$$
for some $t \in \mathbb{R}$. Now set $L \coloneqq P(\mathbb{Z}^{2})$. We claim that $L$ is a periodic orbit of period $t$. Indeed:
$$\begin{aligned} L_{t} &= \left( \begin{array}{cc} e^{t} & 0 \\ 0 & e^{-t} \end{array} \right) P(\mathbb{Z}^{2}) \\ &= \pm P A(\mathbb{Z}^{2}) \\ &= \pm P(\mathbb{Z}^{2}) \\ &= L. \end{aligned}$$
We have just proved one direction of the following.

Theorem: The periodic orbits of the modular flow are in bijection with the conjugacy classes of hyperbolic elements in $\mathrm{SL}(2,\mathbb{Z})$.
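
As a small computational footnote to this construction (my own sketch, not from the talk or the post): for a hyperbolic $A$ the period $t$ depends only on the conjugacy class, since the eigenvalues of $A$ are $\pm e^{t}$ and $\pm e^{-t}$, so $t$ can be read off from the trace.

    import math

    def orbit_period(a, b, c, d):
        """Period t of the modular-flow orbit attached to a hyperbolic A in SL(2,Z)."""
        assert a * d - b * c == 1 and abs(a + d) > 2, "need a hyperbolic element of SL(2,Z)"
        tr = abs(a + d)
        lam = (tr + math.sqrt(tr * tr - 4)) / 2   # eigenvalues are lam and 1/lam, up to sign
        return math.log(lam)

    print(orbit_period(2, 1, 1, 1))   # Arnold's cat map: t = log((3 + sqrt(5)) / 2)

Conjugating $A$ leaves the trace, and hence $t$, unchanged, which matches the theorem’s statement that orbits correspond to conjugacy classes.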

These periodic orbits produce fascinating knots in the complement of the trefoil! In fact, they link with the trefoil (the locus of degenerate lattices) in intricate ways. Here are two examples, starting with different matrices $A \in \mathrm{SL}(2,\mathbb{Z})$.

[Animation: two periodic orbits of the modular flow, shown alongside the trefoil of degenerate lattices]

The trefoil is the fixed orange curve, while the periodic orbits are the red and green curves respectively.

Ghys proved the following two remarkable facts about these modular knots.

  • The linking number of a modular knot with the trefoil of degenerate lattices equals the Rademacher function of the corresponding matrix in $\mathrm{SL}(2,\mathbb{Z})$ (the change in phase of the Dedekind eta function).
  • The knots occurring in the modular flow are the same as those occurring in the Lorenz equations!

Who would have thought that lattices in the plane could tell the weather!!

I must say I have thought about many aspects of these closed geodesics, but it had never crossed my mind to ask which knots are produced. – Peter Sarnak

by willerton (S.Willerton@sheffield.ac.uk) at April 10, 2014 08:54 AM

Subscriptions

Feeds

[RSS 2.0 Feed] [Atom Feed]


Last updated:
April 19, 2014 02:21 PM
All times are UTC.

Suggest a blog:
planet@teilchen.at