Particle Physics Planet


April 25, 2014

Symmetrybreaking - Fermilab/SLAC

Massive thoughts

The Higgs boson and the neutrino fascinate the general public and particle physicists alike. Why is that?

If there are two particles that everyone has read about in the news lately, it’s the Higgs boson and the neutrino. Why do we continue to be fascinated by these two particles?

As just about everyone now knows, the Higgs boson is integrally connected to the field that gives particles their mass. But the excitement of this discovery isn’t over; now we need to figure out how this actually works and whether it explains everything about how particles get their mass. With time, this knowledge is likely to affect daily life.

by Nigel Lockyer, Director of Fermilab at April 25, 2014 03:57 AM

April 24, 2014

Emily Lakdawalla - The Planetary Society Blog

Spitzer Space Telescope Observations of Bennu
What can studying the thermal emission of Bennu with the Spitzer Space Telescope tell us about its physical properties?

April 24, 2014 03:06 PM

arXiv blog

How Nanoexplosives Could Help Solve One of the Biggest Mysteries of Astrophysics

Particles of dark matter should trigger nanoexplosions in certain materials, an idea that could lead to an entirely new generation of detectors, say physicists.

April 24, 2014 02:00 PM

Peter Coles - In the Dark

Astronomy (and Particle Physics) Look-alikes, No. 92

Although it’s not strictly an astronomical observation, I am struck by the resemblance between the distinguished particle physicist and blogger Professor Alfred E. Neuman, of University College London, and the iconic cover boy of Mad Magazine, Jon Butterworth. This could explain a lot about the Large Hadron Collider.



by telescoper at April 24, 2014 01:07 PM

Peter Coles - In the Dark

Why Graduate Teaching Assistantships Should Be Scrapped

There’s an interesting piece in today’s Times Higher about the variability in pay and working conditions for Graduate Teaching Assistants across the UK Higher Education sector. For those of you not up with the lingo, Graduate Teaching Assistants (GTAs) are (usually) PhD students who fund their doctoral studies by doing some teaching for the department in which they are studying. As the piece makes clear, the use of GTAs varies widely between one university and another across the country and indeed between one department and another within the same university. The use of such positions is higher in arts and humanities departments than in science and engineering, because the latter generally have more opportunity to fund scholarships for PhD students, either from one of the Research Councils or elsewhere. Such scholarships pay a stipend (tax-free) as well as the fee for studying as a PhD student.

When I arrived at the University of Sussex last year I found that the School of Mathematical and Physical Sciences operated a GTA scheme in parallel with Research Council bursaries. Students funded by a research council scholarship received a stipend paid at a national rate of about £13,600 per annum, but were able to top this up by undertaking a limited amount of teaching in the School (e.g. marking coursework, helping with workshops, or demonstrating in the teaching laboratories). Externally funded students did teaching on a voluntary basis. The GTAs on the other hand were required to undertake a fixed amount of teaching without remuneration in order to cover their fees.

I found this two-tier system unfair and divisive, with students funded as GTAs clearly treated as second-class citizens. One of the first major decisions I made as Head of School was to phase out the GTA scheme and replace it with bursaries on exactly the same terms as externally-funded ones, with the same opportunity to top up the stipend with some teaching income. I announced this at a School meeting recently and it was met with broad approval, the only reservation being that it would be difficult if too few students opted to do extra teaching to cover the demand. I think that’s unlikely, actually, because although the stipend is not taxable, and is therefore equivalent to a somewhat higher amount in salary terms, Brighton is quite an expensive part of the country and most students would opt for a bit of extra dosh. Also, it is actually very good for a PhD student to have teaching experience on their CV when it comes to looking for a job.

Existing GTA schemes make it too easy for departments to engage in exploitative behaviour, by dumping a huge amount of their teaching duties on underpaid and unqualified PhD students. It’s also unfair for the undergraduate students, nowadays paying enormously high fees, to be fobbed off onto PhD students instead of being taught by full-time, experienced and properly trained staff. Of course the system I’m advocating will be difficult to implement in departments that lack external funding for PhD students. Having to pay a full stipend for each student will be more expensive and will consequently lead to a reduction in the number of PhD students that can be funded, but that’s not necessarily a bad thing. Indeed, the whole structure of undergraduate teaching will have to change in many departments. From what I’ve seen in the National Student Survey, that isn’t necessarily a bad thing either…

As I’ve argued a number of times on this blog, the current system drastically overproduces PhD students. The argument, a matter of simple arithmetic, is that on average in a steady state each potential PhD supervisor in the university system will, over their entire career, produce just one PhD student who will get a job in academia. In many fields the vast majority of PhDs have absolutely no chance of getting a permanent job in academia. Some know this, of course, and take their skills elsewhere when they’ve completed, which is absolutely fine. But I get the strong feeling that many bright students are lured into GTAs by the prospect that an illustrious career as an academic awaits them when really they’re just being hired as cheap labour. The result is a growing pool of disillusioned and disaffected people with PhDs who feel they’ve been duped by the system.

In the British system of postgraduate research study, a degree basically takes three years. In the United States it usually takes much longer, so the employment of students as GTAs has less of an impact on their ability to complete their thesis on schedule. Although there are faults with the UK’s fast-track system, there is also much to recommend it. Not, however, if the student is encumbered with a heavy teaching load for the duration. The GTA scheme (which incidentally didn’t exist when I did my PhD nearly thirty years ago) is a damaging American import. In much of continental Europe there are far fewer PhD students and in many countries, especially in Scandinavia, PhD students are actually paid a decent wage. I think that’s the way we should go.


by telescoper at April 24, 2014 12:45 PM

Peter Coles - In the Dark

Elsevier journals — some facts

telescoper:

Read this, and weep as you learn that Elsevier’s ruthless profiteering continues unabated…

Originally posted on Gowers's Weblog:

A little over two years ago, the Cost of Knowledge boycott of Elsevier journals began. Initially, it seemed to be highly successful, with the number of signatories rapidly reaching 10,000 and including some very high-profile researchers, and Elsevier making a number of concessions, such as dropping support for the Research Works Act and making papers over four years old from several mathematics journals freely available online. It has also contributed to an increased awareness of the issues related to high journal prices and the locking up of articles behind paywalls.

However, it is possible to take a more pessimistic view. There were rumblings from the editorial boards of some Elsevier journals, but in the end, while a few individual members of those boards resigned, no board took the more radical step of resigning en masse and setting up with a different publisher under a new name (as some journals have…



by telescoper at April 24, 2014 11:47 AM

Lubos Motl - string vacua and pheno

A quantum proof of a Bousso bound
An aspect of holography is demystified. Perhaps too much.

In the early 1970s, Jacob Bekenstein realized that the black hole event horizons have to carry some entropy. And in fact, it's the highest entropy among all localized or bound objects of a given size or mass. This "hegemony" of the black holes is understandable for a simple reason: in classical physics, black holes are the ultimate phase of a stellar collapse and the entropy has to increase by the second law which means that it is maximized at the end – for black holes.

The entropy \[

S = k \frac{A}{4G\hbar}

\] (where we only set \(c=1\)) is the maximum one that you can squeeze inside the surface \(A\), kind of. This universal Bekenstein-Hawking entropy applies to black holes – i.e. static spacetimes. The term "Bekenstein bound" is often used for inequalities that may involve other quantities such as the mass or the size (especially one of them that I don't want to discuss) but they effectively express the same condition – black holes maximize the entropy.
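
To get a feel for the numbers behind this formula, here is a quick numerical aside (my illustration, not part of the original post): evaluating the Bekenstein-Hawking entropy for a solar-mass Schwarzschild black hole, with the factors of \(c\) restored so that \(S = k A c^3/4G\hbar\), \(A = 4\pi r_s^2\) and \(r_s = 2GM/c^2\).

    # Bekenstein-Hawking entropy of a solar-mass Schwarzschild black hole (illustrative).
    import math

    G, c, hbar, k_B = 6.674e-11, 2.998e8, 1.055e-34, 1.381e-23   # SI units
    M_sun = 1.989e30                                              # kg

    def bh_entropy(mass_kg):
        """S = k A c^3 / (4 G hbar), with A the horizon area 4 pi r_s^2."""
        r_s = 2 * G * mass_kg / c**2          # Schwarzschild radius
        area = 4 * math.pi * r_s**2           # horizon area
        return k_B * area * c**3 / (4 * G * hbar)

    S = bh_entropy(M_sun)
    print(f"S ~ {S:.1e} J/K, i.e. ~{S / k_B:.1e} in units of k_B")
    # about 1.5e54 J/K, i.e. ~1e77 k_B -- far above the ordinary entropy of the Sun

The roughly \(10^{77} k\) that comes out is what makes black holes the entropy champions referred to above.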

Is there a generalization of the inequality to more general time-dependent geometries? The event horizons are null hypersurfaces so in the late 1990s, Raphael Bousso proposed a generalization of the inequality that says that the entropy crossing a null hypersurface that is shrinking everywhere into the future (and may have to be truncated to obey this condition) is also at most \(kA/4G\); yes, \(k\) is always the Boltzmann constant that I decided to restore. I remember those days very well – my adviser Tom Banks was probably the world's most excited person when Raphael Bousso published those papers.

Various classical thermodynamic proofs were given for this inequality. I suppose that they would use Einstein's equations as well as some energy conditions (saying that the energy density is never negative, or some more natural cousins of this simple condition). Finally, there is also a quantum proof of the statement.




Today, Raphael Bousso, Horacio Casini, Zachary Fisher, and Juan Maldacena released a new preprint called
Proof of a Quantum Bousso Bound
They prove that\[

S_{\rm state} - S_{\rm vacuum} \leq k \frac{A-A'}{4G\hbar}

\] where the entropies \(S\) are measured as the entropies (of the actual state and/or the vacuum state, as indicated by the subscript) crossing the light sheet, \(A,A'\) are initial and final areas on the boundary of the light sheet, and the light sheet is a shrinking null hypersurface connecting these two areas.




One must be aware of the character of their proof. The entropies are computed as the von Neumann entropies\[

S = -k\,{\rm Tr}\, (\rho \ln \rho)

\] so the proof uses the methods of quantum statistical physics. Also, they assume that the entropy is carried by free, i.e. non-interacting (obeying a quadratic action), non-gravitational fields propagating on a curved gravitational background. The backreaction is neglected, too. Some final portions of the paper are dedicated to musings about possible generalizations to the case of significant backreaction and to interacting fields.
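
As a small numerical aside of my own (the paper's actual computation is a far more delicate relative-entropy argument on the light sheet), the von Neumann entropy of a density matrix can be evaluated directly from its eigenvalues:

    # Von Neumann entropy S = -Tr(rho ln rho), in units of k, from the eigenvalues of rho.
    import numpy as np

    def von_neumann_entropy(rho):
        evals = np.linalg.eigvalsh(rho)
        evals = evals[evals > 1e-12]          # drop zero eigenvalues (0 ln 0 -> 0)
        return float(-np.sum(evals * np.log(evals)))

    pure  = np.array([[1.0, 0.0], [0.0, 0.0]])   # pure qubit state: S = 0
    mixed = np.eye(2) / 2.0                      # maximally mixed qubit: S = ln 2
    print(von_neumann_entropy(pure), von_neumann_entropy(mixed), np.log(2))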

They are not using any energy conditions which makes the proof "strong". Also, they say that they are not using any relationship between the energy and entropy. I think that this is misleading. They are and must be using various types of the Hamiltonian to say something about the entropy. Otherwise, Newton's constant couldn't possibly get to the inequality at all! After all, the evolution is dictated by the Hamiltonian and they need to know it to make the geometry relevant. Moreover, I think that the proof must be a rather straightforward translation of a classical or semiclassical proof to the quantum language.

Under some conditions, the inequality has to be right even in the interacting and backreacting cases. I haven't understood the proof in detail but I feel that it's a technical proof that had to exist and one isn't necessarily learning something conceptual out of it. By this claim, I am not trying to dispute that holography plays a fundamental role in quantum gravity. It undoubtedly does. But particular "holographic inequalities" such as this one are less canonical or unique or profound than the original Heisenberg uncertainty principle in quantum mechanics\[

\Delta x \cdot \Delta p \geq \frac{\hbar}{2}.

\] This inequality more or less "directly inspires" the commutator\[

[x,p] = xp - px = i\hbar

\] which conveys pretty much all the new physics of quantum mechanics. While the upper bounds for the entropy are the quantum gravity analogues of the Heisenberg inequality above, they are less unique and they don't seem to directly imply any comprehensible equation similar to one for the commutator – an equation that could be used to directly "construct" a theory of quantum gravity. At least it looks so to me. So quantum gravity is a much less "constructible" theory than quantum mechanics of one (or several) non-relativistic particles.
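
As another aside of mine (not in the original post), the commutator above is easy to check numerically in a truncated harmonic-oscillator basis, where \(x\) and \(p\) become finite matrices built from ladder operators; the identity \([x,p]=i\hbar\) holds exactly except in the last diagonal entry, an artifact of the truncation.

    # Numerical check of [x, p] = i*hbar in a truncated harmonic-oscillator basis.
    import numpy as np

    hbar, m, omega, N = 1.0, 1.0, 1.0, 12            # units with hbar = m = omega = 1
    a = np.diag(np.sqrt(np.arange(1, N)), 1)         # annihilation operator (N x N)
    adag = a.conj().T                                # creation operator

    x = np.sqrt(hbar / (2 * m * omega)) * (a + adag)
    p = 1j * np.sqrt(m * hbar * omega / 2) * (adag - a)

    comm = x @ p - p @ x
    print(np.allclose(comm[:-1, :-1], 1j * hbar * np.eye(N - 1)))  # True
    print(comm[-1, -1])                              # truncation artifact: -i*hbar*(N-1)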

On the other hand, I still think that the power of Bousso-like inequalities hasn't been depleted yet.

Note that similar Bousso-like inequalities and similar games talk about the areas in the spacetime so they depend on the isolation of the metric tensor degrees of freedom from the rest of physics. This is why they are pretty much inseparably tied to the general relativistic approximation of the physics. String/M-theory unifies the spacetime geometry with all other matter fields in physics but this unification has to be cut apart before we discuss the geometric quantities which we have to do before we formulate things like the Bousso inequality and many other results. In other words, it seems likely that there cannot be any "intrinsically stringy" proof of this inequality because the inequality seems to depend on some common non-stringy approximations of physics.

by Luboš Motl (noreply@blogger.com) at April 24, 2014 11:13 AM

The n-Category Cafe

Finite Products Theories

Here’s a classic theorem about finite products theories, also known as algebraic theories or Lawvere theories. You can find it in Toposes, Triples and Theories, but it must go way back to Lawvere’s thesis. In my work with Nina Otter on phylogenetic trees, we need a slight generalization of it… and if true, this generalization must already be known. So I really just want a suitable reference!

Theorem. Suppose \(C\) is a category with finite products that is single-sorted: every object is a finite product of copies of a single object \(x \in C\). Let \(Mod(C)\) be the category of models of \(C\): that is, product-preserving functors

\[ \phi : C \to Set \]

and natural transformations between these. Let

\[ U : Mod(C) \to Set \]

be the functor sending any model to its underlying set:

\[ U(\phi) = \phi(x) \]

Then \(U\) has a left adjoint

\[ F : Set \to Mod(C) \]

and \(Mod(C)\) is equivalent to the category of algebras of the monad

\[ U F : Set \to Set \]

As a result, any model \(\phi\) can be written as a coequalizer

\[ F U F U(\phi) \stackrel{\longrightarrow}{\longrightarrow} F U(\phi) \longrightarrow \phi \]

where the arrows are built from the counit of the adjunction

\[ \epsilon : F U \to 1_{Mod(C)} \]

in the obvious ways: \(F U (\epsilon_{F U (\phi)})\), \(\epsilon_{F U F U(\phi)}\) and \(\epsilon_\phi\).
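
A concrete low-tech instance of the theorem (my illustration, not from the post): for the Lawvere theory of monoids, \(Mod(C)\) is the category of monoids, \(F\) sends a set to the free monoid of finite lists on it, and \(U F\) is the familiar list monad; a monoid is then exactly an algebra of that monad. A minimal Python sketch of the monad structure and the algebra laws:

    # The list monad as U.F for the Lawvere theory of monoids (illustrative sketch).

    def unit(x):
        """Unit X -> UF(X): an element becomes a one-letter word."""
        return [x]

    def mult(xss):
        """Multiplication UFUF(X) -> UF(X): flatten a list of words (concatenation)."""
        return [x for xs in xss for x in xs]

    # An algebra of the monad is a set with a structure map UF(A) -> A satisfying the
    # usual axioms -- i.e. exactly a monoid. For the monoid (int, +, 0):
    def alg(xs):
        return sum(xs)

    assert alg(unit(7)) == 7                            # unit law
    w = [[1, 2], [3], [4, 5]]
    assert alg(mult(w)) == alg([alg(xs) for xs in w])   # multiplication law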

The generalization we need is quite mild, I hope. First, we need to consider multi-typed theories. Second, we need to consider models in \(Top\) rather than \(Set\). So, this is what we want:

Conjecture. Suppose \(C\) is a category with finite products that is \(\Lambda\)-sorted: every object is a finite product of copies of certain objects \(x_\lambda\), where \(\lambda\) ranges over some index set \(\Lambda\). Let \(Mod(C)\) be the category of topological models of \(C\): that is, product-preserving functors

\[ \phi : C \to Top \]

and natural transformations between these. Let

\[ U : Mod(C) \to Top^\Lambda \]

be the functor sending any model to its underlying spaces:

\[ U(\phi) = (\phi(x_\lambda))_{\lambda \in \Lambda} \]

Then \(U\) has a left adjoint

\[ F : Top^\Lambda \to Mod(C) \]

and \(Mod(C)\) is equivalent to the category of algebras of the monad

\[ U F : Top^\Lambda \to Top^\Lambda \]

As a result, any model \(\phi\) can be written as a coequalizer

\[ F U F U(\phi) \stackrel{\longrightarrow}{\longrightarrow} F U(\phi) \longrightarrow \phi \]

where the arrows are built from the counit of the adjunction in the obvious ways.

Comments

1) There shouldn’t be anything terribly special about \(Top\) here: any sufficiently nice category should work… but all I need is \(Top\). What counts as ‘sufficiently nice’? If we need to replace \(Top\) by a ‘convenient category of topological spaces’, to make it cartesian closed, that’s fine with me – just let me know!

2) I’m sure this conjecture, if true, follows from some super-duper-generalization. I don’t mind hearing about such generalizations, and I suspect some of you will be unable to resist mentioning them… so okay, go ahead, impress me…

… but remember: all I need is this puny little result!

In fact, all I really need is a puny special case of this puny result. If it works, it will be a small, cute application of algebraic theories to biology.

by john (baez@math.ucr.edu) at April 24, 2014 11:00 AM

April 23, 2014

Christian P. Robert - xi'an's og

controlled thermodynamic integral for Bayesian model comparison

Chris Oates, Theodore Papamarkou, and Mark Girolami (all from the University of Warwick) just arXived a paper on a new form of thermodynamic integration for computing marginal likelihoods. (I had actually discussed this paper with the authors on a few occasions when visiting Warwick.) The other name of thermodynamic integration is path sampling (Gelman and Meng, 1998). In the current paper, the path goes from the prior to the posterior through a sequence of intermediary distributions obtained by raising the likelihood to a power. While path sampling is quite an efficient method, the authors propose to improve it through the recourse to control variates, in order to decrease the variance. The control variate is taken from Mira et al. (2013), namely a one-dimensional temperature-dependent transform of the score function. (Strictly speaking, this is an asymptotic control variate in that its mean is only asymptotically zero.) This control variate is then incorporated within the expectation inside the path sampling integral, and its arbitrary elements are calibrated against the variance of that integral, except for the temperature ladder, where the authors use a standard geometric rate, as the approach does not account for Monte Carlo and quadrature errors. (The degree of the polynomials used in the control variates is also arbitrarily set.) Interestingly, the paper mixes a lot of recent advances, from the zero variance notion of Mira et al. (2013) to the manifold Metropolis-adjusted Langevin algorithm of Girolami and Calderhead (2011), and uses pMCMC (Jasra et al., 2007) as a base method. The examples processed in the paper are regression (where the controlled version truly has a zero variance!) and logistic regression (with the benchmarked Pima Indian dataset), with a counter-example of a PDE interestingly proposed in the discussion section. I quite agree with the authors that the method is difficult to envision in complex enough models. I also did not see mentions therein of the extra time involved in using this control variate idea.
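
For readers meeting path sampling for the first time, here is a bare-bones sketch of mine of the identity \(\log Z = \int_0^1 \mathbb{E}_{p_t}[\log L(\theta)]\,dt\) with \(p_t \propto \pi(\theta) L(\theta)^t\), on a conjugate normal model where each tempered posterior can be sampled exactly and the true marginal likelihood is available for comparison; the paper's actual contribution, the score-based control variate, is deliberately not reproduced here.

    # Plain thermodynamic integration (path sampling) on a conjugate normal model.
    # log Z = integral_0^1 E_{p_t}[ log L(theta) ] dt,  p_t proportional to prior * L^t
    import numpy as np
    from scipy.stats import multivariate_normal

    rng = np.random.default_rng(0)
    sigma, tau, n = 1.0, 2.0, 20                   # likelihood sd, prior sd, sample size
    y = rng.normal(0.5, sigma, size=n)             # simulated data

    def log_lik(theta):                            # theta: array of posterior draws
        sq = np.sum((y[:, None] - theta[None, :])**2, axis=0)
        return -0.5 * n * np.log(2 * np.pi * sigma**2) - 0.5 * sq / sigma**2

    K = 30
    ts = (np.arange(K + 1) / K) ** 5               # standard powered temperature ladder

    means = []
    for t in ts:
        prec = 1.0 / tau**2 + t * n / sigma**2     # tempered posterior is Gaussian here
        mu = (t * y.sum() / sigma**2) / prec
        draws = rng.normal(mu, 1.0 / np.sqrt(prec), size=20000)
        means.append(log_lik(draws).mean())        # Monte Carlo estimate of E_{p_t}[log L]

    m = np.array(means)
    logZ_ti = float(np.sum(np.diff(ts) * (m[:-1] + m[1:]) / 2))   # trapezoid along the path

    cov = sigma**2 * np.eye(n) + tau**2 * np.ones((n, n))
    logZ_exact = multivariate_normal(mean=np.zeros(n), cov=cov).logpdf(y)
    print(logZ_ti, logZ_exact)                     # the two values should nearly agree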


Filed under: Books, pictures, Running, Statistics, University life Tagged: advanced Monte Carlo methods, arXiv, control variate, Iceland, MCMC algorithms, Monte Carlo Statistical Methods, path sampling, Pima Indians, pMCMC, Riemann manifold, thermodynamic integration

by xi'an at April 23, 2014 10:14 PM

Tommaso Dorigo - Scientificblogging

Meteoroid Caught Free-Falling On Video ? No, A Stone In The Parachute Pack
A meteor caught on film during its non-luminous free fall at terminal velocity? Or an elaborate hoax? Or something else? I must admit that when I saw the video posted on the internet a few weeks ago I was intrigued, and exercised a willful suspension of disbelief. The footage showed a free-falling black stone that really looked like a meteoroid, passing by the owner of the camera hanging on a parachute, in the skies of Norway. I wanted to believe!


Above: sum of frames from the video shot by the parachuters


by Tommaso Dorigo at April 23, 2014 07:45 PM

Emily Lakdawalla - The Planetary Society Blog

Days before its crash, LADEE saw zodiacal light above the lunar horizon
LADEE ended its mission as planned with a crash into the lunar surface on April 17. Just days prior, it turned its star tracker camera toward the lunar horizon and captured a striking series of images of the lunar sunrise and zodiacal light.

April 23, 2014 05:55 PM

Peter Coles - In the Dark

A Note to the Physics REF Panel

I’ve just been skimming through an interesting report about the strength of UK physics research. One of the conclusions of said document is that UK physics research is the best in the world in terms of quality.

I couldn’t resist a brief post to point this out to any members of the Physics panel involved in the 2014 Research Excellence Framework. My motivation for doing this is that the Physics panel of the 2008 Research Assessment Exercise evidently came to the conclusion that UK physics research wasn’t very good at all, awarding a very much lower fraction of 4* (world-leading) grades than many other disciplines, including chemistry. I’ve never understood why the Panel arrived at such a low opinion of its own discipline, but there you go..

Physics departments across the country have fought very hard to recover from the financial and reputational damage inflicted by the 2008 RAE panel’s judgement. Let’s just hope there isn’t another unwarranted slap in the face in store when the 2014 results are announced later this year…

 

UPDATE: I’m grateful to Paul Crowther for pointing out a surprising fact based on a talk given by the Chairman of the Physics RAE Panel in 2008, Sir John Pendry. Here are the slides in full, but the pertinent fact is the distribution of 4*, 3* and 2* grades across disciplines shown in this table:

[Table: distribution of 4*, 3* and 2* grades across disciplines, RAE 2008]

You can see that they are in fact broadly similar across disciplines. However, what is clear is that the highest scoring departments in Chemistry did much better than the highest-scoring in Physics; for example top of the table for Physics was Lancaster with 25% of its outputs graded 4* while top in Chemistry was Cambridge with 40%. Is it really justifiable that the top physics departments were so much worse than the top chemistry departments? Suspicion remains that the Physics scores were downgraded systematically to produce the uncannily similar profiles shown in the table. Since all the RAE documents have been shredded, we’ll never know whether that happened or not…


by telescoper at April 23, 2014 04:45 PM

astrobites - astro-ph reader's digest

How Weird Is Our Solar System?

Title: The Solar System and the Exoplanet Orbital Eccentricity – Multiplicity Relation
Authors: Mary Anne Limbach, Edwin L. Turner
First Author’s Institution: Department of Astrophysical Sciences and Department of Mechanical and Aerospace Engineering, Princeton University
Paper Status: Submitted to Proceedings of the National Academy of Sciences

Earth and its Solar System compatriots all have nearly circular orbits, but many exoplanets orbit their stars on wildly eccentric paths. Is our home system strange? Or is our sense of the data skewed? 


The nearly circular orbits in our solar system, not drawn to scale. VERY not-drawn-to-scale. (Source: NASA)

In looking for and learning about planets beyond our solar system, there is always the question: are we special? On the one hand, per the Copernican Principle, the history of science is practically just a string of discoveries about how we’re not unique: we’re not the center of the Solar System, the Solar System isn’t the center of the universe, our galaxy is one of many, and heck, this might not even be the only universe. Planets, we’re discovering, are a dime a dozen. A dime a billion.

But in one very big way, we’re afraid we’re unique: in being alive. The question of Earth’s specialness is at the center of questions about life elsewhere in the universe. Do other planets have water? Are these planets in their habitable zones? Good atmosphere? Plate tectonics? The list of qualifications seems to go on and on, leading us to wonder if maybe at some point the Copernican Principle breaks down.

One such arena for debate has been the structure of our Solar System—as we’ve gained an understanding of other planetary systems, ours has started to look, rather than mediocre, downright unique. We don’t know if our exact situation is necessary for life, but with a sample size of 1, it’s been dismaying to see how different other planetary systems seem to be from our own. Similarly, we built our first theories of planet formation off what we saw in the Solar System. Exoplanet discoveries threw all that into question. We’ve seen hot Jupiters orbiting up close to their stars, planets around binary stars, and, in the focus of today’s paper, a prevalence of very eccentric orbits.

While the planets in the Solar System show consistently low eccentricities in their orbits—their orbits are nearly circular—exoplanets have a much wider range of eccentricities, with a much higher average eccentricity overall. Theories of planetary system evolution have had to work to take this range into account. One approach has been to model system evolution with an eye on multi-planet interaction; prior research has predicted that multi-planet systems might have less eccentric orbits, thanks to planet-planet interactions. Today’s paper uses the data we have on exoplanets to test whether multiplicity—the presence of many planets in a system—could indeed dampen eccentricity.

Figure 1: Mean and median eccentricities in RV exoplanet systems and the solar system as a function of multiplicity (number of planets in the system). As the number of planets increases, eccentricity decreases. The plateau in eccentricity at low multiplicity may be due to contamination of the one-planet data with higher-multiplicity systems. The "SS" denotes solar system planets.


The authors analyzed data for 403 cataloged planets that have been found using the radial velocity technique. (They focused on planets found via radial velocity because this detection method allows for relatively reliable measurement of eccentricity.) While most of these planets orbit their stars alone or with one other planet, some systems with four, five, and even six planets have been found; our own Solar System, with its eight planets, was also included. The authors found a strong negative correlation between multiplicity and eccentricity: the more planets a system had, the less eccentric their orbits were. This correlation is most visible when the mean and median eccentricity are plotted as a function of multiplicity (as shown to the right). The Solar System fits the trend nicely. Maybe we’re not so special after all.
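
The headline statistic is simple to reproduce in spirit on any planet catalogue; a hedged pandas sketch (the table and column names below are made up, not those of the paper's actual catalogue):

    # Mean/median eccentricity versus multiplicity, in the spirit of the analysis above.
    import pandas as pd

    # toy stand-in for a radial-velocity planet catalogue (values are invented)
    catalog = pd.DataFrame({
        "star":         ["A", "A", "B", "C", "C", "C", "D"],
        "eccentricity": [0.31, 0.25, 0.48, 0.05, 0.08, 0.02, 0.62],
    })

    # multiplicity = number of known planets around each host star
    catalog["multiplicity"] = catalog.groupby("star")["star"].transform("size")

    trend = catalog.groupby("multiplicity")["eccentricity"].agg(["mean", "median", "count"])
    print(trend)   # in the real sample, mean eccentricity falls as multiplicity rises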

The deviation from the trend was found in one- and two-planet systems, some of which weren’t as eccentric as these findings would predict. These systems may have additional planets as yet undetected; they could be good targets for future searches for companions to known exoplanets.

This statistical analysis lends support to models of planetary formation in which planet-planet interactions within a system contribute to the circularizing of planets’ orbits.  The authors suggest constraints for the relationship of multiplicity and orbital eccentricity that future models should take into account. And overall, this paper reassures us of our ongoing mediocrity. For a multi-planet Solar System, our planets’ low orbital eccentricities aren’t so eccentric after all.

 

 

by Jaime Green at April 23, 2014 03:40 PM

The Great Beyond - Nature blog

Sponsor a fish and save Canada’s experimental lakes

Canada’s Experimental Lakes Area is now raising crowdfunding donations.

Government of Ontario

Posted on behalf of Brian Owens.

Fans of environmental science can now have a direct role in helping Canada’s unique Experimental Lakes Area (ELA) continue to do the research it has done for decades.

The International Institute for Sustainable Development (IISD), based in Winnipeg, took over running the ELA on 1 April, after the federal government eliminated funding for the decades-old environmental research facility (see ‘Test lakes face closure’ and ‘Last minute reprieve for Canada’s research lakes’). The Canadian provinces of Ontario and Manitoba have stepped in to provide money to run the facility and conduct research for the next several years, but more cash is needed to restore research at the ELA to its former levels.

So the IISD has turned to the public. It launched an appeal on the crowdfunding site Indiegogo seeking contributions to expand research and make the ELA less dependent on government largesse.

Along with the usual array of magnets and t-shirts offered as perks by typical crowdfunding campaigns, the IISD has a few unique offers. For CAN$100 (US$90) you can sponsor a plankton count in one sample of lake water, and for CAN$200 you can sponsor a fish in one of the lakes. When your trout, white sucker or pike is caught and tagged, researchers will send you a photo of it and keep you updated on its life for the next five years, every time it is recaptured.

And if you’re feeling really generous (and have a high tolerance for mosquito bites), CAN$2,000 gets you a spot on a tour of the ELA. If that seems a bit steep, there is a cheaper option. Send them CAN$60 and a digital photo of yourself, and they will Photoshop you into a picture of Lake 239.

by Davide Castelvecchi at April 23, 2014 03:13 PM

Peter Coles - In the Dark

Sonnet No. 30

The exact date of Shakespeare’s birth is not known but, by tradition, it is celebrated on 23rd April, St George’s Day. Today therefore marks the 450th anniversary of his birth.

This sonnet is clearly closely related to the one preceding it, No. 29, and is thought to have been written to the Earl of Southampton. I picked it for today not just because it’s beautiful, but also because it provides an example of how deeply embedded in our language certain phrases from Shakespeare have become; the standard English translation of Marcel Proust’s A la recherche du temps perdu is entitled Remembrance of Things Past, though I have never felt it was a very apt rendering. English oneupmanship, perhaps?

When to the sessions of sweet silent thought
I summon up remembrance of things past,
I sigh the lack of many a thing I sought,
And with old woes new wail my dear time’s waste:
Then can I drown an eye, unused to flow,
For precious friends hid in death’s dateless night,
And weep afresh love’s long since cancelled woe,
And moan the expense of many a vanished sight:
Then can I grieve at grievances foregone,
And heavily from woe to woe tell o’er
The sad account of fore-bemoanèd moan,
Which I new pay as if not paid before.
But if the while I think on thee, dear friend,
All losses are restored and sorrows end.

by William Shakespeare (1564-1616).


by telescoper at April 23, 2014 02:58 PM

ZapperZ - Physics and Physicists

Helium Balloon In An Accelerating Vehicle
A while back, in Part 6 of my Revamping Intro Physics Lab series, I mentioned an "experiment" that students can do involving a suspended helium balloon in an accelerating vehicle. I mentioned that this would be an excellent example of something where the students get to guess what would happen, and at first, what actually happens does not make sense.

Well now, we have a clear demonstration of this effect on video.



There is a good explanation of why this occurs in the video. It is also nice that he included a hanging pendulum at the beginning for comparison, since the pendulum behaves the way most of us expect the balloon to behave.

Might be a nice one to quiz your kids if you are teaching basic, intro physics.

Zz.

by ZapperZ (noreply@blogger.com) at April 23, 2014 01:52 PM

Symmetrybreaking - Fermilab/SLAC

A 'crack in the cosmic egg'

The recent BICEP2 discovery of evidence for cosmic inflation might point to new physics.

Last month, scientists on the BICEP2 experiment announced the first hard evidence for cosmic inflation, the process by which the infant universe swelled from microscopic to cosmic size in an instant.

Scientists have thought for more than three decades that we might someday find such a signal, so the discovery was not entirely unexpected. What was unexpected, however, was just how strong the signal turned out to be.

April 23, 2014 01:00 PM

Lubos Motl - string vacua and pheno

Neutron spectroscopy constrains axions, chameleons
Tobias Jenke of Vienna and 11 co-authors from Austria, Germany, and France have performed an interesting experiment with neutrons in the gravitational field (although they have done similar experiments in the past) and their new preprint was just published in the prestigious PRL (Physical Review Letters):
Gravity Resonance Spectroscopy Constrains Dark Energy and Dark Matter Scenarios (arXiv, PRL)

Semi-popular: APS, ArsTechnica, Huff. Post
Recall that neutrons' wave functions in the Earth's gravitational field have previously been mentioned on this blog as a way to debunk the "gravity as an entropic force": LM, Archil Kobakhidze.



Outline and results.

What have they done?




Well, they have prepared some very cold neutrons and sent them in between two horizontal mirrors which were separated by \(\Delta z = 30\,\mu{\rm m}\) in altitude. As you know, this is a nice and simple system in undergraduate non-relativistic quantum mechanics, a potential well.




If the walls were infinitely tall and there were no gravity, the energy eigenstates would be\[

\psi_n(z) = C_n \sin \left( \frac{\pi n z}{\Delta z} \right), \quad n=1,2,3,\dots

\] The spectrum is discrete. If the gravitational field is added, the wave functions are no longer simple sines. Instead, they are combinations of the Airy functions \({\rm Ai}(z)\) of a sort – with the right coefficients and the right boundary conditions to make everything work. It means that the \(n\)-th wave function is more likely to be found near the bottom wall (mirror) and the wave function is more quickly oscillating over there. Note that the unrestricted linear potential has a continuous energy spectrum (just shifting the wave function in the \(z\)-direction adds some energy) while the mirrors make the spectrum discrete.
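
For orientation (my aside, and ignoring the upper mirror), the energy levels of a neutron bouncing above a single mirror follow directly from the zeros \(a_n\) of the Airy function: \(E_n = |a_n|\,(m g^2 \hbar^2/2)^{1/3}\), which lands in the pico-eV range.

    # Energy levels of a neutron bouncing on a single mirror in Earth's gravity
    # (the upper mirror of the real experiment is ignored in this simplified sketch).
    from scipy.special import ai_zeros

    hbar = 1.0546e-34      # J s
    m_n  = 1.675e-27       # neutron mass, kg
    g    = 9.81            # m/s^2
    eV   = 1.602e-19       # J

    a_n = ai_zeros(4)[0]                                  # first four zeros of Ai (negative)
    E = abs(a_n) * (m_n * g**2 * hbar**2 / 2) ** (1 / 3)  # energies in joules
    print([f"{e / eV * 1e12:.2f} peV" for e in E])        # ~1.41, 2.46, 3.32, 4.08 peV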

These states are discrete, but the experimenters also drive the system at a chosen frequency – in the way you know from Rabi spectroscopy. I think that in the end they tickled the mirrors mechanically, although they had also wanted to use variable magnetic gradients. In effect, the height of the walls (above the mirrors) isn't infinite but finite and oscillating as \(\cos \omega t \) with some frequency between 50 and 800 Hertz that they may adjust. This extra periodic, time-dependent disturbance may be treated as a perturbation of the original quantum mechanical system that allows transitions between the energy levels, and they measure the transition probabilities. For example, the transition \(|2\rangle\leftrightarrow |4\rangle\) has been measured for the first time.

Everything agrees with quantum mechanics supplemented with Newton's potential.

It follows that they may eliminate at the 95% confidence level some (previously viable) models for dark matter and dark energy, namely "chameleon fields" (a species of quintessence) and axions with masses between \(10\,\mu{\rm eV}\) and \(1\,{\rm eV}\) – which would add a Yukawa potential with the Yukawa length between \(2\,{\rm cm}\) and \(0.2\,\mu{\rm m}\) as long as the coupling constant is greater than something like \(g_s g_p \geq 4\times 10^{-16}\).



Some people have proposed that these animals – the chameleons after which the chameleon fields are named – are everywhere around us, including in between the mirrors, and that they are responsible for the accelerated expansion of the Universe.

Note that the axions would mediate new Yukawa-like spin-dependent forces between the neutron in between the mirrors and the nucleons inside the mirrors. The Yukawa wavelength is directly linked to the distance between the mirrors. I find it plausible that such axions may exist but the "unnaturally" tiny interaction constants that are required by the experiments make them less likely. The chameleon scenario would produce some additional potential as well and it is excluded for certain ranges of parameters, too.

This GRS (gravity resonance spectroscopy) approach is a powerful way to test models of very light and weakly interacting fields and particles and extra contributions to Newton's gravitational force. There is some overlap with the experiments that have tested old large dimensions (via modifications of Newton's inverse-square law: I think that if BICEP2 is right, old large dimensions are wrong and no corrections to Newton's law will ever be found in this way) but they are not really the same. GRS discussed here uses the quantum wave functions so it may feel certain things more finely than the very fine, but still classical mechanical experiments that have measured gravity beneath one millimeter.

GRS, much like other precision experiments, relies on resonances and exact frequencies – experimenters get very far with frequency measurements, indeed.

by Luboš Motl (noreply@blogger.com) at April 23, 2014 12:19 PM

April 22, 2014

Christian P. Robert - xi'an's og

AISTATS 2014 (day #1)

First day at AISTATS 2014! After three Icelandic vacation days driving (a lot) and hiking (too little) around South- and West-Iceland, I joined close to 300 attendees for this edition of the AISTATS conference series. I was quite happy to be there, if only because I had missed the conference last year (in Phoenix) and did not want this to become a tradition… Second, the mix of statistics, artificial intelligence and machine learning that characterises this conference is quite exciting, if challenging at times. What I most appreciated in this discovery of the conference is the central importance of the poster session, most talks being actually introductions to or oral presentations of posters! I find this feature terrific enough (is there such a notion as “terrific enough”?!) to be worth adopting in future conferences I am involved in. I just wish I had managed to tour the whole collection of posters today… The (first and) plenary lecture was delivered by Peter Bühlmann, who spoke about a compelling if unusual (for me) version of causal inference. This was followed by sessions on Gaussian processes, graphical models, and mixed data sources. One highlight talk was the one by Marc Deisenroth, who showed impressive robotic fast learning based on Gaussian processes. At the end of this full day, I also attended an Amazon mixer where I learned about Amazon‘s entry on the local market, where it seems the company is getting a better picture of the current and future state of the U.S. economy than governmental services, thanks to a very fine analysis of the sales and entries on Amazon‘s site. Then it was time to bike “home” on my rental bike, in the setting sun…


Filed under: Mountains, pictures, Running, Statistics, Travel, University life Tagged: AISTATS 2014, Amazon, Þingvellir, Iceland, machine learning, Phoenix

by xi'an at April 22, 2014 10:14 PM

Emily Lakdawalla - The Planetary Society Blog

Upcoming public appearances: me, Bill Nye, and Planetary Radio Live in Washington and Los Angeles
I have a spate of several public appearances coming up; I hope some of you can come out and see me and other Planetary Society folks, including Bill Nye and two, count them, two Planetary Radio Live events!

April 22, 2014 10:06 PM

Quantum Diaries

Can 2130 physicists pounding on keyboards turn out Shakespeare plays?

The CMS Collaboration, of which I am a member, has submitted 335 papers to refereed journals since 2009, including 109 such papers in 2013. Each of these papers had about 2130 authors. That means that the author list alone runs 15 printed pages. In some cases, the author list takes up more space than the actual content of the paper!

One might wonder: How do 2130 people write a scientific paper for a journal? Through a confluence of circumstances, I’ve been directly involved in the preparation of several papers over the last few months, so I have been thinking a lot about how this gets done, and thought I might use this opportunity to shed some light on the publication process. What I will not discuss here is why a paper should have 2130 authors and not more (or fewer)—this is a very interesting topic, but for now we will work from the premise that there are 2130 authors who, by signing the paper, take scientific responsibility for the correctness of its contents. How can such a big group organize itself to submit a scientific paper at all, and how can it turn out 109 papers in a year?

Certainly, with this many authors and this many papers, some set of uniform procedures is needed, and some number of people must put in substantial effort to maintain and operate the procedures. Each collaboration does things a bit differently, but all have the same goal in mind: to submit papers that are first correct (in the scientific sense of “correct” as in “not wrong with a high level of confidence”), and that are also timely. Correct takes precedence over timely; it would be quite an embarrassment to produce a paper that was incorrect because the work was done quickly and not carefully. Fortunately, in my many years in particle physics, I can think of very few cases when a correction to a published paper had to be issued, and never have I seen a paper from an experiment I have worked on be retracted. This suggests that the publication procedures are indeed meeting their goals.

But even though being correct trumps everything, having an efficient publication process is still important. It would also be a shame to be scooped by a competitor on an interesting result because your paper was stuck inside your collaboration’s review process. So there is an important balance to be struck between being careful and being efficient.

One thing that would not be efficient would be for every one of the 2130 authors to scrutinize every publishable result in detail. If we were to try to do this, everyone would soon become consumed by reviewing data analyses, rather than working on the other necessary tasks of the experiment, from running the detector to processing the data to designing upgrades of the experiment. And it’s hard to imagine that, say, once 1000 people have examined a result carefully, another thousand would uncover a problem. That being said, everyone needs to understand that even if they decline to take part in the review of a particular paper, they are still responsible for it, in accordance with generally accepted guidelines for scientific authorship.

Instead, the review of each measurement or set of measurements destined for publication in a single paper is delegated by the collaboration to a smaller group of people. Different collaborations have different ways of forming these review committees—some create a new committee for a particular paper that dissolves when that paper is published, while others have standing panels that review multiple analyses within a certain topic area. These committees usually include several people with expertise in that particular area of particle physics or data analysis techniques, but also one or two who serve as interested outsiders who might look at the work in a different way and come up with new questions about it. The reviewers tend to be more senior physicists, but some collaborations have allowed graduate students to be reviewers too. (One good way to learn how to analyze data is to carefully study how other people are doing it!)

The scientists who are performing a particular measurement with the data are typically also responsible for producing a draft of the scientific paper that will be submitted to the journal. The review committee is then responsible for making sure that the paper accurately describes the work and will be understandable to physicists who are not experts on this particular topic. There can also be a fair amount of work at this stage to shape the message of the paper; measurements produce results in the form of numerical values of physical quantities, but scientific papers have to tell stories about the values and how they are measured, and expressing the meaning of a measurement in words can be a challenge.

Once the review committee members think that a paper is of sufficient quality to be submitted to a journal, it is circulated to the entire collaboration for comment. Many collaborations insert a “style review” step at this stage, in which a physicist who has a lot of experience in the matter checks that the paper conforms to the collaboration’s style guidelines. This ensures some level of uniformity in terminology across all of the collaboration’s papers, and it is also a good chance to check that the figures and tables are working as intended.

The circulation of a paper draft to the collaboration is a formal process that has potential scaling issues, given how many people might submit comments and suggestions. On relatively small collaborations such as those at the Tevatron (my Tevatron-era colleagues will find the use of the word “small” here ironic!), it was easy enough to take the comments by email, but the LHC collaborations have a more structured system for collecting and archiving comments. Collaborators are usually given about two weeks to read the draft paper and make comments. How many people send feedback can vary greatly with each paper; hotter topics might attract more attention. Some conscientious collaborators do in fact read every paper draft (as far as I can tell). To encourage participation, some collaborations do make explicit requests to a randomly-chosen set of institutes to scrutinize the paper, while some institutes have their own traditions of paper review. Comments on all aspects of the paper are typically welcome, from questions about the physics or the veracity of the analysis techniques, to suggestions on the organization of the paper and descriptions of data analysis, to matters like the placement of commas.

In any case, given the number of people who read the paper, the length of the comments can often exceed the length of the paper itself. The scientists who wrote the paper draft then have to address all of the comments. Some comments lead to changes in the paper to explain things better, or to additional cross-checks of the analysis to address a point that was raised. Many textual suggestions are implemented, while others are turned down with an explanation of why they are not necessary or harmful to the paper. The analysis review committee then verifies that all significant comments have been properly considered, and checks that the resulting revised paper draft is in good shape for submission.

Different collaborations have different final steps before the paper is actually submitted to a journal. Some have certain leaders of the collaboration, such as the spokespersons and/or physics coordinators, read the draft and make a final set of recommendations that are to be implemented before submission. Others have “publication committees” that organize public final readings of a paper that can lead to changes. At this stage the authors of the original draft very much hope that things go smoothly and that paper submission will be imminent.

And this whole process comes before the scientific tradition of independent, blind peer review! Journals have their own procedures for appointing referees who read the paper and give the journal editors advice on whether a paper should be published, and what changes or checks they might require before recommending publication. The interaction with the journal and its referees can also take quite some time, but almost always it ends with a positive result. The paper has gone through so many levels of scrutiny already that the output is really a high-quality scientific product that describes reproducible results, and that will ultimately stand the test of time.

A paper that describes a measurement in particle physics is the last step of a long journey, from the conception of the experiment, the design and subsequent construction of the apparatus, its operation over the course of years to collect the data sample, the processing of the data, and the subsequent analysis that leads to numerical values of physical quantities and their associated uncertainties. The actual writing of the papers, and process of validating them and bringing 2130 physicists to agree that the paper has told the right story about the whole journey is an important step in the creation of scientific knowledge.

by Ken Bloom at April 22, 2014 08:38 PM

astrobites - astro-ph reader's digest

New detections of exoplanet HD 95086 b with the Gemini Planet Imager

Title: Near-infrared detection and characterization of the exoplanet HD 95086 b with the Gemini Planet Imager
Authors: R. Galicher, J. Rameau, M. Bonnefoy, J.-L. Baudino, T. Currie, A. Boccaletti, G. Chauvin, A.-M. Lagrange, C. Marois
First Author’s Institution:  LESIA, CNRS, Observatoire de Paris
Paper Status: Accepted to A&A Letters


Figure 1 – Detections of the planet HD 95086 b at H band (1.7 microns – top), and K1 band (2.05 microns – bottom). Figure from Galicher et al. (2014).

HD 95086 is a young A type star, approximately 17 million years old, that hosts a dusty debris disk. Last year, Rameau et al. directly imaged a planet around the star, HD 95086 b, at a projected distance of 56 AU. From the brightness of the planet in their observation at a wavelength of 3.8 microns, they calculated the mass of the planet to be ~ 5 MJup (MJup is the mass of Jupiter), making HD 95086 b the lowest mass exoplanet that has been directly imaged.

Follow-up observations at shorter wavelengths were unable to detect the planet, suggesting it has a very red color, similar to the planets in the famous multi-planet system, HR 8799. In this paper, Galicher et al. use the newly commissioned Gemini Planet Imager (GPI) to successfully detect the planet at shorter wavelengths. The new observations are consistent with the old upper limits: the planet is extremely red, meaning it is brighter at long wavelengths.

Gemini Planet Imager
GPI is a new state-of-the-art high-contrast imaging instrument on the Gemini South Telescope in Chile. GPI combines adaptive optics to remove the effects of atmospheric turbulence, a coronagraph for starlight suppression, and an integral field spectrograph (IFS) to get spatial and spectral information simultaneously. HD 95086 b is one of the first exoplanets to be imaged with GPI as part of its commissioning observations.

The result of the IFS observations is a spectral cube: a 2D image of the system at each of multiple wavelengths, giving three dimensions of information. The planet HD 95086 b, however, was too faint to be seen in the individual wavelength channels, so the data at all wavelengths were combined to increase the signal to noise. The final result of the data processing is two images of the planet, at 1.7 and 2.05 microns (see Figure 1). The paper examines two years' worth of observations of the planet to confirm that the planet is comoving with the star.


Figure 2 – Color – magnitude diagram of HD 95086 b (yellow star) compared to other exoplanets, M, L, and T dwarfs, and evolutionary models. Figure from Galicher et al. (2014).

Characterization of the planet
Figure 2 shows a color-magnitude diagram of HD 95086 b compared to other exoplanets, field dwarfs, and evolutionary models of planets and brown dwarfs. HD 95086 b lies in the transition region between L and T dwarfs, similar to the young planets HR 8799 c, d, and e. HD 95086 b, however, has a much redder color than the field dwarfs (further right on the plot), suggesting there is a large amount of dust in its atmosphere.

Using the data at all three wavelengths where the planet was detected (1.7, 2.05, and 3.8 microns), Galicher et al. constructed a spectral energy distribution and fit planetary atmosphere models to the data. They found the planet's effective temperature to be between 600 and 1500 Kelvin. The planet's mass cannot be constrained in this way; evolutionary models are needed instead, and these depend on the unknown initial conditions. One set of evolutionary models gives the planet's mass as 5 MJup. The other gives a range of 4-14 MJup, with 4 MJup the most probable value. Detections at more wavelengths and spectroscopy with higher signal to noise are needed to better constrain the properties of HD 95086 b.
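
The fitting step can be pictured with a much cruder stand-in for the real atmosphere grids: fitting a single blackbody temperature to three photometric points. The flux values below are placeholders I made up, not the measured photometry of HD 95086 b.

    # Crude stand-in for the SED fit described above: a blackbody temperature fitted to
    # three photometric points (1.7, 2.05 and 3.8 microns). Fluxes are placeholders.
    import numpy as np
    from scipy.optimize import curve_fit

    h, c, kB = 6.626e-34, 2.998e8, 1.381e-23
    lam = np.array([1.7e-6, 2.05e-6, 3.8e-6])       # observation wavelengths [m]
    flux = np.array([1.0, 1.6, 2.4])                # placeholder relative fluxes
    err = 0.2 * flux

    def model(wl, T, scale):
        """Blackbody flux density, normalised to the shortest wavelength."""
        b = (2 * h * c**2 / wl**5) / np.expm1(h * c / (wl * kB * T))
        b0 = (2 * h * c**2 / lam[0]**5) / np.expm1(h * c / (lam[0] * kB * T))
        return scale * b / b0

    (T_fit, _), _ = curve_fit(model, lam, flux, p0=[1000.0, 1.0], sigma=err)
    print(f"best-fit T ~ {T_fit:.0f} K")            # roughly 1000 K for these made-up numbers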

by Jessica Donaldson at April 22, 2014 08:26 PM

Symmetrybreaking - Fermilab/SLAC

Documenting the development of discovery

Creating a compelling story about the search for the secrets of the universe in Particle Fever helped filmmaker Mark Levinson find his calling.

On July 4, 2012, while the entire particle physics community was celebrating the discovery of the Higgs boson, Mark Levinson was at CERN, frantically working to capture the occasion on film.

“I was so consumed with the details of getting it in frame and focus—trying to anticipate what would be important and where to be,” he says. “I was at the center of the universe for that day, and we had this unique opportunity. And the pressure was to do it justice.”

by Rhianna Wisniewski at April 22, 2014 07:31 PM

Emily Lakdawalla - The Planetary Society Blog

Rosetta update: Instrument commissioning going well; Philae cameras activated
Rosetta and Philae have very nearly completed a six-week phase of spacecraft and instrument checkouts to prepare the mission to do science. Recently, the lander used its cameras for the first time since hibernation, producing some new photos of Rosetta in space.

April 22, 2014 05:46 PM

arXiv blog

Diamond Teleporters Herald New Era of Quantum Routing

The ability to teleport quantum information between diamond crystals that can also store it is a small but important step toward a quantum Internet.


The prospect of a quantum Internet has excited physicists for two decades. A quantum Internet will allow the transmission of information around the world with perfect security and make cloud-based quantum computing a reality.

April 22, 2014 02:00 PM

John Baez - Azimuth

New IPCC Report (Part 8)

guest post by Steve Easterbrook

(8) To stay below 2°C of warming, most fossil fuels must stay buried in the ground

Perhaps the most profound advance since the previous IPCC report is a characterization of our global carbon budget. This is based on a finding that has emerged strongly from a number of studies in the last few years: the expected temperature change has a simple linear relationship with cumulative CO2 emissions since the beginning of the industrial era:

(Figure SPM.10) Global mean surface temperature increase as a function of cumulative total global CO2 emissions from various lines of evidence. Multi-model results from a hierarchy of climate-carbon cycle models for each RCP until 2100 are shown with coloured lines and decadal means (dots). Some decadal means are indicated for clarity (e.g., 2050 indicating the decade 2041−2050). Model results over the historical period (1860–2010) are indicated in black. The coloured plume illustrates the multi-model spread over the four RCP scenarios and fades with the decreasing number of available models in RCP8.5. The multi-model mean and range simulated by CMIP5 models, forced by a CO2 increase of 1% per year (1% per year CO2 simulations), is given by the thin black line and grey area. For a specific amount of cumulative CO2 emissions, the 1% per year CO2 simulations exhibit lower warming than those driven by RCPs, which include additional non-CO2 drivers. All values are given relative to the 1861−1880 base period. Decadal averages are connected by straight lines.

The chart is a little hard to follow, but the main idea should be clear: whichever experiment we carry out, the results tend to lie on a straight line on this graph. You do get a slightly different slope in one experiment, the "1% per year CO2 increase" experiment, where only CO2 rises, and much more slowly than it has over the last few decades. All the more realistic scenarios lie in the orange band, and all have about the same slope.

This linear relationship is a useful insight, because it means that for any target ceiling for temperature rise (e.g. the UN’s commitment to not allow warming to rise more than 2°C above pre-industrial levels), we can easily determine a cumulative emissions budget that corresponds to that temperature. So that brings us to the most important paragraph in the entire report, which occurs towards the end of the summary for policymakers:

Limiting the warming caused by anthropogenic CO2 emissions alone with a probability of >33%, >50%, and >66% to less than 2°C since the period 1861–1880, will require cumulative CO2 emissions from all anthropogenic sources to stay between 0 and about 1560 GtC, 0 and about 1210 GtC, and 0 and about 1000 GtC since that period respectively. These upper amounts are reduced to about 880 GtC, 840 GtC, and 800 GtC respectively, when accounting for non-CO2 forcings as in RCP2.6. An amount of 531 [446 to 616] GtC, was already emitted by 2011.

Unfortunately, this paragraph is a little hard to follow, perhaps because there was a major battle over the exact wording of it in the final few hours of inter-governmental review of the “Summary for Policymakers”. Several oil states objected to any language that put a fixed limit on our total carbon budget. The compromise was to give several different targets for different levels of risk.

Let’s unpick them. First notice that the targets in the first sentence are based on looking at CO2 emissions alone; the lower targets in the second sentence take into account other greenhouse gases, and other earth systems feedbacks (e.g. release of methane from melting permafrost), and so are much lower. It’s these targets that really matter:

• To give us a one third (33%) chance of staying below 2°C of warming over pre-industrial levels, we cannot ever emit more than 880 gigatonnes of carbon.

• To give us a 50% chance, we cannot ever emit more than 840 gigatonnes of carbon.

• To give us a 66% chance, we cannot ever emit more than 800 gigatonnes of carbon.

Since the beginning of industrialization, we have already emitted a little more than 500 gigatonnes. So our remaining budget is somewhere between 300 and 400 gigatonnes of carbon. Existing known fossil fuel reserves are enough to release at least 1000 gigatonnes. New discoveries and unconventional sources will likely more than double this. That leads to one inescapable conclusion:

Most of the remaining fossil fuel reserves must stay buried in the ground.

We’ve never done that before. There is no political or economic system anywhere in the world currently that can persuade an energy company to leave a valuable fossil fuel resource untapped. There is no government in the world that has demonstrated the ability to forgo the economic wealth from natural resource extraction, for the good of the planet as a whole. We’re lacking both the political will and the political institutions to achieve this. Finding a way to achieve this presents us with a challenge far bigger than we ever imagined.
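
The budget arithmetic behind those bullet points is simple enough to write down explicitly; a minimal sketch, using the rounded figures quoted in the text:

    # Cumulative carbon budgets (GtC) for staying below 2C, including
    # non-CO2 forcings as in RCP2.6, from the quoted SPM paragraph.
    budgets = {"33% chance": 880, "50% chance": 840, "66% chance": 800}

    emitted = 500          # GtC: "a little more than 500" already emitted
                           # (the SPM central estimate is 531 GtC)
    known_reserves = 1000  # GtC: lower bound on existing reserves, per the text

    for label, total in budgets.items():
        remaining = total - emitted
        print(f"{label}: ~{remaining} GtC left, "
              f"about {remaining / known_reserves:.0%} of known reserves")

Whichever risk level you pick, the remaining budget is only around a third of the carbon already sitting in known reserves.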


You can download all of Climate Change 2013: The Physical Science Basis here. Click below to read any part of this series:

  1. The warming is unequivocal.
  2. Humans caused the majority of it.
  3. The warming is largely irreversible.
  4. Most of the heat is going into the oceans.
  5. Current rates of ocean acidification are unprecedented.
  6. We have to choose which future we want very soon.
  7. To stay below 2°C of warming, the world must become carbon negative.
  8. To stay below 2°C of warming, most fossil fuels must stay buried in the ground.

Climate Change 2013: The Physical Science Basis is also available chapter by chapter here:

  1. Front Matter
  2. Summary for Policymakers
  3. Technical Summary
    1. Supplementary Material

Chapters

  1. Introduction
  2. Observations: Atmosphere and Surface
    1. Supplementary Material
  3. Observations: Ocean
  4. Observations: Cryosphere
    1. Supplementary Material
  5. Information from Paleoclimate Archives
  6. Carbon and Other Biogeochemical Cycles
    1. Supplementary Material
  7. Clouds and Aerosols

    1. Supplementary Material
  8. Anthropogenic and Natural Radiative Forcing
    1. Supplementary Material
  9. Evaluation of Climate Models
  10. Detection and Attribution of Climate Change: from Global to Regional
    1. Supplementary Material
  11. Near-term Climate Change: Projections and Predictability
  12. Long-term Climate Change: Projections, Commitments and Irreversibility
  13. Sea Level Change
    1. Supplementary Material
  14. Climate Phenomena and their Relevance for Future Regional Climate Change
    1. Supplementary Material

Annexes

  1. Annex I: Atlas of Global and Regional Climate Projections
    1. Supplementary Material: RCP2.6, RCP4.5, RCP6.0, RCP8.5
  2. Annex II: Climate System Scenario Tables
  3. Annex III: Glossary
  4. Annex IV: Acronyms
  5. Annex V: Contributors to the WGI Fifth Assessment Report
  6. Annex VI: Expert Reviewers of the WGI Fifth Assessment Report

by John Baez at April 22, 2014 10:17 AM

Lubos Motl - string vacua and pheno

Two string pheno papers
I hope you have survived the Easter if you had to undergo one. There are at least two interesting hep-th papers on string phenomenology today. Alon Faraggi wrote a 35-page review
String Phenomenology: Past, Present and Future Perspectives
which focuses on the old-fashioned heterotic string model building, especially the free fermionic models. That was the first research direction that convinced me, more than 20 years ago, that it had everything it needed to become a TOE.

Faraggi doesn't discuss inflation at all and it's questionable whether good inflation scenarios have been studied within the compactifications he prefers. That defect of his paper is more than compensated by the other paper I want to mention.




Luis E. Ibáñez and Irene Valenzuela wrote a paper on a realistic stringy explanation of the primordial gravitational waves apparently spotted by BICEP2,
The Inflaton as a MSSM Higgs and Open String Modulus Monodromy Inflation
The Higgs boson and the inflaton – two key players of recent experimental discoveries – are the two fundamental scalar fields in Nature whose existence is supported by the experimental data. The idea that they could be the same is very intriguing. However, the minimum incarnations of this sort seem to be excluded, especially after BICEP2, or they have severe problems, to say the least.




Ibáñez and his graduate student look at a slight modification of the minimal scenario. The inflaton isn't quite the light Higgs. Instead, it is one of the other Higgses and/or their superpartners. This heavy scalar – whose mass seems to be \(10^{13}\GeV\) if we want to realize the BICEP2 observations by the simplest Linde's quadratic potential – changes by trans-Planckian values as a field. The reason why it can do so is a special example of the axion monodromy inflation that Eva Silverstein described a month ago. They argue that the appropriate scalars could be a modulus in heterotic \(\ZZ_{2N}\) orbifolds or an open-string D-brane modulus in type IIB orientifolds.

As you can see, they are avoiding the assumption that the superpartners must be near the \(\TeV\) scale that is accessible by the LHC. String theorists have always had mixed feelings about this question because the main reason for supersymmetry's existence according to string theory is much deeper and more fundamental – closer to the Planck scale – than some particular technical problem at a particular low-energy scale that would just happen to agree with the current collider scale. Phenomenologists tend to think that keeping the Higgs light is the key raison d'être for SUSY's existence; string (formal) theorists such as these two authors prefer to think that SUSY has more important tasks before that – like stabilizing the Higgs potential (which doesn't imply that the Higgs has to be light).

In this Spanish scenario, the inflaton is as heavy as the SUSY breaking scale which is still about 3 orders of magnitude lighter than the GUT scale, \(10^{16}\GeV\), where they also approximately place the string scale and the compactification scale – and the numbers seem to make sense.

I think it's a good idea not to be excessively constrained by the phenomenologists' prejudice that SUSY had to be valid up to very low energies. The most natural picture suggested by string theory – and also by the experiments, including BICEP2 and the light Higgs mass near \(126\GeV\) that they seem to be compatible with – could be very different and hint at SUSY breaking at an intermediate-to-high scale not terribly far below the GUT scale. As you can see in this application of the SUSY scale to inflation, this high mass of the superpartners does not mean that SUSY has no implications for the experiments and observations.

by Luboš Motl (noreply@blogger.com) at April 22, 2014 07:14 AM

April 21, 2014

Geraint Lewis - Cosmic Horizons

The Greatest Experiment you've never heard of!
This Easter weekend is almost over, and has been quiet as the kids are off at camps. So, I'm going to write about something I think is very important, but many don't know about.

Let's start with a question - what was the greatest year of the last century in terms of scientific discovery?

Many will cite 1905, Einstein's miracle year, which I admit is a pretty good one. Then there is 1915, when Einstein sorted out general relativity. Again, a good year.

But I'm going to say 1956.  You might be scratching your head over this. 1956 was a good year - Elvis recorded Heartbreak Hotel and the Eurovision Song Contest was held for the first time - but in terms of science, what happened?

Well, there was a prediction in a paper and an experiment which changed the way we really understand the Universe. But what's this all about?

The key thing is a concept called parity. Basically, parity asks the question of what the Universe looks like when viewed in a mirror. Again, you might be scratching your head a little, but let's take a look at a simple example.

We know that the particle of light, the photon, carries spin. Here's a picture from Wikipedia
Now, it might seem odd that a mass-less photon, travelling at the speed of light, "spins", but it is one of those quantum-mechanical things.

The key thing here is that a photon can spin either clockwise or anticlockwise relative to its direction of motion. Normally these are called left-hand or right-hand photons. So, if I have a left-hand spinning photon (the |L> up there) and look at it in a mirror, then its spin would flip and it would look like a right-hand photon (|R>), which is something perfectly acceptable. The mirror representation of the photon could quite happily occur in the actual universe.

Photons are not the only particles that spin: electrons and neutrinos do, as do quarks, and as quarks can spin, so too do the composite particles they make up, like the proton and neutron. So, entire atoms can be spinning.

One of the important laws of the Universe is that spin (well, more correctly, angular momentum) is conserved. So when an atom emits or absorbs a photon, then the total amount of spin doesn't change.

Below is an example of what happens when hydrogen emits a 21cm radio photon. In this case, we care about the spin of the proton and the spin of the electron, which are aligned before the emission. But after the emission, the photon carries off some spin, and the electron's spin has flipped so the total angular momentum remains the same.
If we hold up a mirror and flip the spins of the proton and the electron before the emission, the emission can proceed as the electron can still flip and a photon is emitted, but now spinning in the opposite direction. This can happen in the real Universe as well as the mirror Universe.

Hopefully, by now, you are going "Well, duh!". Isn't this obvious? And, yes, it is. In fact, holding a mirror up to any of (or combinations of) the electromagnetic, strong or gravitational interactions, this seems to be the case.

But something happened in 1956 that changed everything. Two researchers, Yang and Lee, reviewed the evidence on whether these parity rules hold for the weak force, the force responsible for radioactive decay. And they concluded that the evidence didn't say that the mirror universe must resemble the one we live in if we look at weak interactions.

OK, if you are lost, read on; it will make sense. Yang and Lee proposed an experiment, an experiment undertaken by Wu. The experiment was to take a heavy spinning nucleus (Wu used cobalt) and cool it down in a magnetic field, which gets all of the cobalt nuclei spinning in the same direction. The cobalt nucleus undergoes a radioactive decay and spits out an electron. And what was noticed is that the nucleus spits out more electrons in one direction than the other (see the left-most picture below).

If we hold a mirror up to the left-most picture you get the next one across. All that happens is that the spin of the cobalt nucleus reverses, but still more electrons get spat out of the bottom in both our universe and the mirror universe.

But do we see the mirror image actually occurring in our universe? The answer is no! The situation is actually as seen in the two right-most images - reverse the spin by flipping over the cobalt, and you flip the direction that most of the electrons come out of! The mirror image does not occur in the Universe.

Why? Well, electrons can spin, and happily spin this way and that. But the problem is not the electron, but the other particle emitted during radioactive decay, the neutrino. Neutrinos carry spin, just like the photon, but unlike the photon, neutrinos can only spin one way! Neutrinos are always left-handed (anti-neutrinos are always right-handed), and as angular momentum must be conserved, this dictates the direction in which the electrons are emitted.


(Image from the excellent HyperPhysics)

For our mirror image to occur within our actual universe, we would need right-handed neutrinos, and they don't exist!

This violation of parity was a big shock to the physical world as it shows that, in terms of the weak interaction, the universe is inherently asymmetrical. I think this is extremely cool.

Two final comments before I go and enjoy the sunshine.

Firstly, Yang and Lee got the 1957 Nobel Prize in Physics for their work, a year after they published their paper! The speed tells you how important and amazing the result was, although Wu did not get the Nobel for the experiment. As I've said before, I'm no historian, but one has to wonder if the fact she was a woman had anything to do with it!

And secondly, why does the neutrino have only one spin direction? Wouldn't the universe be neater if everything obeyed the rules of parity conservation? Why does the Universe behave like this? We don't know, but maybe if we asked, the Universe would simply respond by singing some Lady Gaga and point out that it was simply born this way.

Anyway. 1956 - what a year.

by Cusp (noreply@blogger.com) at April 21, 2014 10:20 PM

astrobites - astro-ph reader's digest

Hot Jupiters and Their Effects on Host Stars

Title: Indications for an influence of Hot Jupiters on the rotation and activity of their host stars
Authors: K. Poppenhaeger, S.J. Wolk
First Author’s institution: Harvard-Smithsonian Center for Astrophysics

Binary star systems are useful for comparing the evolution and dynamics of two different stars. Since the stellar pairs in a binary are usually presumed to have formed at the same time from the same protostellar cloud, we can directly compare any apparent discrepancies in the properties of the two. The situation becomes more interesting once an orbiting exoplanet is thrown into the mix. It seems unlikely that a planet could affect the dynamics of a host star orders of magnitude larger in mass, but observational studies have found correlations between stellar activity (e.g. flares, coronal mass ejections) and the presence of hot Jupiters.

In this paper, the authors examine five different binary star systems with at least one exoplanet orbiting one of the stars (designated as the primary star). The systems were selected such that the separations within each binary pair are sufficiently large (>100 AU) that the stars do not affect each other through mutual interactions. The companion star without a known exoplanet acts as an observational control against which any planetary influence can be compared. The authors use X-ray observations taken with XMM-Newton and Chandra to measure the magnetic activity levels of these binary systems resulting from coronal and chromospheric emission. By measuring stellar X-ray luminosities, the authors can derive the ages of the component stars by comparing the observed X-ray emission to that of stellar samples of known ages.

Fig. 1: X-ray images of the binary star systems examined in this paper.

The authors find that for three of these binary systems, the age estimates for the primary and secondary stars agree well with each other. Since binary stars are expected to form from the same protostellar cloud, we would expect them to have roughly the same age. In the two other systems, the authors note discrepancies between the inferred ages of the stellar components. That is, the primary star with the exoplanet seems to have stronger-than-expected X-ray emission relative to the secondary. If binary star pairs should have similar properties, what can cause one star to deviate in such a way from its partner?

One mechanism to explain these differences is that an extrasolar planet orbiting the primary star could induce tidal bulges in the star's atmosphere. This interaction can inhibit the spin-down of the host star, and the result is that the authors infer an age discrepancy between the stellar components based on rotation and magnetic activity. Additionally, the authors explore the possibility of angular momentum transfer between the planet and its host star. By estimating the angular momentum of each orbiting exoplanet, the authors show that it is greater than the angular momentum of the rotating host star (for comparison, Jupiter carries about 99% of the angular momentum in our solar system, even though the Sun has 99% of the mass). It is therefore possible that the stellar rotation rate could be influenced by angular momentum transfer from a planet migrating towards the star.
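
To see why the planet can carry more angular momentum than its star's spin, here is a back-of-the-envelope sketch; all of the numbers (the star, the planet's orbit, the rotation period and the moment-of-inertia factor) are assumed, illustrative values rather than parameters of the systems in the paper:

    import math

    G = 6.674e-11                       # m^3 kg^-1 s^-2
    M_sun, R_sun = 1.989e30, 6.957e8    # kg, m
    M_jup, AU = 1.898e27, 1.496e11      # kg, m

    # Assumed hot-Jupiter system: Sun-like star, Jupiter-mass planet at 0.05 AU.
    M_star, R_star = M_sun, R_sun
    M_p, a = M_jup, 0.05 * AU
    P_rot = 10.0 * 86400                # assumed stellar rotation period (s)

    # Orbital angular momentum of the planet on a circular orbit.
    L_orbit = M_p * math.sqrt(G * M_star * a)

    # Stellar spin angular momentum, L = I * omega, with I ~ alpha * M * R^2
    # (alpha ~ 0.07 is an assumed moment-of-inertia factor for a Sun-like star).
    alpha = 0.07
    L_spin = alpha * M_star * R_star**2 * (2 * math.pi / P_rot)

    print(f"L_orbit / L_spin ~ {L_orbit / L_spin:.1f}")

Even with these generic numbers the ratio comes out at a few, so a migrating planet has, in principle, enough angular momentum to noticeably spin up its host.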

Since a star's rotation is influenced by magnetic coupling to its circumstellar disk, a planet migrating inwards towards its host star can create a gap in the disk (due to resonance effects), and this gap could affect the star-disk rotational coupling. The resulting change in stellar rotation period could potentially explain the over-rotation of the primary star relative to the secondary star in the binary systems examined in this paper. The apparent differences in stellar rotational evolution are also sensitive to the initial rotation periods of the stars during their formation, so this could be another explanation for the mentioned discrepancies.

Traditionally, stellar rotation and activity have been known to slow down due to magnetic braking through stellar winds, but the effects of hot Jupiters offer an interesting alternative to this effect. Currently, the authors are conducting observations of a larger sample of such systems to see whether these hot Jupiters are viable candidates for affecting the dynamics of their host stars. Finding other instances of this kind of interaction would substantiate this mechanism and suggest a new avenue for modeling the dynamics of stars together with their exoplanets.

by Anson Lam at April 21, 2014 06:17 PM

Sean Carroll - Preposterous Universe

Guest Post: Max Tegmark on Cosmic Inflation

Most readers will doubtless be familiar with Max Tegmark, the MIT cosmologist who successfully balances down-and-dirty data analysis of large-scale structure and the microwave background with more speculative big-picture ideas about quantum mechanics and the nature of reality. Max has a new book out — Our Mathematical Universe: My Quest for the Ultimate Nature of Reality — in which he takes the reader on a journey from atoms and the solar system to a many-layered multiverse.

In the wake of the recent results indicating gravitational waves in the cosmic microwave background, here Max delves into the idea of inflation — what it really does, and what some of the implications are.


Thanks to the relentless efforts of the BICEP2 team during balmy -100F half-year-long nights at the South Pole, inflation has for the first time become not only something economists worry about, but also a theory for our cosmic origins that’s really hard to dismiss. As Sean has reported here on this blog, the implications are huge. Of course we need independent confirmation of the BICEP2 results before uncorking the champagne, but in the mean time, we’re forced to take quite seriously that everything in our observable universe was once smaller than a billionth the size of a proton, containing less mass than an apple, and doubled its size at least 80 times, once every hundredth of a trillionth of a trillionth of a trillionth of a second, until it was more massive than our entire observable universe.

We still don’t know what, if anything, came before inflation, but this is nonetheless a huge step forward in understanding our cosmic origins. Without inflation, we had to explain why there were over a million trillion trillion trillion trillion kilograms of stuff in existence, carefully arranged to be almost perfectly uniform while flying apart at huge speeds that were fine-tuned to 24 decimal places. The traditional answer in the textbooks was that we had no clue why things started out this way, and should simply assume it. Inflation puts the “bang” into our Big Bang by providing a physical mechanism for creating all those kilograms and even explains why they were expanding in such a special way. The amount of mass needed to get inflation started is less than that in an apple, so even though inflation doesn’t explain the origin of everything, there’s a lot less stuff left to explain the origin of.

If we take inflation seriously, then we need to stop saying that inflation happened shortly after our Big Bang, because it happened before it, creating it. It is inappropriate to define our Hot Big Bang as the beginning of time, because we don’t know whether time actually had a beginning, and because the early stages of inflation were neither strikingly hot nor big nor much of a bang. As that tiny speck of inflating substance doubled its diameter 80 times, the velocities with which its parts were flying away from one another increased by the same factor 2^80. Its volume increased by that factor cubed, i.e., 2^240, and so did its mass, since its density remained approximately constant. The temperature of any particles left over from before inflation soon dropped to near zero, with the only remaining heat coming from the same Hawking/Unruh quantum fluctuations that generated the gravitational waves.

Taken together, this in my opinion means that the early stages of inflation are better thought of not as a Hot Big Bang but as a Cold Little Swoosh, because at that time our universe was not that hot (getting a thousand times hotter once inflation ended), not that big (less massive than an apple and less than a billionth of the size of a proton) and not much of a bang (with expansion velocities a trillion trillion times slower than after inflation). In other words, a Hot Big Bang did not precede and cause inflation. Instead, a Cold Little Swoosh preceded and caused our Hot Big Bang.
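
The doubling arithmetic quoted above is easy to reproduce; a minimal sketch, taking the figures at face value (at least 80 doublings, one every ~1e-38 seconds):

    # At least 80 doublings, one every ~1e-38 s (a hundredth of a trillionth
    # of a trillionth of a trillionth of a second), as quoted in the text.
    doublings = 80
    t_double = 1e-38                       # seconds per doubling (quoted scale)

    size_factor = 2.0 ** doublings         # growth in linear size
    mass_factor = 2.0 ** (3 * doublings)   # growth in volume and mass
                                           # (density roughly constant)
    duration = doublings * t_double

    print(f"size grows by ~{size_factor:.1e}")
    print(f"mass grows by ~{mass_factor:.1e}")
    print(f"all in        ~{duration:.1e} seconds")

Even with only 80 doublings, the whole episode lasts less than 10^-36 seconds.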

Since the BICEP2 breakthrough is generating such huge interest in inflation, I’ve decided to post my entire book chapter on inflation here so that you can get an up-to-date and self-contained account of what it’s all about. Here are some of the questions answered:

  • What does the theory of inflation really predict?
  • What physics does it assume?
  • Doesn’t creation of the matter around us from almost nothing violate energy conservation?
  • How could an infinite space get created in a finite time?
  • How is this linked to the BICEP2 signal?
  • What remarkable prize did Alan Guth win in 2005?

by Sean Carroll at April 21, 2014 03:22 PM

Emily Lakdawalla - The Planetary Society Blog

Forensic Ballistics: How Apollo 12 Helped Solve the Skydiver Meteorite Mystery
What can a 45-year-old mission to the Moon tell us about a "meteorite" flying past a skydiver on Earth?

April 21, 2014 02:33 PM

Symmetrybreaking - Fermilab/SLAC

Tracking particles faster at the LHC

A new trigger system will expand what ATLAS scientists can look for during high-energy collisions at the Large Hadron Collider.

For its next big performance, the Large Hadron Collider will restart in 2015 with twice its previous collision energy and a much higher rate of particle collisions per second.

Scientists have been scurrying to prepare their detectors for the new particle onslaught. As part of this preparation, a group that includes physicists from laboratories and universities in the Chicago area is designing a new system that will allow them to examine collisions faster than ever before.

by Sarah Charley at April 21, 2014 01:00 PM

Clifford V. Johnson - Asymptotia

Eclipse Progress
Did you catch the eclipse last Monday? It was wonderful. Here's a little snap I took of the progress (taken with an iPad camera precariously through the lens of a telescope, pointing out of a bedroom window, so not the best arrangement). One of the striking things about looking at the progress of it is just how extra three-dimensional the moon seems as the earth's shadow slowly covers it. It really makes one's whole mind and body latch on to the three dimensional reality of the sky - you really feel it, as opposed to just knowing it in your head. That's sort of hard to explain - and you're not going to see it in any photo anyone can show - so I imagine you are not really sure what I'm getting at if [...] Click to continue reading this post

by Clifford at April 21, 2014 04:39 AM

April 20, 2014

Clifford V. Johnson - Asymptotia

Self-Similar Dinner
Eventually, although a bit over-priced in the Hollywood farmer's market, I do fall for these at least once in the season. As someone whose job and pastimes involve seeing patterns everywhere, how can I not love the romanesco? It's a fractal! There are structures that repeat themselves on different scales again and again, which is the root of the term "self-similar". Fractals are wonderful structures in mathematics (that have self-similarity) that I urge you to find out more about if you don't already know (just google and follow your nose). And [...] Click to continue reading this post

by Clifford at April 20, 2014 11:23 PM

The n-Category Cafe

Enrichment and the Legendre--Fenchel Transform I

The Legendre–Fenchel transform, or Fenchel transform, or convex conjugation, is, in its naivest form, a duality between convex functions on a vector space and convex functions on the dual space. It is of central importance in convex optimization theory and in physics it is used to switch between Hamiltonian and Lagrangian perspectives.

[Figure: graphs of a strictly convex function and its Legendre–Fenchel transform]

Suppose that \(V\) is a real vector space and that \(f\colon V\to [-\infty ,+\infty ]\) is a function; then the Fenchel transform is the function \(f^{\ast }\colon V^{\#}\to [-\infty ,+\infty ]\) defined on the dual vector space \(V^{\#}\) by
\[ f^{\ast }(k)\coloneqq \sup _{x\in V}\big \{ \langle k,x\rangle -f(x)\big \} . \]

If you’re a regular reader then you will be unsurprised when I say that I want to show how it naturally arises from enriched category theory constructions. I’ll show that in the next post. In this post I’ll give a little introduction to the Legendre–Fenchel transform.

There is probably no best way to introduce the Legendre–Fenchel transform. The only treatment that I knew for many years was in Arnold’s book Mathematical Methods of Classical Mechanics, but I have recently come across the convex optimization literature and would highly recommend Touchette’s The Fenchel Transform in a Nutshell — my treatment here is heavily influenced by this paper. I will talk mainly about the one-dimensional case as I think that gives a lot of the intuition.

We will start, as Legendre did, with the special case of a strictly convex differentiable function \(f\colon \mathbb{R}\to \mathbb{R}\); for instance, the function \(x^{2}+1/2\) pictured on the left hand side above. The derivative of \(f\) is strictly increasing and so the function \(f\) can be parametrized by the derivative \(k = d f/d x\) instead of the parameter \(x\). Indeed we can write the parameter \(x\) in terms of the slope \(k\), \(x=x(k)\). The Legendre-Fenchel transform \(f^{\ast}\) can then be defined to satisfy
\[ \langle k,x \rangle = f(x) + f^{\ast }(k), \]
where the angle brackets mean the pairing between a vector space and its dual. In this one-dimensional case, where \(x\) and \(k\) are thought of as real numbers, we just have \(\langle k,x \rangle = k x\).

As \(x\) is a function of \(k\) we can rewrite this as
\[ f^{\ast }(k)\coloneqq \langle k,x(k) \rangle - f(x(k)). \]
So the Legendre-Fenchel transform encodes the function in a different way. By differentiating this equation you can see that \(d f^{\ast }/d k=x(k)\); thus we have interchanged the abscissa (the horizontal co-ordinate) and the slope. So if \(f\) has derivative \(k_{0}\) at \(x_{0}\) then \(f^{\ast }\) has derivative \(x_{0}\) at \(k_{0}\). This is illustrated in the above picture.

I believe this is what Legendre did and then that what Fenchel did was to generalize this to non-differentiable functions.

For non-differentiable functions, we can’t talk about tangent lines and derivatives, but instead can talk about supporting lines. A supporting line is one which touches the graph at at least one point and never goes above the graph. (The fact that we’re singling out lines not going above the graph means that we have convex functions in mind.)

For instance, at the point \((x_{0},f(x_{0}))\) the graph pictured below has no tangent line, but has supporting lines with gradient from \(k_{1}\) to \(k_{2}\). A convex function will have at least one supporting line at each point.

[Figure: graph of a non-differentiable convex function with its supporting lines]

It transpires that the right way to generalize the transform to this non-differentiable case is to define it as follows:
\[ f^{\ast }(k)\coloneqq \sup _{x\in \mathbb{R}}\big \{ \langle k,x\rangle -f(x)\big \} . \]
In this case, if \(f\) has a supporting line of slope \(k_{0}\) at \(x_{0}\) then \(f^{\ast }\) has a supporting line of slope \(x_{0}\) at \(k_{0}\). In the picture above, at \(x_{0}\), the function \(f\) has supporting lines with slope from \(k_{1}\) to \(k_{2}\): correspondingly, the function \(f^{\ast }\) has supporting lines with slope \(x_{0}\) all the way from \(k_{1}\) to \(k_{2}\).

If we allow the function \(f\) to be not strictly convex then the transform will not always be finite. For example, if \(f(x)\coloneqq ax+b\) then we have \(f^{\ast }(a)=-b\) and \(f^{\ast }(k)=+\infty \) for \(k\ne a\). So we will allow functions taking values in the extended real numbers: \(\overline{\mathbb{R}}\coloneqq [-\infty ,+\infty ]\).

We can use the above definition to get the transform of any function \(f\colon \mathbb{R}\to \overline{\mathbb{R}}\), whether convex or not, but the resulting transform \(f^{\ast }\) is always convex. (When there are infinite values involved we can also say that \(f^{\ast }\) is lower semi-continuous, but I’ll absorb that into my definition of convex for functions taking infinite values.)

Everything we’ve done for one-dimensional \(\mathbb{R}\) easily generalizes to any finite dimensional real vector space \(V\), where we should say ‘supporting hyperplane’ instead of ‘supporting line’. From that we get a transform between sets of functions
\[ (\text {--})^{\ast }\colon \mathrm{Fun}(V,\overline{\mathbb{R}})\to \mathrm{Fun}(V^{\#},\overline{\mathbb{R}}), \]
where \(V^{\#}\) is the vector space dual of \(V\). Similarly, we have a reverse transform going the other way, which is traditionally also denoted with a star
\[ (\text {--})^{\ast }\colon \mathrm{Fun}(V^{\#},\overline{\mathbb{R}})\to \mathrm{Fun}(V,\overline{\mathbb{R}}), \]
for \(g\colon V^{\#}\to \overline{\mathbb{R}}\) we define
\[ g^{\ast }(x)\coloneqq \sup _{k\in V^{\#}}\big \{ \langle k,x\rangle -g(k)\big \} . \]

This pair of transforms have some rather nice properties, for instance, they are order reversing. We can put a partial order on any set of functions to \(\overline{\mathbb{R}}\) by defining \(f_{1}\ge f_{2}\) if \(f_{1}(x)\ge f_{2}(x)\) for all \(x\). Then
\[ f_{1}\ge f_{2} \quad \Rightarrow \quad f_{2}^{\ast }\ge f_{1}^{\ast }. \]
Also for any function \(f\) we have
\[ f^{\ast }=f^{\ast \ast \ast } \]
which implies that the operator \(f\mapsto f^{\ast \ast }\) is idempotent:
\[ f^{\ast \ast }=f^{\ast \ast \ast \ast }. \]
This means that \(f\mapsto f^{\ast \ast }\) is a closure operation. What it actually does is take the convex envelope of \(f\), which is the largest convex function less than or equal to \(f\). Here’s an example.

[Figure: graphs of a non-convex function and its convex envelope]

This gives that if \(f\) is already a convex function then \(f^{\ast \ast }=f\). And as a consequence the Legendre–Fenchel transform and the reverse transform restrict to an order reversing bijection between convex functions on \(V\) and convex functions on its dual \(V^{\#}\):
\[ \mathrm{Cvx}(V,\overline{\mathbb{R}})\cong \mathrm{Cvx}(V^{\#},\overline{\mathbb{R}}). \]
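
None of this is specific to pencil-and-paper examples; here is a minimal numerical sketch (not from the post) that computes the transform on a grid by brute force and checks that the biconjugate \(f^{\ast\ast}\) is a convex function that never exceeds \(f\):

    import numpy as np

    # Brute-force discrete Legendre-Fenchel transform:
    #   f*(k) = sup_x { k*x - f(x) }
    x = np.linspace(-2.0, 2.0, 401)
    k = np.linspace(-4.0, 4.0, 401)
    f = np.abs(x) ** 1.5 - np.sin(3.0 * x)    # some non-convex test function

    def fenchel(values, grid, duals):
        # For each dual point, take the sup of <k,x> - f(x) over the grid.
        return np.max(duals[:, None] * grid[None, :] - values[None, :], axis=1)

    f_star = fenchel(f, x, k)          # lives on the dual grid k
    f_bistar = fenchel(f_star, k, x)   # back on x: the convex envelope of f

    # The biconjugate never exceeds f, as the convex envelope should not.
    assert np.all(f_bistar <= f + 1e-9)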

There are many other things that can be said about the transform, such as Fenchel duality and the role it plays in optimization, but I don’t understand such things to my own satisfaction yet.

Next time I’ll explain how most of the above structure drops out of the nucleus construction in enriched category theory.

by willerton (S.Willerton@sheffield.ac.uk) at April 20, 2014 05:35 PM

The Great Beyond - Nature blog

How to make graphene in a kitchen blender

Atomic resolution, scanning transmission electron microscope image of part of a nanosheet of shear exfoliated graphene. Credit: CRANN/SuperSTEM

Don’t try this at home. No really, don’t: it almost certainly won’t work and you won’t be able to use your kitchen blender for food afterwards. But buried in the supplementary information of a research paper published today is a domestic recipe for producing large quantities of clean flakes of graphene.

The carbon sheets are the world’s thinnest, strongest material;  electrically conductive and flexible; and tipped to transform everything from touchscreen displays to water treatment. Many researchers — including Jonathan Coleman at Trinity College Dublin — have been chasing ways to make large amounts of good-quality graphene flakes.

In Nature Materials, a team led by Coleman (and funded by the UK-based firm Thomas Swan) describe how they took a high-power (400-watt) kitchen blender and added half a litre of water, 10–25 millilitres of detergent and 20–50 grams of graphite powder (found in pencil leads). They turned the machine on for 10–30 minutes. The result, the team reports: a large number of micrometre-sized flakes of graphene, suspended in the water.

Coleman adds, hastily, that the recipe involves a delicate balance of surfactant and graphite, which he has not yet disclosed (this barrier dissuaded me from trying it out; he is preparing a detailed kitchen recipe for later publication). And in his laboratory, centrifuges, electron microscopes and spectrometers were also used to separate out the graphene and test the outcome. In fact, the kitchen-blender recipe was added late in the study as a bit of a gimmick — the main work was done first with an industrial blender (pictured).

Five litres of suspended graphene (in an industrial blender). Credit: CRANN.

Still, he says, the example shows just how simple his new method is for making graphene in industrial quantities. Thomas Swan has scaled the (patented) process up into a pilot plant and, says commercial director Andy Goodwin, hopes to be making a kilogram of graphene a day by the end of this year, sold as a dried powder and as a liquid dispersion from which it may be sprayed onto other materials.

“It is a significant step forward towards cheap and scalable mass production,” says Andrea Ferrari, an expert on graphene at the University of Cambridge, UK. “The material is of a quality close to the best in the literature, but with production rates apparently hundreds of times higher.”

The quality of the flakes is not as high as that of the ones the winners of the 2010 Nobel Prize in Physics, Andre Geim and Kostya Novoselov from Manchester University, famously isolated using Scotch Tape to peel off single sheets from graphite. Nor are they as large as the metre-scale graphene sheets that firms today grow atom by atom from a vapour. But outside of high-end electronics applications, smaller flakes suffice — the real question is how to make lots of them.

Although hundreds of tons of graphene are already being produced each year — and you can easily buy some online — their quality is variable. Many of the flakes in store are full of defects or smothered with chemicals, affecting their conductivity and other properties, and are tens or hundreds of layers thick. “Most of the companies are selling stuff that I wouldn’t even call graphene,” says Coleman.

The blender technique produces small flakes some four or five layers thick on average, but apparently without defects — meaning high electrical conductivity. Coleman thinks the blender induces shear forces in the liquid sufficient to prise off sheets of carbon atoms from the graphite chunks (“as if sliding cards from a deck”, he explains).
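
A back-of-the-envelope estimate makes the shear argument plausible; the rotor speed, blade radius and clearance below are assumed values for a generic blender, and the comparison threshold of order 10^4 per second is roughly the minimum shear rate the team reports for exfoliating graphite:

    import math

    # Assumed, generic blender parameters (illustrative only).
    rpm = 15000           # rotor speed
    blade_radius = 0.02   # m, blade tip radius
    gap = 1e-3            # m, blade-to-wall clearance

    tip_speed = 2.0 * math.pi * (rpm / 60.0) * blade_radius   # m/s
    shear_rate = tip_speed / gap                              # 1/s

    print(f"tip speed  ~ {tip_speed:.0f} m/s")
    print(f"shear rate ~ {shear_rate:.1e} 1/s (exfoliation threshold ~1e4 1/s)")

With these assumed numbers the shear rate comes out a few times above the threshold, which is consistent with Coleman's picture of blades sliding carbon sheets off the graphite grains.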

Kitchen blenders aren’t the only way to produce reasonably high-quality flakes of graphene. Ferrari still thinks that using ultrasound to rip graphite apart could give better materials in some cases. And Xinliang Feng, from the Max Planck Institute for Polymer Research in Mainz, Germany, says that his recent publication, in the Journal of the American Chemical Society, reports a way to produce higher-quality, fewer-layer graphene at higher rates by electrochemical means. (Coleman points out that Thomas Swan have taken the technique far beyond what is reported in the paper.)

As for applications, “the graphene market isn’t one size fits all”, says Coleman, but the researchers report testing it as the electrode materials in solar cells and batteries. He suggests that the flakes could also be added as a filler into plastic drinks bottles — where their added strength reduces the amount of plastic needed, and their ability to block the passage of gas molecules such as oxygen and carbon dioxide maintains the drink’s shelf life.

In another application altogether, a small amount added to rubber produces a band whose conductivity changes as it stretches — in other words, a sensitive strain sensor. Thomas Swan’s commercial manager, Andy Goodwin, mentions flexible, low-cost electronic displays; graphene flakes have also been suggested for use in desalination plants and even condoms.

In each case, it has yet to be proven that the carbon flakes really outperform other options — but the new discoveries for mass-scale production mean that we should soon find out. At the moment, an array of firms is competing for different market niches, but Coleman predicts a thinning-out as a few production techniques dominate. “There are many companies making and selling graphene now: there will be many fewer in five years’ time,” he says.

by Richard Van Noorden at April 20, 2014 05:00 PM

April 18, 2014

The n-Category Cafe

Elementary Observations on 2-Categorical Limits

Guest post by Christina Vasilakopoulou

In the eighth installment of the Kan Extension Seminar, we discuss the paper “Elementary Observations on 2-Categorical Limits” by G.M. Kelly, published in 1989. Even though Kelly’s classic book Basic Concepts of Enriched Category Theory, which contains the abstract theory related to indexed (or weighted) limits for arbitrary \(\mathcal{V}\)-categories, had been available since 1982, the existence of the present article is well-justifiable.

On the one hand, it constitutes an independent account of the fundamental case \(\mathcal{V}=\mathbf{Cat}\), thus it motivates and exemplifies the more general framework through a more gentle, yet meaningful exposition of 2-categorical limits. The explicit construction of specific notable finite limits such as inserters, equifiers etc. promotes the comprehension of the definitions, via a hands-on description. Moreover, these finite limits and particular results concerning 2-categories rather than general enriched categories, such as the construction of the cotensor as a PIE limit, are central for the theory of 2-categories. Lastly, by introducing indexed lax and pseudo limits along with Street’s bilimits, and providing appropriate lax/pseudo/bicategorical completeness results, the paper serves also as an indispensable reference for the later “2-Dimensional Monad Theory” by Blackwell, Kelly and Power.

I would like to take this opportunity to thank Emily as well as all the other participants of the Kan Extension Seminar. This has been a unique experience of constant motivation and inspiration for me!

Basic Machinery

Presently, our base of enrichment is the cartesian monoidal closed category \(\mathbf{Cat}\) of (small) categories, with the usual adjunction \(-\times\mathcal{A}\dashv[\mathcal{A},-]\). The very definition of an indexed limit requires a good command of the basic \(\mathbf{Cat}\)-categorical notions, as seen for example in “Review of the Elements of 2-categories” by Kelly and Street. In particular, a 2-natural transformation \(\alpha\colon G\Rightarrow H\) between 2-functors consists of components which not only satisfy the usual naturality condition, but also the 2-naturality one expressing compatibility with 2-cells. Moreover, a modification between 2-natural transformations \(m\colon \alpha\Rrightarrow\beta\) has as components families of 2-cells \(m_A\colon \alpha_A\Rightarrow\beta_A\colon GA\to HA\) compatible with the mapped 1-cells of the domain 2-category, i.e. \(m_B\cdot Gf=Hf\cdot m_A\) (where \(\cdot\) is whiskering).

A 2-functor \(F\colon \mathcal{K}\to\mathbf{Cat}\) is called representable, when there exists a 2-natural isomorphism
\[ \alpha\colon \mathcal{K}(K,-)\xrightarrow{\;\sim\;}F. \]
The components of this isomorphism are \(\alpha_A\colon \mathcal{K}(K,A)\cong FA\) in \(\mathbf{Cat}\), and the unit of the representation is the corresponding ‘element’ \(\mathbf{1}\to FK\) via Yoneda.

For a general complete symmetric monoidal closed category \(\mathcal{V}\), the usual functor category \([\mathcal{A},\mathcal{B}]\) for two \(\mathcal{V}\)-categories is endowed with the structure of a \(\mathcal{V}\)-category itself, with hom-objects ends
\[ [\mathcal{A},\mathcal{B}](T,S)=\int_{A\in\mathcal{A}} \mathcal{B}(TA,SA) \]
(which exist at least when \(\mathcal{A}\) is small). In our context of \(\mathcal{V}=\mathbf{Cat}\) it is not necessary to employ ends and coends at all, and the hom-category \([\mathcal{K},\mathcal{L}](G,H)\) of the functor 2-category is evidently the category of 2-natural transformations and modifications. However, we note that computations via (co)ends simplify and are essential for constructions and (co)completeness results for enrichment in general monoidal categories.

The definition of weighted limits for 2-categories

To briefly motivate the definition of a weighted limit, recall that an ordinary limit of a ($\mathbf{Set}$-)functor $G:\mathcal{P}\to\mathcal{C}$ is characterized by an isomorphism
$$\mathcal{C}(C,\lim G)\cong[\mathcal{P},\mathcal{C}](\Delta C, G)$$
natural in $C$, where $\Delta C:\mathcal{P}\to\mathcal{C}$ is the constant functor at the object $C$. In other words, the limit is the representing object of the presheaf
$$[\mathcal{P},\mathcal{C}](\Delta -,G):\mathcal{C}^{op}\to\mathbf{Set}.$$
Since the components of a natural transformation $\Delta C\Rightarrow G$ (i.e. of a cone) can be viewed as the components of a natural transformation $\Delta\mathbf{1}\Rightarrow\mathcal{C}(C,G-):\mathcal{P}\to\mathbf{Set}$, the above defining isomorphism can be written as
$$\mathcal{C}(C,\lim G)\cong[\mathcal{P},\mathbf{Set}](\Delta\mathbf{1},\mathcal{C}(C,G-)).$$
In this form, ordinary limits are easily seen to be particular examples of conical indexed limits for $\mathcal{V}=\mathbf{Set}$, and we can generalize the concept of a limit by replacing the functor $\Delta\mathbf{1}$ by an arbitrary functor (weight) $\mathcal{P}\to\mathbf{Set}$.
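
To spell out the comparison in the middle step (a routine check, included here for convenience): a natural transformation $\lambda:\Delta\mathbf{1}\Rightarrow\mathcal{C}(C,G-)$ has components $\lambda_p:\mathbf{1}\to\mathcal{C}(C,Gp)$, i.e. it selects arrows $\lambda_p:C\to Gp$, and naturality at $u:p\to p'$ in $\mathcal{P}$ demands
$$Gu\circ\lambda_p=\lambda_{p'},$$
which is precisely the compatibility condition on the legs of a cone over $G$ with apex $C$.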

We may thus think of a 2-functor $F:\mathcal{P}\to\mathbf{Cat}$ as a (small) indexing type or weight, and of a 2-functor $G:\mathcal{P}\to\mathcal{K}$ as a diagram in $\mathcal{K}$ of shape $\mathcal{P}$:
$$\mathcal{P}\xrightarrow{\ F\ }\mathbf{Cat}\ \text{(weight)},\qquad\qquad \mathcal{P}\xrightarrow{\ G\ }\mathcal{K}\ \text{(diagram)}.$$
The 2-functor $G$ gives rise to a 2-functor
$$\int_p [Fp,\mathcal{K}(-,Gp)]=[\mathcal{P},\mathbf{Cat}](F,\mathcal{K}(-,G)):\ \mathcal{K}^{op}\longrightarrow\mathbf{Cat}$$
which maps a 0-cell $A$ to the category $[\mathcal{P},\mathbf{Cat}](F,\mathcal{K}(A,G-))$. A representation of this contravariant 2-functor is an object $\{F,G\}\in\mathcal{K}$ together with a 2-natural isomorphism
$$\mathcal{K}(-,\{F,G\})\xrightarrow{\ \sim\ }[\mathcal{P},\mathbf{Cat}](F,\mathcal{K}(-,G))$$
whose components are isomorphisms of categories
$$\mathcal{K}(A,\{F,G\})\cong[\mathcal{P},\mathbf{Cat}](F,\mathcal{K}(A,G-)).$$
The unit of this representation is a functor $\mathbf{1}\to[\mathcal{P},\mathbf{Cat}](F,\mathcal{K}(\{F,G\},G))$, which corresponds uniquely to a 2-natural transformation $\xi:F\Rightarrow\mathcal{K}(\{F,G\},G-)$.

Via this 2-natural isomorphism, the object $\{F,G\}$ of $\mathcal{K}$ satisfies a universal property which can be expressed at two levels:

  • The 1-dimensional aspect of the universal property states that every 2-natural transformation $\rho:F\Rightarrow\mathcal{K}(A,G-)$ factorizes as
$$F\xrightarrow{\ \xi\ }\mathcal{K}(\{F,G\},G-)\xrightarrow{\ \mathcal{K}(h,1)\ }\mathcal{K}(A,G-)$$
for a unique 1-cell $h:A\to\{F,G\}$ in $\mathcal{K}$, where the second arrow is just pre-composition with $h$.

  • The 2-dimensional aspect of the universal property states that every modification $\theta:\rho\Rrightarrow\rho'$ factorizes as $\theta=\mathcal{K}(\alpha,1)\cdot\xi$ for a unique 2-cell $\alpha:h\Rightarrow h'$ in $\mathcal{K}$.

The fact that the 2-dimensional aspect (which asserts an isomorphism of categories) does not in general follow from the 1-dimensional aspect (which asserts a bijection between the hom-sets of the underlying categories) is a recurrent issue in the paper. In fact, things would be different if the underlying-category functor
$$\mathcal{V}(I,-)=(\ )_0:\mathcal{V}\text{-}\mathbf{Cat}\to\mathbf{Cat}$$
were conservative, in which case the 1-dimensional universal property would always imply the 2-dimensional one. This is certainly not the case for $\mathcal{V}=\mathbf{Cat}$: the respective functor discards all the 2-cells and is not even faithful. However, if we already know that a weighted limit exists, then the first level of the universal property suffices to detect it up to isomorphism.

Completeness of 2-categories

A 2-category $\mathcal{K}$ is complete when all limits $\{F,G\}$ exist. The defining 2-natural isomorphism extends the mapping $(F,G)\mapsto\{F,G\}$ to a functor of two variables (the weighted limit functor)
$$\{-,-\}:[\mathcal{P},\mathbf{Cat}]^{op}\times[\mathcal{P},\mathcal{K}]\longrightarrow \mathcal{K},$$
namely the left parametrized adjoint (more precisely, its opposite) of the functor
$$\mathcal{K}(-,?):\mathcal{K}^{op}\times[\mathcal{P},\mathcal{K}]\to[\mathcal{P},\mathbf{Cat}]$$
mapping an object $A$ and a functor $G$ to $\mathcal{K}(A,G-)$. A colimit in $\mathcal{K}$ is a limit in $\mathcal{K}^{op}$, and the weighted colimit functor is
$$-\ast-:[\mathcal{P}^{op},\mathbf{Cat}]\times[\mathcal{P},\mathcal{K}]\longrightarrow\mathcal{K}.$$
Apart from the evident duality, we observe that colimits are often harder to compute than limits. This may partly be due to the fact that $\{F,G\}$ is determined by the representable $\mathcal{K}(-,\{F,G\})$, which describes generalized elements of $\{F,G\}$, whereas the description of $\mathcal{K}(F\ast G,-)$ gives us arrows out of $F\ast G$. For example, limits in $\mathbf{Cat}$ are easy to compute via
$$[\mathcal{A},\{F,G\}]\cong[\mathcal{P},\mathbf{Cat}](F,[\mathcal{A},G-])\cong[\mathcal{A},[\mathcal{P},\mathbf{Cat}](F,G)];$$
in particular, taking $\mathcal{A}=\mathbf{1}$ gives the objects of the category $\{F,G\}$ and $\mathcal{A}=\mathbf{2}$ gives its morphisms. On the contrary, colimits in $\mathbf{Cat}$ are not as straightforward (apart from the property $F\ast G\cong G\ast F$).
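
In particular (a standard instance of the above chain of isomorphisms, spelled out here for concreteness), for $\mathcal{K}=\mathbf{Cat}$ the weighted limit is the hom-category of the functor 2-category,
$$\{F,G\}\cong[\mathcal{P},\mathbf{Cat}](F,G),$$
so an object of $\{F,G\}$ is a 2-natural transformation $F\Rightarrow G$ and a morphism is a modification between such.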

Notice that, just as ordinary limits are defined via representability in terms of limits in $\mathbf{Set}$, we can define weighted limits in terms of limits of representables in $\mathbf{Cat}$:
$$\mathcal{K}(A,\{F,G\})\cong\{F,\mathcal{K}(A,G-)\},\qquad\mathcal{K}(F\ast G,A)\cong\{F,\mathcal{K}(G-,A)\}.$$
On the other hand, when the weights are representables, the Yoneda lemma gives
$$\{\mathcal{P}(P,-),G\}\cong GP, \qquad \mathcal{P}(-,P)\ast G\cong GP.$$

The main result on general $\mathcal{V}$-completeness in Kelly’s book says that a $\mathcal{V}$-enriched category is complete if and only if it admits all conical limits (equivalently, products and equalizers) and cotensor products. Explicitly, conical limits are those whose weight is the constant $\mathcal{V}$-functor $\Delta I$, whereas cotensors are those where the domain enriched category $\mathcal{P}$ is the unit category $\mathbf{1}$, hence the weight and the diagram are determined by objects of $\mathcal{V}$ and $\mathcal{K}$ respectively. Once again, for $\mathcal{V}=\mathbf{Cat}$ an elementary description of both kinds of limits is possible.
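
To make the cotensor case explicit (a direct specialization of the defining representation, recorded here as a reminder): for $\mathcal{X}\in\mathbf{Cat}$ and $B\in\mathcal{K}$, the cotensor $\{\mathcal{X},B\}$ is characterized by isomorphisms of categories
$$\mathcal{K}(A,\{\mathcal{X},B\})\cong[\mathcal{X},\mathcal{K}(A,B)],$$
2-natural in $A$; dually, the tensor $\mathcal{X}\ast B$ satisfies $\mathcal{K}(\mathcal{X}\ast B,A)\cong[\mathcal{X},\mathcal{K}(B,A)]$.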

Notice that when a 2-category admits tensor products of the form $\mathbf{2}\ast A$, the 2-dimensional universal property follows from the 1-dimensional one for every limit, because of the conservativity of the functor $\mathbf{Cat}_0(\mathbf{2},-)$ and the definition of tensors. Moreover, the category $\mathbf{2}$ is a strong generator in $\mathbf{Cat}$, hence the existence of the single cotensor $\{\mathbf{2},B\}$ along with conical limits in a 2-category $\mathcal{K}$ is enough to deduce 2-completeness.

$\mathbf{Cat}$ itself has cotensor and tensor products, given by $\{\mathcal{A},\mathcal{B}\}=[\mathcal{A},\mathcal{B}]$ and $\mathcal{A}\ast\mathcal{B}=\mathcal{A}\times\mathcal{B}$. It is of course also cocomplete, all colimits being constructed from tensors and ordinary colimits in $\mathbf{Cat}_0$ (which give the conical colimits in $\mathbf{Cat}$ thanks to the existence of the cotensor $[\mathbf{2},B]$).

If we were to make use of ends and coends, the explicit construction of an arbitrary 2-(co)limit in $\mathcal{K}$ as the (co)equalizer of a pair of arrows between (co)products of (co)tensors coincides with
$$\{F,G\}=\int_K \{FK,GK\}, \qquad F\ast G=\int^K FK\ast GK.$$
Such an approach simplifies the proofs of many useful properties of limits and colimits, such as
$$\{F,\{G,H\}\}\cong\{F\ast G,H\},\qquad (G\ast F)\ast H\cong F\ast(G\ast H)$$
for appropriate 2-functors.

Famous finite 2-limits

The paper provides descriptions of some important classes of limits in 2-categories, essentially by exhibiting the unit of the defining representation in each particular case. The main examples are summarized in the following table:

[Table omitted: each row lists a class of 2-limit together with its weight and the shape of its diagram; the first row is the inserter.]

Let’s briefly go through the explicit construction of an inserter in a 2-category $\mathcal{K}$. The weight and diagram shape are as in the first line of the above table; denote by $B\overset{f}{\underset{g}{\rightrightarrows}}C$ the image of the diagram in $\mathcal{K}$. The standard technique is to identify the form of the objects and morphisms of the hom-category $[\mathcal{P},\mathbf{Cat}](F,\mathcal{K}(A,G-))$ of the functor 2-category, and then state both aspects of the universal property.

An object is a 2-natural transformation $\alpha:F\Rightarrow\mathcal{K}(A,G-)$ with components $\alpha_\bullet:\mathbf{1}\to\mathcal{K}(A,B)$ and $\alpha_\star:\mathbf{2}\to\mathcal{K}(A,C)$ satisfying the usual naturality condition (2-naturality follows trivially, since $\mathcal{P}$ has only identity 2-cells). This amounts to the following data:

  • a 1-cell $\alpha_\bullet:A\to B$, i.e. the object of $\mathcal{K}(A,B)$ determined by the functor $\alpha_\bullet$;

  • a 2-cell $\alpha_\star:\alpha_\star 0\Rightarrow\alpha_\star 1$, i.e. the morphism of $\mathcal{K}(A,C)$ determined by the functor $\alpha_\star$;

  • naturality conditions, which force the 1-cells $\alpha_\star 0$, $\alpha_\star 1$ to factorize as $\alpha_\star 0=A\xrightarrow{\alpha_\bullet}B\xrightarrow{f}C$ and $\alpha_\star 1=A\xrightarrow{\alpha_\bullet}B\xrightarrow{g}C$.

We can encode the above data as a 1-cell $\alpha_\bullet:A\to B$ together with a 2-cell
$$\alpha_\star:\ f\circ\alpha_\bullet\Longrightarrow g\circ\alpha_\bullet:\ A\to C.$$
Now a morphism is a modification $m:\alpha\Rrightarrow\beta$ between two objects as above. This has components

  • $m_\bullet:\alpha_\bullet\Rightarrow\beta_\bullet$ in $\mathcal{K}(A,B)$;

  • $m_\star:\alpha_\star\Rightarrow\beta_\star$, given by 2-cells $m_\star^0:\alpha_\star 0\Rightarrow\beta_\star 0$ and $m_\star^1:\alpha_\star 1\Rightarrow\beta_\star 1$ in $\mathcal{K}(A,C)$ satisfying the naturality condition $m^1_\star\circ\alpha_\star=\beta_\star\circ m^0_\star$.

The modification condition reads $m^0_\star=f\cdot m_\bullet$ and $m^1_\star=g\cdot m_\bullet$, i.e. it gives the components of $m_\star$ as whiskered composites of $m_\bullet$. We can thus express such a modification as a single 2-cell $m_\bullet$ satisfying $g m_\bullet\circ\alpha_\star=\beta_\star\circ f m_\bullet$ (graphically, this is expressed by pasting $m_\bullet$ onto the appropriate sides of $\alpha_\star$ and $\beta_\star$).

This encoding simplifies the statement of the universal property for $\{F,G\}$, as the object of $\mathcal{K}$ through which any such natural transformation and modification factorize uniquely in the appropriate way (namely, through the unit $\xi$). A very similar process identifies the other classes of limits. As an illustration, let’s consider some of these limits in the 2-category $\mathbf{Cat}$.

  • The inserter of two functors $F,G:\mathcal{B}\to\mathcal{C}$ is a category $\mathcal{A}$ whose objects are pairs $(B,h)$ where $B\in\mathcal{B}$ and $h:FB\to GB$ in $\mathcal{C}$. A morphism $(B,h)\to(B',h')$ is an arrow $f:B\to B'$ in $\mathcal{B}$ such that the following square commutes:
$$\begin{matrix} FB & \overset{Ff}{\longrightarrow} & FB' \\ {}_{h}\downarrow & & \downarrow{}_{h'} \\ GB & \underset{Gf}{\longrightarrow} & GB'. \end{matrix}$$
The functor $\alpha_\bullet=P:\mathcal{A}\to\mathcal{B}$ is just the forgetful functor, and the natural transformation is given by $(\alpha_\star)_{(B,h)}=h$. (A small computational sketch of this construction is given just after this list.)

  • The comma-object of two functors $F,G$ is precisely the comma category. If the functors also have the same domain, their inserter is a subcategory of the comma category.

  • The equifier of two natural transformations $\phi^1,\phi^2:F\Rightarrow G:\mathcal{B}\to\mathcal{C}$ is the full subcategory $\mathcal{A}$ of $\mathcal{B}$ spanned by those objects $B$ for which $\phi^1_B=\phi^2_B$ in $\mathcal{C}$.
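
To make the inserter in $\mathbf{Cat}$ completely concrete, here is a small computational sketch (my own illustration, not part of the paper or of this summary): it enumerates the objects and morphisms of the inserter of two functors between finite categories, encoded in a deliberately naive way. The class name, the encoding and the toy example are all invented for the purpose of illustration.

```python
from itertools import product

class FiniteCat:
    """A finite category, encoded naively: objects, morphisms (name -> (src, tgt)),
    and a composition table ((g, f) -> name of g∘f, defined whenever tgt(f) == src(g))."""
    def __init__(self, objects, morphisms, compose):
        self.objects, self.morphisms, self.compose = objects, morphisms, compose

def inserter(B, C, F_ob, F_mor, G_ob, G_mor):
    """Objects: pairs (b, h) with h : F(b) -> G(b) in C.
       Morphisms (b, h) -> (b', h'): f : b -> b' in B with h' ∘ F(f) = G(f) ∘ h."""
    objs = [(b, h) for b in B.objects
            for h, (s, t) in C.morphisms.items()
            if s == F_ob[b] and t == G_ob[b]]
    mors = []
    for (b, h), (b2, h2) in product(objs, objs):
        for f, (s, t) in B.morphisms.items():
            if s == b and t == b2 and \
               C.compose[(h2, F_mor[f])] == C.compose[(G_mor[f], h)]:
                mors.append((f, (b, h), (b2, h2)))
    return objs, mors

# Toy check: B is the terminal category, C has two parallel arrows u, v : X -> Y,
# F picks out X and G picks out Y.
B = FiniteCat(["*"], {"id*": ("*", "*")}, {("id*", "id*"): "id*"})
C = FiniteCat(["X", "Y"],
              {"idX": ("X", "X"), "idY": ("Y", "Y"), "u": ("X", "Y"), "v": ("X", "Y")},
              {("idX", "idX"): "idX", ("idY", "idY"): "idY",
               ("u", "idX"): "u", ("idY", "u"): "u",
               ("v", "idX"): "v", ("idY", "v"): "v"})
F_ob, F_mor = {"*": "X"}, {"id*": "idX"}
G_ob, G_mor = {"*": "Y"}, {"id*": "idY"}
print(inserter(B, C, F_ob, F_mor, G_ob, G_mor))
# -> objects [('*', 'u'), ('*', 'v')], each with only its identity endomorphism
```

Running it on the toy example prints the two objects $(\ast,u)$ and $(\ast,v)$ and only their identity morphisms, exactly as the abstract description predicts.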

There is a variety of constructions of new classes of limits out of given ones: endo-identifiers, inverters, iso-inserters, comma-objects, iso-comma-objects, lax/oplax/pseudo limits of arrows, and the cotensors $\{\mathbf{2},K\}$, $\{\mathbf{I},K\}$ can all be built from inserters, equifiers and binary products in the 2-category $\mathcal{K}$. Along with the substantial construction of arbitrary cotensors out of these three classes, this establishes P(roducts)I(nserters)E(quifiers) limits as essential tools, in particular in relation to categories of algebras for 2-monads. Notice that equalizers are ‘too tight’ to fit in certain 2-categories of importance such as $\mathbf{Lex}$.

Weaker notions of limits in 2-categories

The concept of a weighted 2-limit strongly depends on the specific structure of the 2-category $[\mathcal{P},\mathbf{Cat}]$ of 2-functors, 2-natural transformations and modifications, for 2-categories $\mathcal{P}$ and $\mathbf{Cat}$. If we alter this structure by considering lax natural transformations or pseudonatural transformations, we obtain the definitions of the lax limit $\{F,G\}_l$ and the pseudo limit $\{F,G\}_p$ as the representing objects of the 2-functors
$$Lax[\mathcal{P},\mathbf{Cat}](F,\mathcal{K}(-,G)):\mathcal{K}^{op}\to\mathbf{Cat},\qquad Psd[\mathcal{P},\mathbf{Cat}](F,\mathcal{K}(-,G)):\mathcal{K}^{op}\to\mathbf{Cat}.$$
Notice that the functor categories $Lax[\mathcal{P},\mathcal{L}]$ and $Psd[\mathcal{P},\mathcal{L}]$ are 2-categories whenever $\mathcal{L}$ is a 2-category, hence the defining isomorphisms are again isomorphisms of categories as before.

An important remark is that any lax or pseudo limit in $\mathcal{K}$ can in fact be expressed as a ‘strict’ weighted 2-limit. This is done by replacing the original weight with its image under the left adjoint to the inclusion 2-functors $[\mathcal{P},\mathbf{Cat}]\hookrightarrow Lax[\mathcal{P},\mathbf{Cat}]$ and $[\mathcal{P},\mathbf{Cat}]\hookrightarrow Psd[\mathcal{P},\mathbf{Cat}]$. The converse does not hold: for example, inserters and equifiers are neither lax nor pseudo limits.

We can relax the notion of limit in a 2-category even further, and define the bilimit $\{F,G\}_b$ of 2-functors $F$ and $G$ as the object representing the appropriate 2-functor only up to equivalence:
$$\mathcal{K}(A,\{F,G\}_b)\simeq Psd[\mathcal{P},\mathbf{Cat}](F,\mathcal{K}(A,G-)).$$
This is of course a particular case of general bilimits in bicategories, for which $\mathcal{P}$ and $\mathcal{K}$ are allowed to be bicategories and $F$ and $G$ homomorphisms of bicategories. The above equivalence of categories expresses a birepresentation of the homomorphism $Hom[\mathcal{P},\mathbf{Cat}](F,\mathcal{K}(-,G)):\mathcal{K}^{op}\to\mathbf{Cat}$.

Evidently, bilimits (first introduced by Ross Street) may exist even when pseudo limits do not, since they only require an equivalence rather than an isomorphism of hom-categories. The following two results sum up the conditions ensuring that a 2-category has all lax, pseudo and bilimits.

  • A 2-category with products, inserters and equifiers has all lax and pseudo limits (whereas it may not have all strict 2-limits).

  • A 2-category with biproducts, biequalizers and bicotensors is bicategorically complete. Equivalently, it admits all bilimits if and only if, for all 2-functors $F:\mathcal{P}\to\mathbf{Cat}$ and $G:\mathcal{P}\to\mathcal{K}$ from a small ordinary category $\mathcal{P}$, the above-mentioned birepresentation exists.

Street’s construction of an arbitrary bilimit requires a descent object of a 3-truncated bicosimplicial object in $\mathcal{K}$. An appropriate modification of the arguments exhibits lax and pseudo limits as PIE limits.

These weaker forms of limits in 2-categories are fundamental for the theory of 2-categories and bicategories. Many important constructions such as the Eilenberg-Moore object as well as the Grothendieck construction on a fibration, arise as lax/oplax limits. They are also crucial in 2-monad theory, for example when studying categories of (strict) algebras with non-strict (pseudo or even lax/oplax) morphisms, which are more common in nature.

by riehl (eriehl@math.harvard.edu) at April 18, 2014 07:56 PM

The Great Beyond - Nature blog

Moon dust probe crashes
The LADEE mission has ended in a controlled crash. (Image credit: NASA)

A NASA spacecraft that studied lunar dust vaporized into its own cloud of dust when it hit the Moon, as planned, in a mission-ending impact on 17 April. Launched last September, the Lunar Atmosphere and Dust Environment Explorer (LADEE) finished its primary mission in March. In early April, on an extended mission, it made close passes as low as 2 kilometres above the surface, gathering data on more than 100 low-elevation orbits. Mission controllers deliberately crashed it to avoid the chance that, left alone, it might crash and contaminate historic locations such as the Apollo landing sites.

During its lifetime, LADEE made the best measurements yet of the dust generated when tiny meteorites bombard the surface. It is still hunting the mystery of a horizon glow seen by Apollo astronauts. It also carried a test for future laser communications between spacecraft and Earth.

In its final days the probe unexpectedly survived the cold and dark of a total lunar eclipse on 15 April. Just before the eclipse, NASA had the spacecraft perform a final engine burn that determined the crash trajectory. LADEE normally coped with just one hour of darkness every time it looped behind the Moon. The eclipse put it into darkness for some four hours, potentially jeopardizing the ability of its battery-powered heaters to keep the spacecraft from freezing to death. But the spacecraft survived.

NASA has been running a contest to predict the exact date and time of the LADEE impact, and this morning predicted there may be multiple winners. When it hit, the probe was travelling about three times as fast as a rifle bullet. In the coming months the Lunar Reconnaissance Orbiter will take pictures of the crash site, which engineers are still determining.

by Alexandra Witze at April 18, 2014 02:35 PM

arXiv blog

Jupiter's Radio Emissions Could Reveal the Oceans on Its Icy Moons, Say Planetary Geologists

We should be able to use Jupiter’s radio emissions like ground-penetrating radar to study the oceans on Europa, Ganymede, and Callisto, say space scientists.


Among the most exciting destinations in the Solar System are Jupiter’s icy moons Europa, Ganymede, and Callisto. While the surface temperature on these bodies is a cool -160 °C, astronomers believe they all hide oceans of liquid water beneath their icy surfaces. That raises the intriguing possibility that these moons have all the ingredients required for life.

April 18, 2014 02:00 PM

Symmetrybreaking - Fermilab/SLAC

Is the universe balanced on a pinhead?

New precise measurements of the mass of the top quark bring back the question: Is our universe inherently unstable?

Scientists have known the mass of the heaviest fundamental particle, the top quark, since 1995.

But recent, more precise measurements of this mass have revived an old question: Why is it so huge?

No one is sure, but it might be a sign that our universe is inherently unstable. Or it might be a sign that some factor we don’t yet understand is keeping us in balance.

The top quark’s mass comes from its interaction with the Higgs field—which is responsible for the delicate balance of mass that allows matter to exist in its solid, stable form.

by Sarah Charley at April 18, 2014 01:00 PM

astrobites - astro-ph reader's digest

Arecibo Detects a Fast Radio Burst

  • Title: Fast Radio Burst Discovered in the Arecibo Pulsar ALFA Survey
  • Authors: L. G. Spitler, J. M. Cordes, J. W. T. Hessels, D. R. Lorimer, M. A. McLaughlin, S. Chatterjee, F. Crawford, J. S. Deneva, V. M. Kaspi, R. S. Wharton, B. Allen, S. Bogdanov, A. Brazier, F. Camilo, P. C. C. Freire, F. A. Jenet, C. Karako–Argaman, B. Knispel, P. Lazarus, K. J. Lee, J. van Leeuwen, R. Lynch, A. G. Lyne, S. M. Ransom, P. Scholz, X. Siemens, I. H. Stairs, K. Stovall, J. K. Swiggum, A. Venkataraman, W. W. Zhu, C. Aulbert, H. Fehrmann
  • First Author’s Institution: Max Planck Institute for Radio Astronomy
  • Status: Accepted for Publication in Astronomy & Astrophysics

Fast radio bursts (FRBs) are no strangers to regular Astrobites readers: these mysterious radio signals are bright bursts of radiation which last a fraction of a second before disappearing, never to repeat again. Not much is known about them except that most (but not all) appear to originate from very far away, outside the galaxy. Various theories have been proposed as to what may cause these signals. Many astronomers pointed out that the signals may not be real at all: since the first was published in 2007 there have only been six FRBs recorded in the literature, all of which were detected at Parkes Observatory in Australia, while surveys at other radio telescopes came up empty. Based on this, debate raged as to whether the FRB signals could have a more mundane origin than pulses from beyond the galaxy, such as instrumentation noise or some phenomenon unique to the Parkes site, like unusual weather patterns.

Figure 1: The signal from FRB 121102. The top plot shows the signal in time and frequency (showing its dispersion measure), while the lower plots show the signal-to-noise ratio with respect to time (left) and frequency (right). The FRB’s properties are the same as those detected previously at Parkes.

Today’s paper is a vindication for the radio astronomers who insisted FRBs are astronomical in nature: it reports the first-ever FRB detection by a telescope other than Parkes! Specifically, this FRB was detected at Arecibo Observatory by the Pulsar ALFA Survey, which surveyed the galactic plane at 1.4 GHz in order to detect single pulses from pulsars. Known as FRB 121102, the pulse was observed in November 2012 in one of the 7 beams of the ALFA receiver, lasted about 3 ms, and came from the direction of the galactic plane. Despite this, astronomers think it likely that the pulse originated from outside the galaxy, based on its dispersion measure, which records how much the signal is “smeared” across frequency by its travel through space (as explained well in this Astrobite, signals arrive later at lower frequencies when traveling long distances, giving an estimate of how far away the signal originated). This FRB’s dispersion measure was three times greater than the maximum galactic dispersion measure expected along the line of sight from which the FRB was observed, based on the distribution of matter in that part of the galaxy. While the authors suggest the pulse might be from a rotating radio transient (a special kind of pulsar), no other pulses were detected in follow-up observations, so the signal is probably not just an unusually bright pulse from a pulsar.
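
To get a feel for what a dispersion measure does to a burst, here is a back-of-the-envelope sketch (my own illustration, not from the paper): it evaluates the standard cold-plasma dispersion delay. The DM used below is just a round number of roughly the right size, not the measured DM of FRB 121102, and the two frequencies are only approximately the edges of the ALFA band.

```python
# Illustrative only: the standard dispersion delay used in pulsar/FRB work.
# The DM below is an assumed round number, not the measured value for FRB 121102.

def dispersion_delay_ms(dm_pc_cm3, freq_ghz):
    """Arrival delay (in ms) relative to an infinitely high frequency:
    t = 4.149 ms * DM * nu^-2, with DM in pc cm^-3 and nu in GHz."""
    return 4.149 * dm_pc_cm3 * freq_ghz ** -2

dm = 600.0                 # assumed dispersion measure, pc cm^-3
f_high, f_low = 1.5, 1.2   # GHz, roughly the edges of the 1.4 GHz ALFA band
sweep_ms = dispersion_delay_ms(dm, f_low) - dispersion_delay_ms(dm, f_high)
print(f"Sweep across the band: {sweep_ms:.0f} ms")   # ~620 ms for these inputs
```

It is this characteristic ν⁻² sweep that single-pulse searches look for when separating celestial bursts from local interference.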

Instead, the authors point to how FRB 121102’s high dispersion measure, combined with its similar properties to previously observed FRBs, suggests an extragalactic origin for the pulse. Using the observed dispersion measure of the pulse and estimates of how it scales intergalactically, the team concludes the FRB originated at a distance of z = 0.26, or about 1 Gpc away. Just what could be creating bright bursts so far away is a definite mystery!

Finally, based on their detection of this FRB the authors estimate an event rate similar to the previous one set by Parkes Observatory, namely that there should be thousands of FRBs in the sky every day. (They are just very hard to detect because they are so brief.) And the fact that this is the first pulse detected by an independent observatory is very encouraging, as it certainly gives credence to the idea that Fast Radio Bursts are a real astronomical phenomenon. With luck, we will soon hear about many more FRB detections from multiple observatories, which may unravel the secrets of where they come from.

by Yvette Cendes at April 18, 2014 09:54 AM

Tommaso Dorigo - Scientificblogging

Personal Information
Long-time readers of this blog (are there any left?) know me well, since I often used to write posts about personal matters here and on my previous sites. However, I am aware that readers come and go, and I also realize that lately I have not disclosed much of my personal life here; things like where I work, what my family is like, what I do in my spare time, and what my dreams and projects for the future are. So it is a good idea to write some personal details here.

by Tommaso Dorigo at April 18, 2014 08:59 AM

John Baez - Azimuth

New IPCC Report (Part 7)

guest post by Steve Easterbrook

(7) To stay below 2 °C of warming, the world must become carbon negative

Only one of the four future scenarios (RCP2.6) shows us staying below the UN’s commitment to no more than 2 ºC of warming. In RCP2.6, emissions peak soon (within the next decade or so), and then drop fast, under a stronger emissions reduction policy than anyone has ever proposed in international negotiations to date. For example, the post-Kyoto negotiations have looked at targets in the region of 80% reductions in emissions over say a 50 year period. In contrast, the chart below shows something far more ambitious: we need more than 100% emissions reductions. We need to become carbon negative:

(Figure 12.46) a) CO2 emissions for the RCP2.6 scenario (black) and three illustrative modified emission pathways leading to the same warming, b) global temperature change relative to preindustrial for the pathways shown in panel (a).

The graph on the left shows four possible CO2 emissions paths that would all deliver the RCP2.6 scenario, while the graph on the right shows the resulting temperature change for these four. They all give similar results for temperature change, but differ in how we go about reducing emissions. For example, the black curve shows CO2 emissions peaking by 2020 at a level barely above today’s, and then dropping steadily until emissions are below zero by about 2070. Two other curves show what happens if emissions peak higher and later: the eventual reduction has to happen much more steeply. The blue dashed curve offers an implausible scenario, so consider it a thought experiment: if we held emissions constant at today’s level, we have exactly 30 years left before we would have to instantly reduce emissions to zero forever.
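
To see where a “30 years” figure like that comes from, here is a toy version of the arithmetic (my own sketch; both numbers below are rounded assumptions used for illustration, not values quoted in the report or in this post):

```python
# Toy carbon-budget arithmetic.  Both inputs are assumed round numbers used
# purely for illustration, not figures taken from the IPCC report.
remaining_budget_gtco2 = 1100.0   # assumed remaining CO2 budget compatible with ~2 °C
annual_emissions_gtco2 = 37.0     # assumed current global CO2 emissions per year

years_left = remaining_budget_gtco2 / annual_emissions_gtco2
print(f"~{years_left:.0f} years at constant emissions, then straight to zero")
```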

Notice where the zero point is on the scale on that left-hand graph. Ignoring the unrealistic blue dashed curve, all of these pathways require the world to go net carbon negative sometime soon after mid-century. None of the emissions targets currently being discussed by any government anywhere in the world are sufficient to achieve this. We should be talking about how to become carbon negative.

One further detail. The graph above shows the temperature response staying well under 2°C for all four curves, although the uncertainty band reaches up to 2°C. But note that this analysis deals only with CO2. The other greenhouse gases have to be accounted for too, and together they push the temperature change right up to the 2°C threshold. There’s no margin for error.


You can download all of Climate Change 2013: The Physical Science Basis here. Click below to read any part of this series:

  1. The warming is unequivocal.
  2. Humans caused the majority of it.
  3. The warming is largely irreversible.
  4. Most of the heat is going into the oceans.
  5. Current rates of ocean acidification are unprecedented.
  6. We have to choose which future we want very soon.
  7. To stay below 2°C of warming, the world must become carbon negative.
  8. To stay below 2°C of warming, most fossil fuels must stay buried in the ground.

Climate Change 2013: The Physical Science Basis is also available chapter by chapter here:

  1. Front Matter
  2. Summary for Policymakers
  3. Technical Summary
    1. Supplementary Material

Chapters

  1. Introduction
  2. Observations: Atmosphere and Surface
    1. Supplementary Material
  3. Observations: Ocean
  4. Observations: Cryosphere
    1. Supplementary Material
  5. Information from Paleoclimate Archives
  6. Carbon and Other Biogeochemical Cycles
    1. Supplementary Material
  7. Clouds and Aerosols

    1. Supplementary Material
  8. Anthropogenic and Natural Radiative Forcing
    1. Supplementary Material
  9. Evaluation of Climate Models
  10. Detection and Attribution of Climate Change: from Global to Regional
    1. Supplementary Material
  11. Near-term Climate Change: Projections and Predictability
  12. Long-term Climate Change: Projections, Commitments and Irreversibility
  13. Sea Level Change
    1. Supplementary Material
  14. Climate Phenomena and their Relevance for Future Regional Climate Change
    1. Supplementary Material

Annexes

  1. Annex I: Atlas of Global and Regional Climate Projections
    1. Supplementary Material: RCP2.6, RCP4.5, RCP6.0, RCP8.5
  2. Annex II: Climate System Scenario Tables
  3. Annex III: Glossary
  4. Annex IV: Acronyms
  5. Annex V: Contributors to the WGI Fifth Assessment Report
  6. Annex VI: Expert Reviewers of the WGI Fifth Assessment Report

by John Baez at April 18, 2014 08:46 AM

April 17, 2014

Symmetrybreaking - Fermilab/SLAC

Not just old codgers

During a day of talks at Stanford University, theoretical physicist Leonard Susskind explained “Why I Teach Physics to Old Codgers, and How It Got to Be a YouTube Sensation.”

Stanford professor Leonard Susskind has a well-deserved reputation among his colleagues as one of the most imaginative theorists working in physics today. During his nearly five decades in the field, he’s taken leading roles in the study of quark confinement, technicolor, black hole complementarity, the holographic principle and string theory. Even now, at the age of 73, he’s still in the thick of it, batting around ideas with his colleagues about firewalls, the latest twist on black holes.

by Lori Ann White at April 17, 2014 10:51 PM

Quantum Diaries

Searching for Dark Matter With the Large Underground Xenon Experiment

In December, a result from the Large Underground Xenon (LUX) experiment was featured in Nature’s Year In Review as one of the most important scientific results of 2013. As a student who has spent the past four years working on this experiment I will do my best to provide an introduction to this experiment and hopefully answer the question: why all the hype over what turned out to be a null result?

The LUX detector, deployed into its water tank shield 4850 feet underground.

Direct Dark Matter Detection

Weakly Interacting Massive Particles (WIMPs), or particles that interact only through the weak nuclear force and gravity, are a particularly compelling solution to the dark matter problem because they arise naturally in many extensions to the Standard Model. Quantum Diaries did a wonderful series last summer on dark matter, located here, so I won’t get into too many details about dark matter or the WIMP “miracle”, but I would however like to spend a bit of time talking about direct dark matter detection.

The Earth experiences a dark matter “wind”, or flux of dark matter passing through it, due to our motion through the dark matter halo of our galaxy. Using standard models for the density and velocity distribution of the dark matter halo, we can calculate that there are nearly 1 billion WIMPs per square meter per second passing through the Earth. In order to match observed relic abundances in the universe, we expect these WIMPs to have a small yet measurable interaction cross-section with ordinary nuclei.
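
To see roughly where a number like that comes from, here is a back-of-the-envelope flux estimate (my own sketch; the local density, WIMP mass and speed below are typical assumed halo-model values, not numbers taken from this post):

```python
# Back-of-the-envelope WIMP number flux.  All inputs are assumed, typical
# halo-model values, used here purely for illustration.
rho_dm = 0.3e9          # local dark-matter density in eV/cm^3  (0.3 GeV/cm^3)
m_wimp = 100e9          # assumed WIMP mass in eV (100 GeV)
v_rel  = 230e5          # typical relative speed in cm/s (~230 km/s)

number_density = rho_dm / m_wimp                  # WIMPs per cm^3
flux_per_m2_per_s = number_density * v_rel * 1e4  # 1 m^2 = 1e4 cm^2
print(f"{flux_per_m2_per_s:.1e} WIMPs per square metre per second")  # ~7e8
```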

In other words, there must be a small-but-finite probability of an incoming WIMP scattering off a target in a laboratory in such a way that we can detect it. The goal of direct detection experiments is therefore to look for these scattering events. These events are characterized by recoil energies of a few to tens of keV, which is quite small, but it is large enough to produce an observable signal.

So here’s the challenge: How do you build an experiment that can measure an extremely small, extremely rare signal with very high precision amid large amounts of background?

Why Xenon?

The signal from a recoil event inside a direct detection target typically takes one of three forms: scintillation light, ionization of an atom inside the target, or heat energy (phonons). Most direct detection experiments focus on one (or two) of these channels.

Xenon is a natural choice for a direct detection medium because it is easy to read out signals from two of these channels. Energy deposited in the scintillation channel is easily detectable because xenon is transparent to its own characteristic 175-nm scintillation light. Energy deposited in the ionization channel is likewise easily detectable, since ionization electrons under the influence of an applied electric field can drift through xenon for distances of up to several meters. These electrons can then be read out by any one of a couple of different charge readout schemes.

Furthermore, the ratio of the energy deposited in these two channels is a powerful tool for discriminating between nuclear recoils such as WIMPs and neutrons, which are our signal of interest, and electronic recoils such as gamma rays, which are a major source of background.

Xenon is also particularly good for low-background science because of its self-shielding properties. That is, because liquid xenon is so dense, gammas and neutrons tend to attenuate within just a few cm of entering the target. Any particle that does happen to be energetic enough to reach the center of the target has a high probability of undergoing multiple scatters, which are easy to pick out and reject in software. This makes xenon ideal not just for dark matter searches, but also for other rare event searches such as neutrinoless double-beta decay.
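
As a crude picture of self-shielding (my own sketch; the attenuation length below is an assumed, order-of-magnitude value for MeV-scale gammas in liquid xenon, not a number quoted in this post):

```python
import math

# Crude exponential-attenuation picture of self-shielding.  The attenuation
# length is an assumed order-of-magnitude value, used for illustration only.
attenuation_length_cm = 5.0

def surviving_fraction(depth_cm):
    """Fraction of externally produced gammas reaching a given depth untouched."""
    return math.exp(-depth_cm / attenuation_length_cm)

for depth in (5, 10, 20):
    print(f"{depth:2d} cm into the xenon: {surviving_fraction(depth):.3f} survive")
```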

The LUX Detector

The LUX experiment is located nearly a mile underground at the Sanford Underground Research Facility (SURF) in Lead, South Dakota. LUX rests on the 4850-foot level of the old Homestake gold mine, which was turned into a dedicated science facility in 2006.

Besides being a mining town and a center of Old West culture (The neighboring town, Deadwood, is famed as the location where Wild Bill Hickok met his demise in a poker game), Lead has a long legacy of physics. The same cavern where LUX resides once held Ray Davis’s famous solar neutrino experiment, which provided some of the first evidence for neutrino flavor oscillations and later won him the Nobel Prize.

A schematic of the LUX detector.

The detector itself is what is called a two-phase time projection chamber (TPC). It essentially consists of a 370-kg xenon target in a large titanium can. This xenon is cooled down to its condensation point (~165 K), so that the bulk of the xenon target is liquid, with a thin layer of gaseous xenon on top. LUX has 122 photomultiplier tubes (PMTs) in two arrays, one on the bottom looking up into the main volume of the detector and one on the top looking down. Just inside those arrays is a set of parallel wire grids that supply an electric field throughout the detector. A gate grid, located between the cathode and anode grids close to the liquid surface, allows the electric fields in the liquid and gas regions to be tuned separately.

When an incident particle interacts with a xenon atom inside the target, it excites or ionizes the atom. In a mechanism common to all noble elements, that atom will briefly bond with another nearby xenon atom. The subsequent decay of this “dimer” back into its two constituent atoms causes a photon to be emitted in the UV. In LUX, this flash of scintillation light, called primary scintillation light or S1, is immediately detected by the PMTs. Next, any ionization charge that is produced is drifted upwards by a strong electric field (~200 V/cm) before it can recombine. This charge cloud, once it reaches the liquid surface, is pulled into the gas phase and accelerated very rapidly by an even stronger electric field (several kV/cm), causing a secondary flash of scintillation called S2, which is also detected by the PMTs. A typical signal read out from an event in LUX therefore consists of a PMT trace with two tell-tale pulses. 

A typical event in LUX. The bottom plot shows the primary (S1) and secondary (S2) signals from each of the individual PMTs. The top two plots show the total size of the S1 and the S2 pulses.

As in any rare event search, controlling the backgrounds is of utmost importance. LUX employs a number of techniques to do so. By situating the detector nearly a mile underground, we reduce cosmic muon flux by a factor of 107. Next, LUX is deployed into a 300-tonne water tank, which reduces gamma backgrounds by another factor of 107 and neutrons by a factor of between 103 and 109, depending on their energy. Third, by carefully choosing a fiducial volume in the center of the detector, i.e., by cutting out events that happen near the edge of the target, we can reduce background by another factor of 104. And finally, electronic recoils produce much more ionization than do the nuclear recoils that we are interested in, so by looking at the ratio S2/S1 we can achieve over 99% discrimination between gammas and potential WIMPs. All this taken into account, the estimated background for LUX is less than 1 WIMP-like event throughout 300 days of running, making it essentially a zero-background experiment. The center of LUX is in fact the quietest place in the world, radioactively speaking.
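
Treating those handles as if they acted independently, a toy tally of the rounded factors quoted above looks like this (my own sketch, for illustration only; in reality the factors are neither exact nor fully independent):

```python
# Toy multiplication of the rounded suppression factors quoted above, treated
# as independent -- a simplification made purely for illustration.
suppression = {
    "rock overburden (cosmic muons)": 1e7,
    "water shield (gammas)":          1e7,
    "fiducial volume cut":            1e4,
    "S2/S1 discrimination":           1e2,   # ">99%" rejection, roughly a factor of 100
}

total = 1.0
for handle, factor in suppression.items():
    total *= factor
print(f"Combined suppression for these rounded inputs: ~{total:.0e}")
```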

Results From the First Science Run

From April to August 2013, LUX ran continuously, collecting 85.3 livedays of WIMP search data with a 118-kg fiducial mass, resulting in over ten thousand kg-days of data. A total of 83 million events were collected. Of these, only 6.5 million were single scatter events. After applying fiducial cuts and cutting on the energy region of interest, only 160 events were left. All of these 160 events were consistent with electronic recoils. Not a single WIMP was seen – the WIMP remains as elusive as the unicorn that has become the unofficial LUX mascot.

So why is this exciting? The LUX limit is the lowest yet – it represents a factor of 2-3 increase in sensitivity over the previous best limit at high WIMP masses, and it is over 20 times more sensitive than the next best limit for low-mass WIMPs.

The 90% confidence upper limit on the spin independent WIMP-nucleon interaction cross section: LUX compared to previous experiments.

Over the past few years, experiments such as DAMA/LIBRA, CoGeNT, CRESST, and CDMS-II Si have each reported signals that are consistent with WIMPs of mass 5-10 GeV/c2. This is in direct conflict with the null results from ZEPLIN, COUPP, and XENON100, to name a few, and was the source of a fair amount of controversy in the direct detection community.

The LUX result was able to fairly definitively close the door on this question.

If the low-mass WIMPs favored by DAMA/LIBRA, CoGeNT, CRESST, and CDMS-II Si do indeed exist, then statistically speaking LUX should have seen 1500 of them!

What’s Next?

Despite the conclusion of the 85-day science run, work on LUX carries on.

Just recently, there was a LUX talk presenting results from a calibration using low-energy neutrons as a proxy for WIMPs interacting within the detector, confirming the initial results from last autumn. Currently, LUX is gearing up for its next run, with the ultimate goal of collecting 300 livedays of WIMP-search data, which will extend the 2013 limit by a factor of five. And finally, a new detector called LZ is in the design stages, with a mass twenty times that of LUX and a far greater sensitivity.

***

For more details, the full LUX press release from October 2013 is located here:

http://www.youtube.com/watch?v=SMzAuhRFNQ0

by Nicole Larsen at April 17, 2014 07:57 PM

ZapperZ - Physics and Physicists

Dark Energy
In case you want an entertaining lesson or information on Dark Energy and why we think it is there, here's a nice video on it.



This video, in conjunction with the earlier video on Dark Matter, should give you some idea of what these “dark” entities are, based on what we currently know.

Zz.

by ZapperZ (noreply@blogger.com) at April 17, 2014 03:19 PM

astrobites - astro-ph reader's digest

Crowd-Sourcing Crater Identification

Title: The Variability of Crater Identification Among Expert and Community Crater Analysts
Authors: Stuart J. Robbins and others
First Author’s institution: University of Colorado at Boulder
Status: Published in Icarus

“Citizen scientist” projects have popped up all over the Internet in recent years. Here’s Wikipedia’s list, and here’s our astro-specific list. These projects usually tackle complex visual tasks like mapping neurons, or classifying galaxies (a project we’ve discussed before).

Fig. 1: The near side of the moon, a mosaic of images captured by the Lunar Reconnaissance Orbiter. Several mare and highlands are marked. The maria (Latin for “seas”, which is what early astronomers actually thought they were) were wiped as clean as a first-period chalkboard by lava flows some 3 billion years ago. (source: NASA/GFSC/ASU)

This is hard work. Even with all the professional scientists in the world we could not get through some of these tasks, not even with their grad students! But by asking for help from an army of untrained volunteers, scientists get much more data, and volunteers get to contribute to fundamental research and explore the beautiful patterns and eccentricities of nature.

The Moon Mappers project asks volunteers to identify craters on the Moon. One use for this work is to determine the relative ages of nearby surfaces. Newer surfaces, recently leveled by lava flows or tectonic activity, have had less time to accumulate craters. For example, the crater-saturated highlands on the Moon are older than the less-cratered maria. Another use for this work is to calibrate models used to determine the bombardment history of the Moon. For this task, scientists need a distribution of crater sizes on the real lunar surface.

So how good are the volunteer Moon Mappers at characterizing crater densities and size distributions? For that matter, how good are the experts?

Today’s study attempts to answer these questions by having a group of experts analyze images of the Moon from the Lunar Reconnaissance Orbiter Camera. Eight experts participated in the study, analyzing two images. The first image captured a variety of terrain types (both mare and highlands). The second image had already been scoured by Moon Mappers volunteers.

Results

Fig. 2: One of the two images of the lunar surface used in this study. The top panel on the left shows the experts’ clusters, a different color for each expert. The bottom panel on the left shows volunteers’ clusters, all in red. The zoomed-in images to the right show a handful of craters of varying degrees of degradation. As expected, there is a larger spread visible for the volunteers’ clusters. (source: Robbins et al.)

The authors find a 10%-35% disagreement between experts on the number of craters of a given size. The lunar highlands yield the greatest dispersion: they are old and have many degraded features. The mare regions, where the craters are young and well-preserved, yield more consistent counts.

To examine how well analysts agree on the size and location of a given crater, the authors employ a clustering algorithm. To find a cluster the algorithm searches the datasets for craters within some distance threshold of others. The distance threshold is scaled by crater diameter so that, for example, if two analysts marked craters with diameters of ~10 px, and centers 15 px apart, these are considered unique. But if they both marked craters with diameters of ~100 px, and centers 15 px apart, these are considered the same. A final catalog is compiled by excluding the ‘craters’ that only a few analysts found. See Fig. 2 to the right for an example of crater clusters from the experts (top panel) and the volunteers (bottom panel).
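To make the diameter-scaled threshold concrete, here is a minimal Python sketch of this kind of clustering. The scale factor of 0.5 is chosen so that it reproduces the worked example above (10 px craters 15 px apart stay separate, 100 px craters 15 px apart merge), and the minimum-analyst cut of 3 is an illustrative assumption; the paper's actual algorithm and thresholds differ in detail.

    from collections import defaultdict

    def cluster_craters(marks, scale=0.5, min_analysts=3):
        # marks: list of (analyst_id, x, y, diameter) tuples, all in pixels.
        # Two marks are linked when their centers are closer than scale times
        # their mean diameter; linked marks form one crater candidate, and
        # candidates marked by fewer than min_analysts analysts are dropped.
        n = len(marks)
        parent = list(range(n))          # union-find forest over the marks

        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]
                i = parent[i]
            return i

        def union(i, j):
            parent[find(i)] = find(j)

        for i in range(n):
            for j in range(i + 1, n):
                _, xi, yi, di = marks[i]
                _, xj, yj, dj = marks[j]
                threshold = scale * 0.5 * (di + dj)
                if (xi - xj) ** 2 + (yi - yj) ** 2 < threshold ** 2:
                    union(i, j)

        clusters = defaultdict(list)
        for i in range(n):
            clusters[find(i)].append(marks[i])

        catalog = []
        for members in clusters.values():
            analysts = {m[0] for m in members}
            if len(analysts) >= min_analysts:
                x = sum(m[1] for m in members) / len(members)
                y = sum(m[2] for m in members) / len(members)
                d = sum(m[3] for m in members) / len(members)
                catalog.append((x, y, d))   # averaged position and diameter
        return catalog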

Fig. 3: The top panel shows the number of craters larger than a given diameter (horizontal axis), as determined by different analysts. A different color represents each analyst, and in some cases the same analyst using several different crater-counting techniques. The light gray line shows the catalog generated by clustering the volunteers’ datasets. It falls well-within the variations between experts. The bottom panel shows relative deviations from a power-law distribution. (source: Robbins et al.)

The authors find that the experts are in better agreement than the volunteers for any given crater’s diameter and location. This isn’t surprising. The experts have seen many more craters, in many different lighting conditions. And the experts used their own software tools, allowing them to zoom in and change contrast in the image. The Moon Mappers web-based interface is much less powerful.

Finally, the authors find that the size distributions computed from the volunteers' clustered dataset fall well within the range of expert analysts' size distributions. Fig. 3 demonstrates this.

In conclusion, the analysis of crater size distributions on a given surface can be done as accurately by a handful of volunteers as by a handful of experts. Furthermore, ages based on counting craters are almost always reported with underestimated errors: they don't take into account the inherent variation amongst analysts. Properly accounting for errors of this type gives uncertainties of a few hundred million years for surface ages of a few billion years. However, this study shows that the uncertainty is smaller when a group of analysts contributes to the count.

Consider becoming a Moon Mapper, Vesta Mapper, or Mercury Mapper yourself!

by Brett Deaton at April 17, 2014 05:05 AM

April 16, 2014

Symmetrybreaking - Fermilab/SLAC

Letter to the editor: Oldest light?

Reader Bill Principe raises an interesting question about the headline of a recent symmetry article.

Dear symmetry,

I am not a physicist, so forgive me if I get my physics wrong.

The most recent issue has an article called “The oldest light in the universe.”

April 16, 2014 11:34 PM

Sean Carroll - Preposterous Universe

Twenty-First Century Science Writers

I was very flattered to find myself on someone’s list of Top Ten 21st Century Science Non-Fiction Writers. (Unless they meant my evil twin. Grrr.)

However, as flattered as I am — and as much as I want to celebrate rather than stomp on someone’s enthusiasm for reading about science — the list is on the wrong track. One way of seeing this is that there are no women on the list at all. That would be one thing if it were a list of Top Ten 19th Century Physicists or something — back in the day, the barriers of sexism were (even) higher than they are now, and women were systematically excluded from endeavors such as science with a ruthless efficiency. And such barriers are still around. But in science writing, here in the 21st century, the ladies are totally taking over, and creating an all-dudes list of this form is pretty blatantly wrong.

I would love to propose a counter-list, but there’s something inherently subjective and unsatisfying about ranking people. So instead, I hereby offer this:

List of Ten or More Twenty-First Century Science Communicators of Various Forms Who Are Really Good, All of Whom Happen to be Women, Pulled Randomly From My Twitter Feed and Presented in No Particular Order.

I’m sure it wouldn’t take someone else very long to come up with a list of female science communicators that was equally long and equally distinguished. Heck, I’m sure I could if I put a bit of thought into it. Heartfelt apologies for the many great people I left out.

by Sean Carroll at April 16, 2014 10:02 PM

John Baez - Azimuth

New IPCC Report (Part 6)

guest post by Steve Easterbrook

(6) We have to choose which future we want very soon.

In the previous IPCC reports, projections of future climate change were based on a set of scenarios that mapped out different ways in which human society might develop over the rest of this century, taking account of likely changes in population, economic development and technological innovation. However, none of the old scenarios took into account the impact of strong global efforts at climate mitigation. In other words, they all represented futures in which we don’t take serious action on climate change. For this report, the new ‘Representative Concentration Pathways’ (RCPs) have been chosen to allow us to explore the choice we face.

This chart sums it up nicely. If we do nothing about climate change, we’re choosing a path that will look most like RCP8.5. Recall that this is the one where emissions keep rising just as they have done throughout the 20th century. On the other hand, if we get serious about curbing emissions, we’ll end up in a future that’s probably somewhere between RCP2.6 and RCP4.5 (the two blue lines). All of these futures give us a much warmer planet. All of these futures will involve many challenges as we adapt to life on a warmer planet. But by curbing emissions soon, we can minimize this future warming.

(Fig 12.5) Time series of global annual mean surface air temperature anomalies (relative to 1986–2005) from CMIP5 concentration-driven experiments. Projections are shown for each RCP for the multi model mean (solid lines) and the 5–95% range (±1.64 standard deviation) across the distribution of individual models (shading). Discontinuities at 2100 are due to different numbers of models performing the extension runs beyond the 21st century and have no physical meaning. Only one ensemble member is used from each model and numbers in the figure indicate the number of different models contributing to the different time periods. No ranges are given for the RCP6.0 projections beyond 2100 as only two models are available.

Note also that the uncertainty range (the shaded region) is much bigger for RCP8.5 than it is for the other scenarios. The more the climate changes beyond what we’ve experienced in the recent past, the harder it is to predict what will happen. We tend to use the spread across different models as an indication of uncertainty (the coloured numbers show how many different models participated in each experiment). But there’s also the possibility of ‘unknown unknowns’—surprises that aren’t in the models, so the uncertainty range is likely to be even bigger than this graph shows.
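The caption's "5–95% range (±1.64 standard deviation)" is just a normal-approximation percentile band across the model ensemble. A minimal sketch of how such a band is computed, with made-up numbers rather than CMIP5 output:

    import statistics

    def model_range(anomalies):
        # anomalies: one temperature anomaly (deg C, relative to 1986-2005) per model
        mean = statistics.mean(anomalies)
        sd = statistics.stdev(anomalies)        # spread across the models
        return mean, mean - 1.64 * sd, mean + 1.64 * sd

    example = [3.2, 4.1, 3.7, 4.8, 3.9, 4.4]    # hypothetical end-of-century anomalies
    mean, lo, hi = model_range(example)
    print("multi-model mean: %.1f C, 5-95%% range: %.1f to %.1f C" % (mean, lo, hi))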


You can download all of Climate Change 2013: The Physical Science Basis here. Click below to read any part of this series:

  1. The warming is unequivocal.
  2. Humans caused the majority of it.
  3. The warming is largely irreversible.
  4. Most of the heat is going into the oceans.
  5. Current rates of ocean acidification are unprecedented.
  6. We have to choose which future we want very soon.
  7. To stay below 2°C of warming, the world must become carbon negative.
  8. To stay below 2°C of warming, most fossil fuels must stay buried in the ground.

Climate Change 2013: The Physical Science Basis is also available chapter by chapter here:

  1. Front Matter
  2. Summary for Policymakers
  3. Technical Summary
    1. Supplementary Material

Chapters

  1. Introduction
  2. Observations: Atmosphere and Surface
    1. Supplementary Material
  3. Observations: Ocean
  4. Observations: Cryosphere
    1. Supplementary Material
  5. Information from Paleoclimate Archives
  6. Carbon and Other Biogeochemical Cycles
    1. Supplementary Material
  7. Clouds and Aerosols

    1. Supplementary Material
  8. Anthropogenic and Natural Radiative Forcing
    1. Supplementary Material
  9. Evaluation of Climate Models
  10. Detection and Attribution of Climate Change: from Global to Regional
    1. Supplementary Material
  11. Near-term Climate Change: Projections and Predictability
  12. Long-term Climate Change: Projections, Commitments and Irreversibility
  13. Sea Level Change
    1. Supplementary Material
  14. Climate Phenomena and their Relevance for Future Regional Climate Change
    1. Supplementary Material

Annexes

  1. Annex I: Atlas of Global and Regional Climate Projections
    1. Supplementary Material: RCP2.6, RCP4.5, RCP6.0, RCP8.5
  2. Annex II: Climate System Scenario Tables
  3. Annex III: Glossary
  4. Annex IV: Acronyms
  5. Annex V: Contributors to the WGI Fifth Assessment Report
  6. Annex VI: Expert Reviewers of the WGI Fifth Assessment Report

by John Baez at April 16, 2014 02:11 PM

arXiv blog

Hidden Vulnerability Discovered in the World's Airline Network

The global network of links between the world’s airports looks robust but contains a hidden weakness that could lead to entire regions of the planet being cut off.

April 16, 2014 02:00 PM

CERN Bulletin

Voice over IP phone calls from your smartphone
All CERN users have a Lync account (see here) and can use Instant Messaging, presence and other features. In addition, if your number is activated on the Lync IP Phone(1) system, you can make standard phone calls from your computer (Windows/Mac).

Recently, we upgraded the infrastructure to Lync 2013. One of the major new features is the possibility to make Voice over IP phone calls from a smartphone using your standard CERN phone number (not your mobile number!). Install Lync 2013 on iPhone/iPad, Android or Windows Phone, connect to a WiFi network and make phone calls as if you were in your office. There are no roaming charges because you will be using WiFi to connect to the CERN phone system(2).

Register here for the presentation on Tuesday 29 April at 11 a.m. in the Technical Training Center and see the most exciting features of Lync 2013.

Looking forward to seeing you!

The Lync team

(1) How to register on the Lync IP Phone system: http://information-technology.web.cern.ch/book/lync-ip-phone-service/how-register
(2) People activated on the Lync IP Phone system can make Voice over IP phone calls from the Lync application.

April 16, 2014 09:39 AM

Lubos Motl - string vacua and pheno

Another anti-physics issue of SciAm
High energy physics is undoubtedly the queen and the ultimate reductionist root of all natural sciences. Nevertheless, during the last decade, it has become immensely fashionable for many people to boast that they're physics haters.

The cover of the upcoming May 2014 issue of Scientific American looks doubly scary for every physicist who has been harassed by the communist regime. It resembles a Soviet flag with some deeply misleading propaganda written over it:
A crisis in physics?

If supersymmetry doesn't pan out, scientists need a new way to explain the universe. [In between the lines]
Every part of this claim is pure bullshit, of course. First of all, there is no "crisis in physics". Second of all, chances are high that we won't be any more certain whether SUSY is realized in Nature. Either SUSY will be found at the LHC in 2015 or soon afterwards, or it won't be. In the latter case, the status of SUSY will remain qualitatively the same as it is now. Top-down theorists will continue to be pretty much certain that SUSY exists in Nature in one form or another, at one scale or another; bottom-up phenomenologists and experimenters will increasingly notice the absence of evidence – which is something different from evidence of absence, however.

But aside from this delusion, the second part of the second sentence is totally misguided, too. Supersymmetry isn't a "new way to explain the universe". It is another symmetry, one that differs from some other well-known symmetries such as the rotational or Lorentz symmetry by its having fermionic generators but one that doesn't differ when it comes to its being just one aspect of theories. Supersymmetry isn't a theory of the universe by itself (in the same sense as the Standard Model or string theory); supersymmetry is a feature of some candidate theories of the universe.




To make sure that the hostility is repeated (a lie repeated 100 times becomes the truth, as she learned from Mr Goebbels), editor-in-chief Ms Mariette DiChristina introduces the May 2014 issue under the following title:
Does Physics Have a Problem?
What does it even mean for a scientific discipline to have a problem? Claims in science are either right or wrong. Some theories turn out to be right (at least temporarily), others turn out to be wrong. Some theories are viable and compatible with the evidence, others have been falsified. Some scientists are authors of right and/or important and valuable theories, others are authors of wrong ones or no theories at all.

Some classes of questions are considered settled so they are not being researched as "hot topics" anymore; others are behind the frontier where the scientists don't know the answers (and sometimes the questions): they are increasingly confused by the questions behind the frontier. This separation of the realm of questions by a fuzzy frontier of ignorance is a feature of science that applies to every scientific discipline and every moment of its history. One could argue that there can't be "crises in physics" at all, but it's doubly bizarre to use this weird word for the current era, which is as ordinary an era of normal science as one can get.




The main article about popular physics was written by experimenter Maria Spiropulu (CMS, Caltech) and phenomenologist Joseph Lykken (a self-described very smart guy at Fermilab). They're very interesting and sensible folks but I would have objections to many things they wrote down and I think that the same thing holds for most high energy physicists.

They say that most HEP physicists believe that SUSY is true but add:
Indeed, results from the first run of the LHC have ruled out almost all the best-studied versions of supersymmetry. The negative results are beginning to produce if not a full-blown crisis in particle physics, then at least a widespread panic. The LHC will be starting its next run in early 2015, at the highest energies it was designed for, allowing researchers at the ATLAS and CMS experiments to uncover (or rule out) even more massive superpartners. If at the end of that run nothing new shows up, fundamental physics will face a crossroads: either abandon the work of a generation for want of evidence that nature plays by our rules, or press on and hope that an even larger collider will someday, somewhere, find evidence that we were right all along…
I don't have – and I have never had – any strong preference concerning the masses of superpartners i.e. the accessibility of SUSY by the collider experiments. All of them could have been below 100 GeV but they may be at 100 TeV or near the GUT scale, too. Naturalness suggests that they (especially the top squarks, higgsinos, and perhaps gluinos) are closer to the Higgs mass but it is just a vague argument based on Bayesian reasoning that is moreover tied to some specific enough models. Any modification of the SUSY model changes the quantification of the fine-tuning.
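For concreteness, and only as a reminder of what "quantification of the fine-tuning" usually refers to (the article itself does not spell it out), a commonly used sensitivity measure is the Barbieri–Giudice one,

\[
\Delta \;=\; \max_i \left| \frac{\partial \ln m_Z^2}{\partial \ln p_i} \right| ,
\]

where the \(p_i\) are the model's input parameters (the \(\mu\)-term, the soft masses, and so on). A model with \(\Delta \sim 1000\) is "tuned at the 1-in-1,000 level" in the sense used below, and different choices of parameters and input scale give different values of \(\Delta\), which is exactly the model-dependence being discussed.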

But even if it doesn't, the word "natural" is a flexible adjective. If the amount of fine-tuning increases, the model doesn't become unnatural instantly. It is a gradual change. What I find preposterous is the idea presented by the authors that "if the 2015 LHC run finds no proof of SUSY, fundamental physics will face a crossroads; it will either abandon the work altogether or press for a bigger collider".

You can make a 2016 New Year's resolution and say that you will stop thinking about SUSY if there is no evidence from the LHC for SUSY by that time. You may even establish a sect within high energy physics that will share this New Year's resolution with you. But it is just a New Year's resolution, not a science or a decision "implied" by the evidence. There will be other people who will consider your group's New Year's resolution to be premature and just downright stupid. Physics isn't organized by deadlines or five-year plans.

Other people will keep on working on some SUSY models because these models will be attractive and compatible with all the evidence available at that moment. Even if SUSY were experimentally proven to require a 1-in-1,000 fine-tuning – and it really can't be due to the model-dependence of the fine-tuning scores – most people will still rationally think that a 1-in-1,000 fine-tuning is better than the 1-in-1,000,000,000,000,000,000,000,000,000,000 fine-tuning apparently required by the Standard Model. Maria and Joseph know that it is so. In fact, they explicitly mention the "prepared reaction" by Nima Arkani-Hamed that Nima presented in Santa Barbara:
What if supersymmetry is not found at the LHC, [Nima] asked, before answering his own question: then we will make new supersymmetry models that put the superpartners just beyond the reach of the experiments. But wouldn’t that mean that we would be changing our story? That’s okay; theorists don’t need to be consistent—only their theories do.
If SUSY looks attractive enough, of course phenomenologists will ignore the previous fashionable beliefs about the lightness of the superpartners and (invent and) focus on new models that are compatible with all the evidence at that moment. The relative fraction of hep-ph papers dedicated to SUSY model building may decrease if the absence of evidence continues, but only gradually, simply because there are no major enough alternatives that could completely squeeze out SUSY research. There can't really be any paradigm shift if the status quo continues. You either need some new experimental discoveries or some new theoretical discoveries for a paradigm shift.
This unshakable fidelity to supersymmetry is widely shared. Particle theorists do admit, however, that the idea of natural supersymmetry is already in trouble and is headed for the dustbin of history unless superpartners are discovered soon…
The word "natural" has several meanings and the important differences between these meanings are being (deliberately?) obfuscated by this sentence. It is almost a tautology that any theory that ultimately describes Nature accurately is "natural". But as long as we are ignorant about all the details of the final theory and how it describes Nature, we must be satisfied with approximate and potentially treacherous but operationally applicable definitions of "naturalness". In effective field theory, we assume that the parameters (at the high energy scale) are more or less uniformly distributed in a set and classify very special, unlikely (by this probability distribution) regions as "unnatural" (typically very small values of some dimensionless parameters that could be of order one).

But the ultimate theory has different rules for how to calculate the "probability distribution for the parameters". After all, string theory implies discrete values of all the parameters, so with some discrete information, we may sharpen the probability distribution for low-energy parameters to a higher-dimensional delta-function. We can just calculate the values of all the parameters. The values may be generic or natural according to some sensible enough smooth probability distribution (e.g. in an effective field theory). But if the effective field theory description overlooks some important new particles, interactions, patterns, or symmetries, it may be unnatural, too.

It's important to realize that our ways to estimate whether some values of parameters in some theories are natural are model-dependent and therefore bound to evolve. It is just completely wrong for Maria and Joseph to impose some ideas about physics from some year – 2000 or whatever is the "paradigm" they want everyone to be stuck at – and ban any progress of the thinking. Scientists' thinking inevitably evolves. That's why the scientific research is being done in the first place. So new evidence – including null results – is constantly being taken into account as physicists are adjusting their subjective probabilities of various theories and models, and of various values of parameters within these models.

This process will undoubtedly continue in 2015 and 2016 and later, too. At least, sensible people will continue to adjust their beliefs. If you allow me to say a similar thing to what Nima did: theorists are not only allowed to present theories that are incompatible with some of their previous theories or beliefs. They are really obliged to adjust their beliefs – and even at one moment, a sensible enough theorist may (and perhaps should) really be thinking about many possible theories, models, and paradigms. Someone whose expectations turn out to be more accurate and nontrivially agreeing with the later observations should become more famous than others. But it is not a shame to update the probabilities of theories according to the new evidence. It's one of the basic duties that a scientist has to do!

I also feel that the article hasn't taken the BICEP2 results into account and for those reasons, it will already be heavily obsolete when the issue of Scientific American is out. They try to interpret the null results from the LHC as an argument against grand unification or similar physics at the GUT scale. But nothing like that follows from the null results at the LHC and in fact, the BICEP2's primordial gravitational waves bring us quite powerful evidence – if not a proof – that new interesting physics is taking place near the usual GUT scale i.e. not so far from the standard four-dimensional Planck scale.

So in the absence of the SM-violating collider data, the status quo will pretty much continue and the only other way to change it is to propose some so far overlooked alternative paradigm to SUSY that will clarify similar puzzles – or at least a comparable number of puzzles – as SUSY. It is totally plausible that bottom-up particle model builders will have to work with the absence of new collider discoveries – top-down theorists have worked without them for decades, anyway. It works and one can find – and string theorists have found – groundbreaking things in this way, too.

What I really dislike about the article is that – much like articles by many non-physicists – it tries to irrationally single out SUSY as a scapegoat. Even if one should panic about the null results from the LHC, and one shouldn't, these results would be putting pressure on every model or theory or paradigm of bottom-up physics that goes beyond the Standard Model. In fact, SUSY theories are still among the "least constrained ones" among all paradigms that try to postulate some (motivated by something) new physics at low enough energy scales. That's the actual reason why the events cannot rationally justify the elimination or severe reduction of SUSY research as a percentage of hep-ph research.

If someone thinks that it's pointless to do physics without new guaranteed enough experimental discoveries and this kind of physics looks like a "problem" or "crisis" to him or her, he or she should probably better leave physics. Those who would be left are looking for more than just the superficial gloss and low-hanging fruits. The number of HEP experimenters and phenomenologists building their work on a wishful thinking of many collider discoveries in the near future is arguably too high, anyway. But there are other, more emotion-independent approaches to physics that are doing very well.

by Luboš Motl (noreply@blogger.com) at April 16, 2014 09:37 AM

April 15, 2014

Quantum Diaries

Ten things you might not know about particle accelerators

A version of this article appeared in symmetry on April 14, 2014.

From accelerators unexpectedly beneath your feet to a ferret that once cleaned accelerator components, symmetry shares some lesser-known facts about particle accelerators. Image: Sandbox Studio, Chicago

From accelerators unexpectedly beneath your feet to a ferret that once cleaned accelerator components, symmetry shares some lesser-known facts about particle accelerators. Image: Sandbox Studio, Chicago

The Large Hadron Collider at CERN laboratory has made its way into popular culture: Comedian Jon Stewart jokes about it on The Daily Show, character Sheldon Cooper dreams about it on The Big Bang Theory and fictional villains steal fictional antimatter from it in Angels & Demons.

Despite their uptick in popularity, particle accelerators still have secrets to share. With input from scientists at laboratories and institutions worldwide, symmetry has compiled a list of 10 things you might not know about particle accelerators.

There are more than 30,000 accelerators in operation around the world.

Accelerators are all over the place, doing a variety of jobs. They may be best known for their role in particle physics research, but their other talents include: creating tumor-destroying beams to fight cancer; killing bacteria to prevent food-borne illnesses; developing better materials to produce more effective diapers and shrink wrap; and helping scientists improve fuel injection to make more efficient vehicles.

One of the longest modern buildings in the world was built for a particle accelerator.

Linear accelerators, or linacs for short, are designed to hurl a beam of particles in a straight line. In general, the longer the linac, the more powerful the particle punch. The linear accelerator at SLAC National Accelerator Laboratory, near San Francisco, is the largest on the planet.

SLAC’s klystron gallery, a building that houses components that power the accelerator, sits atop the accelerator. It’s one of the world’s longest modern buildings. Overall, it’s a little less than 2 miles long, a feature that prompts laboratory employees to hold an annual footrace around its perimeter.

Particle accelerators are the closest things we have to time machines, according to Stephen Hawking.

In 2010, physicist Stephen Hawking wrote an article for the UK paper the Daily Mail explaining how it might be possible to travel through time. We would just need a particle accelerator large enough to accelerate humans the way we accelerate particles, he said.

A person-accelerator with the capabilities of the Large Hadron Collider would move its passengers at close to the speed of light. Because of the effects of special relativity, a period of time that would appear to someone outside the machine to last several years would seem to the accelerating passengers to last only a few days. By the time they stepped off the LHC ride, they would be younger than the rest of us.

Hawking wasn’t actually proposing we try to build such a machine. But he was pointing out a way that time travel already happens today. For example, particles called pi mesons are normally short-lived; they disintegrate after mere millionths of a second. But when they are accelerated to nearly the speed of light, their lifetimes expand dramatically. It seems that these particles are traveling in time, or at least experiencing time more slowly relative to other particles.
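The arithmetic behind the pion example is just the Lorentz factor. As a quick illustration (the speed below is chosen for round numbers and is not taken from the article):

\[
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}, \qquad \tau_{\rm lab} = \gamma\, \tau_0 ,
\]

so at \(v = 0.999\,c\) one has \(\gamma \approx 22\), and a charged pion with proper lifetime \(\tau_0 \approx 2.6\times 10^{-8}\,\mathrm{s}\) survives for roughly \(6\times 10^{-7}\,\mathrm{s}\) in the lab frame, travelling correspondingly farther before it decays.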

The highest temperature recorded by a manmade device was achieved in a particle accelerator.

In 2012, Brookhaven National Laboratory’s Relativistic Heavy Ion Collider achieved a Guinness World Record for producing the world’s hottest manmade temperature, a blazing 7.2 trillion degrees Fahrenheit. But the Long Island-based lab did more than heat things up. It created a small amount of quark-gluon plasma, a state of matter thought to have dominated the universe’s earliest moments. This plasma is so hot that it causes elementary particles called quarks, which generally exist in nature only bound to other quarks, to break apart from one another.

Scientists at CERN have since also created quark-gluon plasma, at an even higher temperature, in the Large Hadron Collider.

The inside of the Large Hadron Collider is colder than outer space.

In order to conduct electricity without resistance, the Large Hadron Collider’s electromagnets are cooled down to cryogenic temperatures. The LHC is the largest cryogenic system in the world, and it operates at a frosty minus 456.3 degrees Fahrenheit. It is one of the coldest places on Earth, and it’s even a few degrees colder than outer space, which tends to rest at about minus 454.9 degrees Fahrenheit.

Nature produces particle accelerators much more powerful than anything made on Earth.

We can build some pretty impressive particle accelerators on Earth, but when it comes to achieving high energies, we’ve got nothing on particle accelerators that exist naturally in space.

The most energetic cosmic ray ever observed was a proton accelerated to an energy of 300 million trillion electronvolts. No known source within our galaxy is powerful enough to have caused such an acceleration. Even the shockwave from the explosion of a star, which can send particles flying much more forcefully than a manmade accelerator, doesn’t quite have enough oomph. Scientists are still investigating the source of such ultra-high-energy cosmic rays.

Particle accelerators don’t just accelerate particles; they also make them more massive.

As Einstein predicted in his theory of relativity, no particle that has mass can travel as fast as the speed of light—about 186,000 miles per second. No matter how much energy one adds to an object with mass, its speed cannot reach that limit.

In modern accelerators, particles are sped up to very nearly the speed of light. For example, the main injector at Fermi National Accelerator Laboratory accelerates protons to 0.99997 times the speed of light. As the speed of a particle gets closer and closer to the speed of light, an accelerator gives more and more of its boost to the particle’s kinetic energy.

Since, as Einstein told us, an object’s energy is equal to its mass times the speed of light squared (E = mc²), adding energy is, in effect, also increasing the particles’ mass. Said another way: Where there is more “E,” there must be more “m.” As an object with mass approaches, but never reaches, the speed of light, its effective mass gets larger and larger.
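As a small numerical illustration of that statement (using the 0.99997 figure quoted above and the standard proton rest energy; this is just the textbook formula, not anything specific to Fermilab's machines):

    import math

    m_p_GeV = 0.938272       # proton rest energy, m*c^2, in GeV
    beta = 0.99997           # speed as a fraction of c, as quoted above

    gamma = 1.0 / math.sqrt(1.0 - beta**2)   # Lorentz factor = total energy / rest energy
    print("gamma (effective mass in units of the rest mass):", round(gamma, 1))
    print("total energy:", round(gamma * m_p_GeV, 1), "GeV")

The Lorentz factor comes out near 130, i.e. the proton's energy (and effective mass) is roughly 130 times its rest value.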

The diameter of the first circular accelerator was shorter than 5 inches; the diameter of the Large Hadron Collider is more than 5 miles.

In 1930, inspired by the ideas of Norwegian engineer Rolf Widerøe, 27-year-old physicist Ernest Lawrence created the first circular particle accelerator at the University of California, Berkeley, with graduate student M. Stanley Livingston. It accelerated hydrogen ions up to energies of 80,000 electronvolts within a chamber less than 5 inches across.

In 1931, Lawrence and Livingston set to work on an 11-inch accelerator. The machine managed to accelerate protons to just over 1 million electronvolts, a fact that Livingston reported to Lawrence by telegram with the added comment, “Whoopee!” Lawrence went on to build even larger accelerators—and to found Lawrence Berkeley and Lawrence Livermore laboratories.

Particle accelerators have come a long way since then, creating brighter beams of particles with greater energies than previously imagined possible. The Large Hadron Collider at CERN is more than 5 miles in diameter (17 miles in circumference). After this year’s upgrades, the LHC will be able to accelerate protons to 6.5 trillion electronvolts.

In the 1970s, scientists at Fermi National Accelerator Laboratory employed a ferret named Felicia to clean accelerator parts.

From 1971 until 1999, Fermilab’s Meson Laboratory was a key part of high-energy physics experiments at the laboratory. To learn more about the forces that hold our universe together, scientists there studied subatomic particles called mesons and protons. Operators would send beams of particles from an accelerator to the Meson Lab via a miles-long underground beam line.

To ensure hundreds of feet of vacuum piping were clear of debris before connecting them and turning on the particle beam, the laboratory enlisted the help of one Felicia the ferret.

Ferrets have an affinity for burrowing and clambering through holes, making them the perfect species for this job. Felicia’s task was to pull a rag dipped in cleaning solution on a string through long sections of pipe.

Although Felicia’s work was eventually taken over by a specially designed robot, she played a unique and vital role in the construction process—and in return asked only for a steady diet of chicken livers, fish heads and hamburger meat.

Particle accelerators show up in unlikely places.

Scientists tend to construct large particle accelerators underground. This protects them from being bumped and destabilized, but can also make them a little harder to find.

For example, motorists driving down Interstate 280 in northern California may not notice it, but the main accelerator at SLAC National Accelerator Laboratory runs underground just beneath their wheels.

Residents in villages in the Swiss-French countryside live atop the highest-energy particle collider in the world, the Large Hadron Collider.

And for decades, teams at Cornell University have played soccer, football and lacrosse on Robison Alumni Fields 40 feet above the Cornell Electron Storage Ring, or CESR. Scientists use the circular particle accelerator to study compact particle beams and to produce X-ray light for experiments in biology, materials science and physics.

Sarah Witman

by Fermilab at April 15, 2014 09:34 PM

Symmetrybreaking - Fermilab/SLAC

Ten things you might not know about particle accelerators

From accelerators unexpectedly beneath your feet to a ferret that once cleaned accelerator components, symmetry shares some lesser-known facts about particle accelerators.

The Large Hadron Collider at CERN laboratory has made its way into popular culture: Comedian Jon Stewart jokes about it on The Daily Show, character Sheldon Cooper dreams about it on The Big Bang Theory and fictional villains steal fictional antimatter from it in Angels & Demons.

by Sarah Witman at April 15, 2014 07:59 PM

Sean Carroll - Preposterous Universe

Talks on God and Cosmology

Hey, remember the debate I had with William Lane Craig, on God and Cosmology? (Full video here, my reflections here.) That was on a Friday night, and on Saturday morning the event continued with talks from four other speakers, along with responses by WLC and me. At long last these Saturday talks have appeared on YouTube, so here they are!

First up was Tim Maudlin, who usually focuses on philosophy of physics but took the opportunity to talk about the implications of God’s existence for morality. (Namely, he thinks there aren’t any.)

Then we had Robin Collins, who argued for a new spin on the fine-tuning argument, saying that the universe is constructed to allow for it to be discoverable.

Back to Team Naturalism, Alex Rosenberg explains how the appearance of “design” in nature is well-explained by impersonal laws of physics.

Finally, James Sinclair offered thoughts on the origin of time and the universe.

To wrap everything up, the five of us participated in a post-debate Q&A session.

Enough debating for me for a while! Oh no, wait: on May 7 I’ll be in New York, debating whether there is life after death. (Spoiler alert: no.)

by Sean Carroll at April 15, 2014 03:16 PM

Lubos Motl - string vacua and pheno

Podcast with Lisa Randall on inflation, Higgs, LHC, DM, awe
I want to offer you yesterday's 30-minute podcast of Huffington Post's David Freeman with Lisa Randall of Harvard
Podcast with Randall (audio over there)
The audio format is thanks to RobinHoodRadio.COM.

They talk about inflation, the BICEP2 discovery, the Higgs boson vs the Higgs field, the LHC, its tunnels, and the risk that the collider would create deadly black holes.




I think her comments are great, I agree with virtually everything, including the tone.




Well, I am not sure what she means by the early inflationary models' looking contrived but that's just half a sentence of a minor disagreement – which may become a major one, of course, if some people focus on this topic.

She is asked about the difference between the Big Bang and inflation, and about the Higgs boson vs. the Higgs field (which gives masses to other particles). The host asks about the size of the LHC; it is sort of bizarre because the photographs of the LHC have been everywhere in the media and they're very accessible, so why would one ask about the size of the tunnel again?

The host also said that there would be "concerns" that the LHC would have created a hungry black hole that would devour our blue, not green planet. I liked Lisa's combative reply: the comment had to be corrected. There were concerns but only among the people who didn't have a clue. The actual calculations of safety – something that scientists are sort of obliged to perform before they do an experiment – end up with the result that we're safe as the rate of such accidents is lower than "one per the age of the universe". It's actually much lower than that but even that should be enough.

They also talk about the multiverse. Lisa says that she's not among those who are greatly interested in the multiverse ideas – she's more focused on things we can measure – but of course there may be other universes. Just because we haven't seen them doesn't mean that they don't exist. (She loves to make the same point when it comes to dark matter.)

What comes at the end of the universe? She explains that a compact space – a balloon – is free of troubles. The host says the usual thing that laymen always say: the balloon is expanding into something, some preexisting space. But in the case of the universe, there is simply nothing outside it, Lisa warns. The balloon is the whole story. I have some minor understanding for this problem of the laymen because when I was 8, I also had the inclination to imagine that the curved spacetime of general relativity (from the popular articles and TV shows) had to be embedded into some larger, flat one. But this temptation went away a year later or so. The Riemannian geometry is meant to describe "all of space" and it allows curvature. To embed the space into a higher-dimensional flat one is a way (and not the only way) to visualize the curvature but these extra "crutches" are not necessarily physical. And in fact, they are not physical in our real universe.

Now, is dark matter the same thing as antimatter? Based on the frequency at which I have heard this question, I believe that every third layman must be asking the very same question. So Lisa has to say that antimatter is charged and qualitatively behaves just like ordinary matter – and they annihilate – while dark matter has to be new. Is dark matter made of black holes? Every 10th layman has this idea. It's actually an a priori viable one that needs some discussion. One has to look for "small astrophysical objects as dark matter". They would cause some gravitational lensing which is not seen.

So what is dark energy? It's something that is not localizable "stuff". Dark energy is smoothly spread everywhere. Absolute energy matters, Einstein found out. And the cosmological constant (C.C.) accelerates the expansion of the universe. Can the experiments find dark energy and dark matter? Not dark energy but possibly dark matter. It could be a bigger deal than the Higgs boson.
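The standard way to see that last point, added here only as a one-line aside, is the acceleration form of the Friedmann equations,

\[
\frac{\ddot a}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^2}\right) ;
\]

a cosmological constant behaves like a fluid with \(p = -\rho c^2\), so its contribution to the right-hand side is positive and the expansion accelerates.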

The LHC is being upgraded and will reopen for collision business in a year. No one believes that the Higgs boson is everything there is, but it is not clear that the other things are achievable by the LHC.

Lisa is now working on dark matter. Lots of theoretical ideas. Dark matter with a more strongly interacting component.

What is it good for? The electron seemed to be useless, too. So there may be unexpected applications. But applications are not the main motivation. She is also asked about being religious. She is not religious and for her, science isn't about the "sense of awe". So she is not religious even in the most general sense. Ultimately, science wants to understand things that clarify the "awe", that make the magnificent things look accessible. It is about solving puzzles and the satisfaction arises from the understanding, from the feeling that things fit together.

The host says that because she writes popular books, she must present the "sense of wonder". Lisa protests again. My books are about science, not the awe! :-) There is clearly a widespread feeling among the laymen that scientists are obliged to lick the buttocks of the stupid laymen in some particular ways. To constantly "admit" (more precisely, to constantly lie) that science knows nothing and spread religious feelings. But scientists are not obliged to do any of these things and in fact, they shouldn't do these things. A good popular book is one that attracts the reader into genuine science – the organized process of learning the truth about Nature – and that communicates some correct science (principles, methods, or results) to the readers. If science implies that the people who are afraid of the destruction of the world by the LHC are imbeciles, and be sure that science does imply that, a good popular scientific book must nicely articulate this point. A good popular scientific book is not one that reinforces the reader's spiritual or even anti-scientific preconceptions (although the book that does reinforce them may be generously praised by the stupid readers and critics).

Is it possible to convey the science without maths? Lisa tends to answer Yes because she appreciates classical music although she has never studied it. But she could still learn something about it from the books, although less than the professional musicians. So it doesn't have to be "all or nothing". People still learn some science even if they don't learn everything. And readers of her book, she believes, may come from many layers and learn the content to various degrees of depth and detail.

There's lots of talk about America's falling behind in STEM fields. LOL, exactly, there is a lot of talk, Lisa replies. 50 years ago, people were inspired by the space research. But the host tries to suggest that there is nothing inspiring in physics or science now or something like that. Lisa says that there are tons of awe-inspiring things – perhaps too many.

What is the most awe-inspiring fact, Lisa is asked? She answers that it's the body and size of all the things we have understood in the last century or so. Nebulae turned out to be galaxies, the host is amazed. Lisa talks about such cosmological insights for a while.



Incidentally, on Sunday, we finally went to Pilsner Techmania's 3D planetarium. We watched the Astronaut 3D program (trailed above: a movie about all the training that astronauts undergo and dangers awaiting them during the spaceflight) plus a Czech program on the spring sky above Pilsen (constellations and some ancient stories about them: I was never into it much and I am still shaking my head whenever someone looks at 9/15 stars/dots and not only determines that it is a human but also that its gender is female and even that she has never had sex before – that was the Virgo constellation, if you couldn't tell). Technically, I was totally impressed how Techmania has tripled or quadrupled (with the planetarium) in the last 6 months. The 3D glasses look robust and cool although they're based on a passive color system only. Things suddenly look very clean and modern (a year ago, Techmania would still slightly resemble the collapsing Škoda construction halls in Jules Verne's Steel City after a global nuclear war LOL).

On the other hand, I am not quite sure whether the richness of the spiritual charge of the content fully matches the generous superficial appearance (which can't hide that lots of money has clearly gone into it). There were many touch-sensitive tabletop displays in Techmania (e.g. one where you could move photographs of the Milky Way, a woman, and a few more from one side – X-ray spectrum – to the other side – radio waves – and see what it looks like), the "science on sphere" projection system, and a few other things (like a model of a rocket which can shoot something up; a gyroscope with many degrees of freedom for young astronauts to learn how to vomit; scales where you can see how much you weigh on the Moon and all the planets of the Solar System, including fake models of steel weights with apparently varying weights). I haven't seen the interiors of the expanded Techmania proper yet (there is a cool simple sundial before you enter the reception). Also, I think that the projectors in the 3D fulldome could be much stronger (more intense), the pictures were pretty dark relatively to how I remember cinemas. The 3D cosmos-oriented science movies will never be just like Titanic – one can't invest billions into things with limited audiences – but I still hope that they will make some progress because to some extent, these short programs looked like a "proof of a concept" rather than a full-fledged complete experience that should compete with regular movie theaters, among other sources of (less scientific) entertainment. I suppose that many more 3D fulldomes have to be built before the market with the truly impressive programs becomes significant.

by Luboš Motl (noreply@blogger.com) at April 15, 2014 01:12 PM

CERN Bulletin

CERN Bulletin Issue No. 16-17/2014
Link to e-Bulletin Issue No. 16-17/2014. Link to all articles in this issue.

April 15, 2014 09:32 AM

Clifford V. Johnson - Asymptotia

Beautiful Randomness
Spotted in the hills while out walking. Three chairs left out to be taken, making for an enigmatic gathering at the end of a warm Los Angeles Spring day... I love this city. -cvj Click to continue reading this post

by Clifford at April 15, 2014 04:30 AM

April 14, 2014

Clifford V. Johnson - Asymptotia

Total Lunar Eclipse!
There is a total eclipse of the moon tonight! It is also at not too inconvenient a time (relatively speaking) if you're on the West Coast. The eclipse begins at 10:58pm (Pacific) and gets to totality by 12:46am. This is good timing for me since I'd been meaning to set up the telescope and look at the moon recently anyway, and a full moon can be rather bright. Now there'll be a natural filter in the way, indirectly - the earth! There's a special event up at the Griffith Observatory if you are interested in making a party out of it. It starts at 7:00pm and you can see more about the [...] Click to continue reading this post

by Clifford at April 14, 2014 09:23 PM

Andrew Jaffe - Leaves on the Line

“Public Service Review”?

A few months ago, I received a call from someone at the “Public Service Review”, supposedly a glossy magazine distributed to UK policymakers and influencers of various stripes. The gentleman on the line said that he was looking for someone to write an article for his magazine giving an example of what sort of space-related research was going on at a prominent UK institution, to appear opposite an opinion piece written by Martin Rees, president of the Royal Society.

This seemed harmless enough, although it wasn’t completely clear what I (or the Physics Department, or Imperial College) would get out of it. But I figured I could probably knock something out fairly quickly. However, he told me there was a catch: it would cost me £6000 to publish the article. And he had just ducked out of his editorial meeting in order to find someone to agree to writing the article that very afternoon. Needless to say, in this economic climate, I didn’t have an account with an unused £6000 in it, especially for something of dubious benefit. (On the other hand, astrophysicists regularly publish in journals with substantial page charges.) It occurred to me that this could be a scam, although the website itself seems legitimate (although no one I spoke to knew anything about it).

I had completely forgotten about this until this week, when another colleague in our group at Imperial told me had received the same phone call, from the same organization, with the same details: article to appear opposite Lord Rees’; short deadline; large fee.

So, this is beginning to sound fishy. Has anyone else had any similar dealings with this organization?

Update: It has come to my attention that one of the comments below was made under a false name, in particular the name of someone who actually works for the publication in question, so I have removed the name, and will possibly remove the comment unless the original writer comes forward with more and truthful information (which I will not publish without permission). I have also been informed of the possibility that some of the other comments below may come from direct competitors of the publication. These, too, may be removed in the absence of further confirming information.

Update II: In the further interest of hearing both sides of the discussion, I would like to point out the two comments from staff at the organization giving further information as well as explicit testimonials in their favor.

by Andrew at April 14, 2014 06:41 PM

The n-Category Cafe

universo.math

A new Spanish language mathematical magazine has been launched: universo.math. Hispanophones should check out the first issue! There are some very interesting looking articles which cover areas from art through politics to research-level mathematics.

The editor-in-chief is my mathematical brother Jacob Mostovoy and he wants it to be a mix of Mathematical Intelligencer, Notices of the AMS and the New Yorker, together with less orthodox ingredients; the aim is to keep the quality high.

Besides Jacob, the contributors to the first issue that I recognise include Alberto Verjovsky, Ernesto Lupercio and Edward Witten, so universo.math seems to be off to a high quality start.

by willerton (S.Willerton@sheffield.ac.uk) at April 14, 2014 05:16 PM

Matt Strassler - Of Particular Significance

A Lunar Eclipse Overnight

Overnight, those of you in the Americas and well out into the Pacific Ocean, if graced with clear skies, will be able to observe what is known as “a total eclipse of the Moon” or a “lunar eclipse”. The Moon’s color will turn orange for about 80 minutes, with mid-eclipse occurring simultaneously in all the areas in which the eclipse is visible: 3:00-4:30 am for observers in New York, 12:00-1:30 am for observers in Los Angeles, and so forth. [As a bonus, Mars will be quite near the Moon, and about as bright as it gets; you can't miss it, since it is red and much brighter than anything else near the Moon.]

Since the Moon is so bright, you will be able to see this eclipse from even the most light-polluted cities. You can read more details of what to look for, and when to look for it in your time zone, at many websites, such as http://www.space.com/25479-total-lunar-eclipse-2014-skywatching-guide.html. However, many of them don’t really explain what’s going on.

One striking thing that’s truly very strange about the term “eclipse of the Moon” is that the Moon is not eclipsed at all. The Moon isn’t blocked by anything; it just becomes less bright than usual. It’s the Sun that is eclipsed, from the Moon’s point of view. See Figure 1. To say this another way, the terms “eclipse of the Sun” and “eclipse of the Moon”, while natural from the human-centric perspective, hide the fact that they really are not analogous. That is, the role of the Sun in a “solar eclipse” is completely different from the role of the Moon in a “lunar eclipse”, and the experience on Earth is completely different. What’s happening is this:

  • a “total eclipse of the Sun” is an “eclipse of the Sun by the Moon that leaves a shadow on the Earth.”
  • a “total eclipse of the Moon” is an “eclipse of the Sun by the Earth that leaves a shadow on the Moon.”

In a total solar eclipse, lucky humans in the right place at the right time are themselves, in the midst of broad daylight, cast into shadow by the Moon blocking the Sun. In a total lunar eclipse, however, it is the entire Moon that is cast into shadow; we, rather than being participants, are simply observers at a distance, watching in our nighttime as the Moon experiences this shadow. For us, nothing is eclipsed, or blocked; we are simply watching the effect of our own home, the Earth, eclipsing the Sun for Moon-people.

Fig. 1: In a “total solar eclipse”, a small shadow is cast by the Moon upon the Earth; at that spot the Sun appears to be eclipsed by the Moon. In a “total lunar eclipse”, the Earth casts a huge shadow across the entire Moon; on the near side of the Moon, the Sun appears to be eclipsed by the Earth.   The Moon glows orange because sunlight bends around the Earth through the Earth’s atmosphere; see Figure 2.  Picture is not to scale; the Sun is 100 times the size of the Earth, and much further away than shown.

Simple geometry, shown in Figure 1, assures that the first type of eclipse always happens at “new Moon”, i.e., when the Moon would not be visible in the Earth’s sky at night. Meanwhile the second type of eclipse, also because of geometry, only occurs on the night of the “full Moon”, when the entire visible side of the Moon is (except during an eclipse) in sunlight. Only then can the Earth block the Sun, from the Moon’s point of view.

A total solar eclipse — an eclipse of the Sun by the Moon, as seen from the Earth — is one of nature’s most spectacular phenomena. [I am fortunate to speak from experience; put this on your bucket list.] That is both because we ourselves pass into darkness during broad daylight, creating an amazing light show, and even more so because, due to an accident of geometry, the Moon and Sun appear to be almost the same size in the sky: the Moon, though 400 times closer to the Earth than the Sun, happens to be just about 400 times smaller in radius than the Sun. What this means is that the Sun’s opaque bright disk, which is all we normally see, is almost exactly blocked by the Moon; but this allows the dimmer (but still bright!) silvery corona of the Sun, and the pink prominences that erupt off the Sun’s apparent “surface”, to become visible, in spectacular fashion, against a twilight sky. (See Figure 2.) This geometry also implies, however, that the length of time during which any part of the Earth sees the Sun as completely blocked is very short — not more than a few minutes — and that very little of the Earth’s surface actually goes into the Moon’s shadow (see Figure 1).
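
In terms of angular size, the coincidence can be written out explicitly; this is a small worked check using only the rounded factors of 400 quoted above, plus the familiar half-degree apparent size of the Moon:

    \theta_{\rm Sun} \approx \frac{2R_{\rm Sun}}{d_{\rm Sun}}
                     = \frac{2\,(400\,R_{\rm Moon})}{400\,d_{\rm Moon}}
                     \approx \frac{2R_{\rm Moon}}{d_{\rm Moon}}
                     = \theta_{\rm Moon} \approx 0.5^{\circ}

so the two disks subtend nearly the same angle in Earth's sky.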

No such accident of geometry affects an “eclipse of the Moon”. If you were on the Moon, you would see the Earth in the sky as several times larger than the Sun, because the Earth, though about 400 times closer to the Moon than is the Sun, is only about 100 times smaller in radius than the Sun. Thus, the Earth in the Moon’s sky looks nearly four times as large, from side to side (and 16 times as large in apparent area) as does the Moon in the Earth’s sky.  (In short: Huge!) So when the Earth eclipses the Sun, from the Moon’s point of view, the Sun is thoroughly blocked, and remains so for as much as a couple of hours.
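
As a quick arithmetic check of those ratios, using only the rounded numbers quoted in the text (not precise astronomical values):

    # Rounded factors from the text: the Earth is ~400 times closer to the Moon
    # than the Sun is, and only ~100 times smaller in radius than the Sun.
    distance_ratio = 400.0   # (Sun-Moon distance) / (Earth-Moon distance)
    radius_ratio = 100.0     # (Sun radius) / (Earth radius)

    # Apparent (angular) size goes as radius / distance.
    earth_vs_sun = distance_ratio / radius_ratio   # Earth vs Sun, seen from the Moon
    print(earth_vs_sun, earth_vs_sun**2)           # -> 4.0 16.0

Since the Sun and Moon subtend nearly the same angle in Earth's sky, the same factor of about 4 across (and about 16 in area) applies when comparing the Earth in the Moon's sky with the Moon in ours.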

But that’s not to say there’s no light show; it’s just a very different one. The Sun’s light refracts through the Earth’s atmosphere, bending around the Earth, such that the Earth’s edge appears to glow bright orange or red (depending on the amount of dust and cloud above the Earth). This ring of orange light amid the darkness of outer space must be quite something to behold! Thus the Moon, instead of being lit white by direct sunlight, is lit by the unmoonly orange glow of this refracted light. The orange light then reflects off the Moon’s surface, and some travels back to Earth — allowing us to see an orange Moon. And we can see this from any point on the Earth for which the Moon is in the sky — which, during a full Moon, is (essentially) anyplace where the Sun is down. That’s why anyone in the Americas and eastern Pacific Ocean can see this eclipse, and why we all see it simultaneously [though, since we're in different time zones, our clocks don't show the same hour.]

Since lunar eclipses (i.e. watching the Moon move into the Earth’s shadow) can be seen simultaneously across any part of the Earth where it is dark during the eclipse, they are common. I have seen two lunar eclipses at dawn, one at sunset, and several in the dark of night; I’ve seen the Moon orange, copper-colored, and, once, blood red. If you miss one total lunar eclipse due to clouds, don’t worry; there will be more. But a total solar eclipse (i.e. standing in the shadow of the Moon) can only be seen and appreciated if you’re actually in the Moon’s shadow, which affects, in each eclipse, only a tiny fraction of the Earth — and often a rather inaccessible fraction. If you want to see one, you’ll almost certainly have to plan, and travel. My advice: do it. Meanwhile, good luck with the weather tonight!


Filed under: Astronomy Tagged: astronomy

by Matt Strassler at April 14, 2014 05:08 PM

Symmetrybreaking - Fermilab/SLAC

CERN's LHCb experiment sees exotic particle

An analysis using LHC data verifies the existence of an exotic four-quark hadron.

Last week, the Large Hadron Collider experiment LHCb published a result confirming the existence of a rare and exotic particle. This particle breaks the traditional quark model and is a “smoking gun” for a new class of hadrons.

The Belle experiment at the KEK laboratory in Japan had previously announced the observation of such a particle, but it came into question when data from sister experiment BaBar at SLAC laboratory in California did not back up the result.

Now scientists at both the Belle and BaBar experiments consider the discovery confirmed by LHCb.

by Sarah Charley at April 14, 2014 05:02 PM

ZapperZ - Physics and Physicists

Learn Quantum Mechanics From Ellen DeGeneres
Hey, why not? :)



There isn't much "quantum mechanics" in here, though; it's more about black holes and general relativity. Oh well!

Zz.

by ZapperZ (noreply@blogger.com) at April 14, 2014 01:34 PM

ZapperZ - Physics and Physicists

Science Is Running Out Of Things To Discover?
John Horgan is spewing out the same garbage again in his latest opinion piece (and yes, I'm not mincing my words here). His latest lob into this controversy is the so-called evidence that, in physics, the time between the original work and the eventual Nobel prize is getting longer, from which he argues that physics, especially "fundamental physics", is running out of things to discover.

In their brief Nature letter, Fortunato and co-authors do not speculate on the larger significance of their data, except to say that they are concerned about the future of the Nobel Prizes. But in an unpublished paper called "The Nobel delay: A sign of the decline of Physics?" they suggest that the Nobel time lag "seems to confirm the common feeling of an increasing time needed to achieve new discoveries in basic natural sciences—a somewhat worrisome trend."

This comment reminds me of an essay published in Nature a year ago, "After Einstein: Scientific genius is extinct." The author, psychologist Dean Keith Simonton, suggested that scientists have become victims of their own success. "Our theories and instruments now probe the earliest seconds and farthest reaches of the universe," he writes. Hence, scientists may produce no more "momentous leaps" but only "extensions of already-established, domain-specific expertise." Or, as I wrote in The End of Science, "further research may yield no more great revelations or revolutions, but only incremental, diminishing returns."
So, haven't we learned anything from the history of science? The last time people thought we knew all there was to know about an area of physics, and that all we could do was add incremental understanding to it, was in the mid-1980s, just before Mother Nature smacked us right in the face with the discovery of high-Tc superconductors.

There is a singular problem with this opinion piece. It equates "fundamental physics" with elementary particle/high energy/cosmology/string/etc. This neglects the fact that (i) the Higgs mechanism came out of condensed matter physics, (ii) "fundamental" understanding of various aspects of quantum field theory, and of other exotica such as Majorana fermions and magnetic monopoles, is coming out of condensed matter physics, and (iii) so-called "fundamental physics" doesn't have a monopoly on the physics Nobel prizes. It is interesting that Horgan pointed out the time lapse between the theory of superfluidity (of He3) and its Nobel prize, but neglected the short time frame between discovery and the Nobel prize for graphene, or for high-Tc superconductors.

As we learn more and more, the problems that remain, and the new ones that pop up, become harder and harder to decipher and to observe. Naturally, this makes confirmation and acceptance at the level of a Nobel prize take longer, both in peer-reviewed evaluation and in elapsed time. But this metric does NOT reflect whether we lack things to discover. Anyone who has done scientific research can tell you that as you try to solve something, other puzzling things pop up! I can guarantee you that the act of trying to solve the Dark Energy and Dark Matter problems will provide us with MORE puzzling observations, even if we solve those two. That has always been the pattern in scientific discovery, ever since human beings first tried to decipher the world around them! In fact, I would say that we have far more things we don't know now than before, because we have so many amazing instruments giving us puzzling and unexpected results.

Unfortunately, Horgan seems to dismiss whole areas of physics as being unimportant and not "fundamental".

Zz.

by ZapperZ (noreply@blogger.com) at April 14, 2014 01:26 PM

Quantum Diaries

Moriond 2014: new results, new explorations… but no new physics

Even before my departure for La Thuile (Italy), results from the Rencontres de Moriond were already filling the news feeds. This year's electroweak session, held from 15 to 22 March, opened with the first "world measurement" of the top quark mass, based on the combination of the measurements published so far by the Tevatron and LHC experiments. The week continued with a spectacular CMS result on the width of the Higgs.

Even as it approaches its 50th anniversary, the Moriond conference has remained at the cutting edge. Despite the growing number of must-attend conferences in high-energy physics, Moriond keeps a special place in the community, partly for historical reasons: the conference has existed since 1966 and has established itself as the place where theorists and experimentalists come to see and be seen. So let's look at what the LHC experiments had in store for us this year…

New results

This year the star of the show at Moriond was, of course, the announcement of the best limit to date on the width of the Higgs, < 17 MeV at 95% confidence, presented at both Moriond sessions by the CMS experiment. The new measurement, obtained with a novel analysis method based on Higgs decays into two Z particles, is about 200 times more precise than previous ones. Discussion of this limit focused mainly on the new analysis method. What assumptions were needed? Could the same technique be applied to a Higgs decaying into two W bosons? How will this new width constrain theoretical models of new physics? We will no doubt find out at Moriond next year…

The announcement of the first joint world result for the top quark mass also generated great excitement. This result, which combines Tevatron and LHC data, is the best value so far worldwide, at 173.34 ± 0.76 GeV/c². Before the excitement had even died down at the Moriond QCD session, CMS announced a new preliminary result based on the full dataset collected at 7 and 8 TeV. This result on its own rivals the precision of the world average, clearly demonstrating that we have not yet reached the ultimate precision on the top quark mass.

This plot shows the four measurements of the top quark mass published by the ATLAS, CDF, CMS and D0 collaborations, together with the most precise measurement to date, obtained from the joint analysis.
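
To get a feel for how independent measurements are merged, here is a minimal sketch of an uncorrelated inverse-variance (weighted) average, with placeholder numbers that are NOT the real ATLAS/CDF/CMS/D0 inputs; the actual world combination uses a more sophisticated treatment (the BLUE technique) with full correlations between systematic uncertainties:

    import numpy as np

    # Hypothetical inputs, for illustration only (not the published measurements).
    masses = np.array([173.3, 173.9, 172.8, 174.2])  # GeV/c^2
    errors = np.array([1.0, 0.8, 1.1, 1.3])          # total uncertainties, GeV/c^2

    weights = 1.0 / errors**2
    m_comb = np.sum(weights * masses) / np.sum(weights)
    err_comb = 1.0 / np.sqrt(np.sum(weights))
    print(f"combined: {m_comb:.2f} +/- {err_comb:.2f} GeV/c^2")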

Other top quark news, including new precision measurements of its spin and polarisation from the LHC, as well as new ATLAS results on the single-top-quark cross-section in the t-channel, was presented by Kate Shaw on Tuesday 25 March. Run II of the LHC will allow us to deepen our understanding of the subject even further.

A fundamental and delicate measurement for probing the nature of electroweak symmetry breaking, as realised through the Brout-Englert-Higgs mechanism, is the scattering of two massive vector bosons. This process is rare, but in the absence of the Higgs boson its rate would grow strongly with collision energy, eventually violating the laws of physics. Evidence for electroweak vector boson scattering was detected for the first time by ATLAS, in events with two same-charge leptons and two jets separated by a large gap in rapidity.

Building on the growing volume of data and improved analyses, the LHC experiments are tackling rare and difficult multi-particle final states involving the Higgs boson. ATLAS presented an excellent example of this: a new result in the search for Higgs production in association with two top quarks, with the Higgs decaying into a pair of b quarks. With an expected limit of 2.6 times the Standard Model prediction for this channel alone, and an observed relative signal strength of 1.7 ± 1.4, there are high hopes for the LHC's future high-energy running, in which the rate of this process will increase.

Meanwhile, in the world of heavy flavours, the LHCb experiment presented further analyses of the exotic state X(3872). The experiment unambiguously confirmed that its J^PC quantum numbers are 1++ and demonstrated its decay into ψ(2S)γ.

The study of the quark-gluon plasma continues in the ALICE experiment, and discussions focused mainly on results from the LHC's proton-lead (p-Pb) run. In particular, the newly observed "double ridge" in p-Pb collisions is being studied in detail, and analyses of its jet peak, mass distribution and charge dependence were presented.

New explorations

Thanks to our new understanding of the Higgs boson, the LHC has entered the era of precision Higgs physics. Our knowledge of the Higgs properties, for example measurements of its spin and its width, has improved, and precise measurements of Higgs interactions and decays have also made good progress. Results on searches for physics beyond the Standard Model were also presented, and the LHC experiments continue to invest heavily in searches for supersymmetry.

On the Higgs sector side, many researchers hope to find the supersymmetric cousins of the Higgs and the electroweak bosons, called neutralinos and charginos, through electroweak processes. ATLAS presented two new papers summarising multiple searches for these particles. The absence of a significant signal was used to set exclusion limits on charginos and neutralinos of 700 GeV, if they decay via intermediate supersymmetric lepton partners, and 420 GeV, when they decay only via Standard Model bosons.

Moreover, for the first time, ATLAS has carried out a search for the electroweak mode that is hardest to observe, producing a pair of charginos that decay into W bosons. This mode resembles Standard Model W-pair production, whose currently measured rate appears slightly higher than expected.

In this context, CMS presented new results on the search for electroweak pair production of higgsinos via their decay into a Higgs (at 125 GeV) and a nearly massless gravitino. The final state shows a characteristic signature of four b-quark jets, compatible with double-Higgs-decay kinematics. A slight excess in the number of candidate events means that the experiment cannot exclude a higgsino signal. Upper limits on the signal strength of about twice the theoretical prediction are set for higgsino masses between 350 and 450 GeV.

In several supersymmetry scenarios, charginos can be metastable and could potentially be detected as long-lived particles. CMS presented an innovative search for generic long-lived charged particles, carried out by mapping the detection efficiency as a function of the particle's kinematics and its energy loss in the tracker. This study not only sets strict limits on a variety of supersymmetric models predicting a chargino lifetime (c·tau) greater than 50 cm, but also provides the theory community with a powerful tool for independently testing new models that predict long-lived charged particles.

In order to be as general as possible in the search for supersymmetry, CMS also presented the results of new searches in which a large subset of the supersymmetry parameters, such as the gluino and squark masses, are tested for statistical compatibility with various experimental measurements. This produced a probability map in a 19-dimensional space, which shows in particular that models predicting gluino masses below 1.2 TeV and sbottom and stop masses below 700 GeV are strongly disfavoured.

But no new physics

Despite all these meticulous searches, what we heard most often at Moriond was: "no excess observed", "consistent with the Standard Model". All hopes now rest on the next LHC run, at 13 TeV. If you would like to know more about the prospects opened up by the LHC's second run, see the CERN Bulletin article "La vie est belle à 13 TeV" ("Life is beautiful at 13 TeV").

Besides the various results presented by the LHC experiments, news was also reported at Moriond from the Tevatron experiments, BICEP, RHIC and others. To find out more, see the conference websites, Moriond EW and Moriond QCD.

by CERN (Francais) at April 14, 2014 01:25 PM

Quantum Diaries

On the Shoulders of…

My first physics class wasn’t really a class at all. One of my 8th grade teachers noticed me carrying a copy of Kip Thorne’s Black Holes and Time Warps, and invited me to join a free-form book discussion group on physics and math that he was holding with a few older students. His name was Art — and we called him by his first name because I was attending, for want of a concise term that’s more precise, a “hippie” school. It had written evaluations instead of grades and as few tests as possible; it spent class time on student governance; and teachers could spend time on things like, well, discussing books with a few students without worrying about whether it was in the curriculum or on the tests. Art, who sadly passed some years ago, was perhaps best known for organizing the student cafe and its end-of-year trip, but he gave me a really great opportunity. I don’t remember learning anything too specific about physics from the book, or from the discussion group, but I remember being inspired by how wonderful and crazy the universe is.

My second physics class was combined physics and math, with Dan and Lewis. The idea was to put both subjects in context, and we spent a lot of time working through how to approach problems that we didn’t know an equation for. The price of this was less time to learn the full breadth of the subjects; I didn’t really learn any electromagnetism in high school, for example.

When I switched to a new high school in 11th grade, the pace changed. There were a lot more things to learn, and a lot more tests. I memorized elements and compounds and reactions for chemistry. I learned calculus and studied a bit more physics on the side. In college, where the physics classes were broad and in depth at the same time, I needed to learn things fast and solve tricky problems too. By now, of course, I’ve learned all the physics I need to know — which is largely knowing who to ask or which books to look in for the things I need but don’t remember.

There are a lot of ways to run schools and to run classes. I really value knowledge, and I think it’s crucial in certain parts of your education to really buckle down and learn the facts and details. I’ve also seen the tremendous worth of taking the time to think about how you solve problems and why they’re interesting to solve in the first place. I’m not a high school teacher, so I don’t think I can tell the professionals how to balance all of those goods, which do sometimes conflict. What I’m sure of, though, is that enthusiasm, attention, and hard work from teachers is a key to success no matter what is being taught. The success of every physicist you will ever see on Quantum Diaries is built on the shoulders of the many people who took the time to teach and inspire them when they were young.

by Seth Zenz at April 14, 2014 12:25 PM

CERN Bulletin

Taxation in France: Memorandum concerning the annual internal taxation certificate and the declaration of income for 2013

You are reminded that the Organization levies an internal tax on the financial and family benefits it pays to the members of the personnel (see Chapter V, Section 2 of the Staff Rules and Regulations) and that the members of the personnel are exempt from national taxation on salaries and emoluments paid by CERN.

For any other income, the Organization would like to remind members of the personnel that they must comply with the national legislation applicable to them (cf. Article S V 2.02 of the Staff Rules).

I - Annual internal taxation certificate for 2013

The annual certificate of internal taxation for 2013, issued by the Finance, Procurement and Knowledge Transfer Department, has been available since 21 February 2014. It is intended exclusively for the tax authorities.

If you are currently a member of the CERN personnel, you received an e-mail containing a link to your annual certificate, which you can print out if necessary.

If you are no longer a member of the CERN personnel or are unable to access your annual certificate as indicated above, you will find information explaining how to obtain one here.

In case of difficulty in obtaining your annual certificate, send an e-mail explaining the problem to service-desk@cern.ch.


II - 2013 income tax declaration form in France

The 2013 income tax declaration form must be completed following the general indications available at the following address: https://cern.ch/admin-eguide/Impots/proc_impot_decl-fr.asp.

If you have any specific questions, please contact your LOCAL SERVICE DES IMPÔTS DES PARTICULIERS (SIP, private citizens’ tax office) DIRECTLY.

This information does not concern CERN pensioners, as they are no longer members of the CERN personnel and are therefore subject to the standard national legal provisions relating to taxation.

HR Department
Contact: 73903

April 14, 2014 12:04 PM

Tommaso Dorigo - Scientificblogging

Aldo Menzione And The Design Of The Silicon Vertex Detector
Below is a clip from a chapter of my book where I describe the story of the silicon microvertex detector of the CDF experiment. CDF collected proton-antiproton collisions from the Tevatron collider in 1985, 1987-88, 1992-96, and 2001-2011. Run 1A occurred in 1992, and it featured, for the first time at a hadron collider, a silicon strip detector: the SVX. The SVX would prove crucial for the discovery of the top quark.

read more

by Tommaso Dorigo at April 14, 2014 09:21 AM

John Baez - Azimuth

New IPCC Report (Part 5)

guest post by Steve Easterbrook

(5) Current rates of ocean acidification are unprecedented.

The IPCC report says:

The pH of seawater has decreased by 0.1 since the beginning of the industrial era, corresponding to a 26% increase in hydrogen ion concentration. [...] It is virtually certain that the increased storage of carbon by the ocean will increase acidification in the future, continuing the observed trends of the past decades. [...] Estimates of future atmospheric and oceanic carbon dioxide concentrations indicate that, by the end of this century, the average surface ocean pH could be lower than it has been for more than 50 million years.

(Fig SPM.7c) CMIP5 multi-model simulated time series from 1950 to 2100 for global mean ocean surface pH. Time series of projections and a measure of uncertainty (shading) are shown for scenarios RCP2.6 (blue) and RCP8.5 (red). Black (grey shading) is the modelled historical evolution using historical reconstructed forcings. [The numbers indicate the number of models used in each ensemble.]

Ocean acidification has sometimes been ignored in discussions about climate change, but it is a much simpler process and much easier to calculate (notice that the uncertainty range on the graph above is much smaller than on most of the other graphs). This graph shows the projected acidification in the best- and worst-case scenarios (RCP2.6 and RCP8.5). Recall that RCP8.5 is the “business as usual” future.
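
Indeed, the 26% figure quoted above follows directly from the logarithmic definition of pH; here is a quick check of the arithmetic:

    # pH = -log10([H+]), so a 0.1 drop in pH multiplies [H+] by 10**0.1.
    delta_pH = 0.1
    factor = 10 ** delta_pH
    print(f"[H+] grows by a factor of {factor:.3f}, i.e. about {factor - 1:.0%}")
    # -> factor 1.259, i.e. about a 26% increase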

Note that this doesn’t mean the ocean will become acid. The ocean has always been slightly alkaline—well above the neutral value of pH 7. So “acidification” refers to a drop in pH, rather than a drop below pH 7. As this continues, the ocean becomes steadily less alkaline. Unfortunately, as the pH drops, the ocean stops being supersaturated for calcium carbonate. If it’s no longer supersaturated, anything made of calcium carbonate starts dissolving. Corals and shellfish can no longer form. If you kill these off, the entire ocean food chain is affected. Here’s what the IPCC report says:

Surface waters are projected to become seasonally corrosive to aragonite in parts of the Arctic and in some coastal upwelling systems within a decade, and in parts of the Southern Ocean within 1–3 decades in most scenarios. Aragonite, a less stable form of calcium carbonate, undersaturation becomes widespread in these regions at atmospheric CO2 levels of 500–600 ppm.


You can download all of Climate Change 2013: The Physical Science Basis here. Click below to read any part of this series:

  1. The warming is unequivocal.
  2. Humans caused the majority of it.
  3. The warming is largely irreversible.
  4. Most of the heat is going into the oceans.
  5. Current rates of ocean acidification are unprecedented.
  6. We have to choose which future we want very soon.
  7. To stay below 2°C of warming, the world must become carbon negative.
  8. To stay below 2°C of warming, most fossil fuels must stay buried in the ground.

Climate Change 2013: The Physical Science Basis is also available chapter by chapter here:

  1. Front Matter
  2. Summary for Policymakers
  3. Technical Summary
    1. Supplementary Material

Chapters

  1. Introduction
  2. Observations: Atmosphere and Surface
    1. Supplementary Material
  3. Observations: Ocean
  4. Observations: Cryosphere
    1. Supplementary Material
  5. Information from Paleoclimate Archives
  6. Carbon and Other Biogeochemical Cycles
    1. Supplementary Material
  7. Clouds and Aerosols

    1. Supplementary Material
  8. Anthropogenic and Natural Radiative Forcing
    1. Supplementary Material
  9. Evaluation of Climate Models
  10. Detection and Attribution of Climate Change: from Global to Regional
    1. Supplementary Material
  11. Near-term Climate Change: Projections and Predictability
  12. Long-term Climate Change: Projections, Commitments and Irreversibility
  13. Sea Level Change
    1. Supplementary Material
  14. Climate Phenomena and their Relevance for Future Regional Climate Change
    1. Supplementary Material

Annexes

  1. Annex I: Atlas of Global and Regional Climate Projections
    1. Supplementary Material: RCP2.6, RCP4.5, RCP6.0, RCP8.5
  2. Annex II: Climate System Scenario Tables
  3. Annex III: Glossary
  4. Annex IV: Acronyms
  5. Annex V: Contributors to the WGI Fifth Assessment Report
  6. Annex VI: Expert Reviewers of the WGI Fifth Assessment Report

by John Baez at April 14, 2014 07:56 AM
