Particle Physics Planet


April 16, 2014

Emily Lakdawalla - The Planetary Society Blog

The End of Opportunity and the Burden of Success
The Opportunity rover and the Lunar Reconnaissance Orbiter are both zeroed out in NASA's 2015 budget. Learn why these missions face the axe and why the White House is forcing NASA to choose between existing missions and starting new ones.

April 16, 2014 01:19 AM

April 15, 2014

Christian P. Robert - xi'an's og

MCqMC 2014 [closup]

As mentioned earlier, this was my very first MCqMC conference and I really enjoyed it, even though (or because) there were many topics that did not fall within my areas of interest. (By comparison, WSC is a series of conferences too remote from those areas for my taste, as I realised in Berlin where we hardly attended any talk and hardly anyone attended my session!) Here I appreciated the exposure to different mathematical visions on Monte Carlo, without being swamped by applications as at WSC… Obviously, our own Bayesian computational community was much less represented than at, say, MCMSki! Nonetheless, I learned a lot during this conference, for instance from Peter Glynn’s fantastic talk, and I came back home with new problems and useful references [as well as a two-hour delay in the train ride from Brussels]. I also obviously enjoyed the college-town atmosphere of Leuven, the many historical landmarks and the easily-found running routes out of the town. I am thus quite eager to attend the next MCqMC 2016 meeting (in Stanford, an added bonus!) and am even vaguely toying with the idea of organising MCqMC 2018 in Monaco (depending on the return for ISBA 2016 and ISBA 2018). In any case, thanks to the scientific committee for the invitation to give a plenary lecture in Leuven and to the local committee for a perfect organisation of the meeting.


Filed under: pictures, Running, Statistics, Travel, University life, Wines Tagged: Berlin, Brussels, ISBA 2016, Leuven, MCMSki IV, MCQMC2014, train, WSC 2012

by xi'an at April 15, 2014 10:14 PM

Quantum Diaries

Ten things you might not know about particle accelerators

A version of this article appeared in symmetry on April 14, 2014.

From accelerators unexpectedly beneath your feet to a ferret that once cleaned accelerator components, symmetry shares some lesser-known facts about particle accelerators. Image: Sandbox Studio, Chicago

The Large Hadron Collider at CERN laboratory has made its way into popular culture: Comedian Jon Stewart jokes about it on The Daily Show, character Sheldon Cooper dreams about it on The Big Bang Theory and fictional villains steal fictional antimatter from it in Angels & Demons.

Despite their uptick in popularity, particle accelerators still have secrets to share. With input from scientists at laboratories and institutions worldwide, symmetry has compiled a list of 10 things you might not know about particle accelerators.

There are more than 30,000 accelerators in operation around the world.

Accelerators are all over the place, doing a variety of jobs. They may be best known for their role in particle physics research, but their other talents include: creating tumor-destroying beams to fight cancer; killing bacteria to prevent food-borne illnesses; developing better materials to produce more effective diapers and shrink wrap; and helping scientists improve fuel injection to make more efficient vehicles.

One of the longest modern buildings in the world was built for a particle accelerator.

Linear accelerators, or linacs for short, are designed to hurl a beam of particles in a straight line. In general, the longer the linac, the more powerful the particle punch. The linear accelerator at SLAC National Accelerator Laboratory, near San Francisco, is the largest on the planet.

SLAC’s klystron gallery, a building that houses components that power the accelerator, sits atop the accelerator. It’s one of the world’s longest modern buildings. Overall, it’s a little less than 2 miles long, a feature that prompts laboratory employees to hold an annual footrace around its perimeter.

Particle accelerators are the closest things we have to time machines, according to Stephen Hawking.

In 2010, physicist Stephen Hawking wrote an article for the UK paper the Daily Mail explaining how it might be possible to travel through time. We would just need a particle accelerator large enough to accelerate humans the way we accelerate particles, he said.

A person-accelerator with the capabilities of the Large Hadron Collider would move its passengers at close to the speed of light. Because of the effects of special relativity, a period of time that would appear to someone outside the machine to last several years would seem to the accelerating passengers to last only a few days. By the time they stepped off the LHC ride, they would be younger than the rest of us.

Hawking wasn’t actually proposing we try to build such a machine. But he was pointing out a way that time travel already happens today. For example, particles called pi mesons are normally short-lived; they disintegrate after mere millionths of a second. But when they are accelerated to nearly the speed of light, their lifetimes expand dramatically. It seems that these particles are traveling in time, or at least experiencing time more slowly relative to other particles.
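
For readers who want the single formula behind this kind of "time travel", the stretch factor is the Lorentz factor of special relativity (a textbook result, not something quoted in the article):

\[
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}, \qquad t_{\rm observed} = \gamma \, t_{\rm proper}.
\]

At v = 0.99c this gives γ ≈ 7, so a particle's lifetime (or a passenger's trip) appears stretched about seven-fold to an outside observer, and the stretch grows without bound as v approaches c.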

The highest temperature recorded by a manmade device was achieved in a particle accelerator.

In 2012, Brookhaven National Laboratory’s Relativistic Heavy Ion Collider achieved a Guinness World Record for producing the world’s hottest manmade temperature, a blazing 7.2 trillion degrees Fahrenheit. But the Long Island-based lab did more than heat things up. It created a small amount of quark-gluon plasma, a state of matter thought to have dominated the universe’s earliest moments. This plasma is so hot that it causes elementary particles called quarks, which generally exist in nature only bound to other quarks, to break apart from one another.

Scientists at CERN have since also created quark-gluon plasma, at an even higher temperature, in the Large Hadron Collider.

The inside of the Large Hadron Collider is colder than outer space.

In order to conduct electricity without resistance, the Large Hadron Collider’s electromagnets are cooled down to cryogenic temperatures. The LHC is the largest cryogenic system in the world, and it operates at a frosty minus 456.3 degrees Fahrenheit. It is one of the coldest places on Earth, and it’s even a few degrees colder than outer space, which tends to rest at about minus 454.9 degrees Fahrenheit.
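
If you want to check those Fahrenheit figures yourself, the kelvin-to-Fahrenheit conversion is a one-liner; here is a minimal sketch in Python (the 1.9 K operating temperature of the LHC magnets and the 2.7 K of deep space are standard values assumed here, not numbers quoted in the article):

def kelvin_to_fahrenheit(temp_k):
    """Convert a temperature from kelvin to degrees Fahrenheit."""
    return temp_k * 9.0 / 5.0 - 459.67

print(kelvin_to_fahrenheit(1.9))  # about -456.25 F: the LHC magnets
print(kelvin_to_fahrenheit(2.7))  # about -454.81 F: deep space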

Nature produces particle accelerators much more powerful than anything made on Earth.

We can build some pretty impressive particle accelerators on Earth, but when it comes to achieving high energies, we’ve got nothing on particle accelerators that exist naturally in space.

The most energetic cosmic ray ever observed was a proton accelerated to an energy of 300 million trillion electronvolts. No known source within our galaxy is powerful enough to have caused such an acceleration. Even the shockwave from the explosion of a star, which can send particles flying much more forcefully than a manmade accelerator, doesn’t quite have enough oomph. Scientists are still investigating the source of such ultra-high-energy cosmic rays.
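
To put that number in perspective (writing the quoted energies in powers of ten, and using the 6.5 trillion electronvolt per-proton LHC energy mentioned later in this article):

\[
\frac{3 \times 10^{20}\ \text{eV}}{6.5 \times 10^{12}\ \text{eV}} \approx 5 \times 10^{7},
\]

i.e. that single cosmic-ray proton carried tens of millions of times the energy of a proton in the LHC.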

Particle accelerators don’t just accelerate particles; they also make them more massive.

As Einstein predicted in his theory of relativity, no particle that has mass can travel as fast as the speed of light—about 186,000 miles per second. No matter how much energy one adds to an object with mass, its speed cannot reach that limit.

In modern accelerators, particles are sped up to very nearly the speed of light. For example, the main injector at Fermi National Accelerator Laboratory accelerates protons to 0.99997 times the speed of light. As the speed of a particle gets closer and closer to the speed of light, an accelerator gives more and more of its boost to the particle’s kinetic energy.

Since, as Einstein told us, an object’s energy is equal to its mass times the speed of light squared (E = mc²), adding energy is, in effect, also increasing the particles’ mass. Said another way: Where there is more “E,” there must be more “m.” As an object with mass approaches, but never reaches, the speed of light, its effective mass gets larger and larger.
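
As a rough worked example of that statement, take the Fermilab figure quoted above and the same Lorentz factor γ introduced earlier, together with the proton rest energy of about 0.938 GeV (a standard value, not from the article):

\[
\gamma = \frac{1}{\sqrt{1 - (0.99997)^2}} \approx 130, \qquad E = \gamma \, m_p c^2 \approx 130 \times 0.938\ \text{GeV} \approx 120\ \text{GeV},
\]

so a proton circulating in the Main Injector carries roughly 130 times its rest energy; in the article's language, it behaves as if it were about 130 times more massive than a proton at rest.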

The diameter of the first circular accelerator was shorter than 5 inches; the diameter of the Large Hadron Collider is more than 5 miles.

In 1930, inspired by the ideas of Norwegian engineer Rolf Widerøe, 27-year-old physicist Ernest Lawrence created the first circular particle accelerator at the University of California, Berkeley, with graduate student M. Stanley Livingston. It accelerated hydrogen ions up to energies of 80,000 electronvolts within a chamber less than 5 inches across.

In 1931, Lawrence and Livingston set to work on an 11-inch accelerator. The machine managed to accelerate protons to just over 1 million electronvolts, a fact that Livingston reported to Lawrence by telegram with the added comment, “Whoopee!” Lawrence went on to build even larger accelerators—and to found Lawrence Berkeley and Lawrence Livermore laboratories.

Particle accelerators have come a long way since then, creating brighter beams of particles with greater energies than previously imagined possible. The Large Hadron Collider at CERN is more than 5 miles in diameter (17 miles in circumference). After this year’s upgrades, the LHC will be able to accelerate protons to 6.5 trillion electronvolts.

In the 1970s, scientists at Fermi National Accelerator Laboratory employed a ferret named Felicia to clean accelerator parts.

From 1971 until 1999, Fermilab’s Meson Laboratory was a key part of high-energy physics experiments at the laboratory. To learn more about the forces that hold our universe together, scientists there studied subatomic particles called mesons and protons. Operators would send beams of particles from an accelerator to the Meson Lab via a miles-long underground beam line.

To ensure hundreds of feet of vacuum piping were clear of debris before connecting them and turning on the particle beam, the laboratory enlisted the help of one Felicia the ferret.

Ferrets have an affinity for burrowing and clambering through holes, making them the perfect species for this job. Felicia’s task was to pull a rag dipped in cleaning solution on a string through long sections of pipe.

Although Felicia’s work was eventually taken over by a specially designed robot, she played a unique and vital role in the construction process—and in return asked only for a steady diet of chicken livers, fish heads and hamburger meat.

Particle accelerators show up in unlikely places.

Scientists tend to construct large particle accelerators underground. This protects them from being bumped and destabilized, but can also make them a little harder to find.

For example, motorists driving down Interstate 280 in northern California may not notice it, but the main accelerator at SLAC National Accelerator Laboratory runs underground just beneath their wheels.

Residents in villages in the Swiss-French countryside live atop the highest-energy particle collider in the world, the Large Hadron Collider.

And for decades, teams at Cornell University have played soccer, football and lacrosse on Robison Alumni Fields 40 feet above the Cornell Electron Storage Ring, or CESR. Scientists use the circular particle accelerator to study compact particle beams and to produce X-ray light for experiments in biology, materials science and physics.

Sarah Witman

by Fermilab at April 15, 2014 09:34 PM


astrobites - astro-ph reader's digest

A Faint Black Hole?

  • Title: Swift J1357.2-0933: the faintest black hole?
  • Authors: M. Armas Padilla, R. Wijnands, N. Degenaar, T. Munoz-Darias, J. Casares, & R. P. Fender
  • First Author’s Institution: University of Amsterdam
  • Paper Status: Submitted to MNRAS

X-ray binaries are stellar systems that are luminous in the x-ray portion of the spectrum. Matter from one star (typically on the Main Sequence) is transferred onto a more massive white dwarf, neutron star, or stellar-mass black hole. This accretion gives rise to the high x-ray luminosities because the infalling matter converts gravitational potential energy to kinetic energy, which is then radiated away as x-rays. Astronomers subdivide these systems into low-mass x-ray binaries (LMXB) and high-mass x-ray binaries (HMXB). This distinction depends on the mass of the star that is donating mass to the other component.

Occasionally these systems can begin to transfer relatively large amounts of mass, changing their mass transfer rate by up to a few orders of magnitude. However, they are typically found in a lower mass-transferring state referred to as quiescence. Since these systems appear to spend most of their time in these quiescent states, it is common to compare various systems using parameters determined when they were in this state.

The authors of today’s paper investigate the nature of one particular black hole LMXB, called Swift J1357.2-0933. They observed the source with the XMM-Newton satellite, which can observe between 0.1 and 15 keV. Concurrent observations were taken in the optical to confirm the source was in a quiescent state by comparing the magnitude to previously known values.

Figure 1: The x-ray luminosity plotted against the orbital period for neutron star (red stars) and black hole (black circles) x-ray binaries. The source studied in this paper is shown as the white circle. The grey box shows the orbital period and luminosity uncertainty given the uncertainty in the distance. The crossed black circle represents the only other currently known black hole binary around the theoretical switching point for the luminosity-period relation. (From Armas Padilla 2014)

The x-ray spectrum can be used to determine the x-ray luminosity of the source, if you know the distance, through the flux-luminosity relation. Determining the luminosity is complicated by the large uncertainty in the distance. Previous studies have placed this object between 0.5 and 6 kpc, though 1.5 kpc is the commonly stated distance. Assuming a distance of 1.5 kpc, the authors determine this source to be the faintest BH LMXB known, based on x-ray luminosity. If it is farther away, the luminosity would instead be comparable to that of the faintest known BH sources. Figure 1 shows how the distance uncertainty affects the luminosity (grey box).
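
For reference, the relation in question is just the inverse-square law (a standard result, not specific to this paper), which is why the distance dominates the error budget:

\[
L_X = 4 \pi d^2 F_X,
\]

so keeping the measured flux fixed but moving the assumed distance from 1.5 kpc to 6 kpc raises the inferred luminosity by a factor of (6/1.5)^2 = 16.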

A more precise distance will also help constrain theories about the behavior of BH binary sources. Binary sources lose angular momentum to gravitational radiation, causing the orbital period to decrease. It has been predicted that as the orbital period decreases, the luminosity likewise decreases, but only down to a certain period, at which point the luminosity increases again. There are not yet enough known and measured BH binary sources to fully test this prediction. The object studied in this paper sits near the period at which the luminosity is thought to switch. There is only one other black hole binary at a similar period, shown as the crossed black circle in Figure 1. Accurately determining the distance will give a more accurate luminosity and help establish whether and where this switch occurs.

This system has the potential to provide an important constraint on our understanding of low-mass x-ray binaries and their evolution over time. The most important task in reaching this potential is to determine the distance more accurately, which can be done by obtaining more optical photometry and spectroscopy of the companion main-sequence star.

by Josh Fuchs at April 15, 2014 06:39 PM

Peter Coles - In the Dark

Magnetic River

I stumbled across this yesterday as a result of an email from a friend who shall remain nameless (i.e. Anton). I remember seeing Prof. Eric Laithwaite on the television quite a few times when I was a kid. What I found so interesting about watching this so many years later is that it’s still so watchable and compelling. No frills, no gimmicks, just very clear explanation and demonstrations, reinforced by an aura of authoritativeness that makes you want to listen to him. If only more modern science communication were as direct as this.  I suppose part of the appeal is that he speaks with an immediately identifiable no-nonsense accent, from the part of the Midlands known as Lancashire….

 

 


by telescoper at April 15, 2014 04:18 PM

Sean Carroll - Preposterous Universe

Talks on God and Cosmology

Hey, remember the debate I had with William Lane Craig, on God and Cosmology? (Full video here, my reflections here.) That was on a Friday night, and on Saturday morning the event continued with talks from four other speakers, along with responses by WLC and me. At long last these Saturday talks have appeared on YouTube, so here they are!

First up was Tim Maudlin, who usually focuses on philosophy of physics but took the opportunity to talk about the implications of God’s existence for morality. (Namely, he thinks there aren’t any.)

Then we had Robin Collins, who argued for a new spin on the fine-tuning argument, saying that the universe is constructed to allow for it to be discoverable.

Back to Team Naturalism, Alex Rosenberg explains how the appearance of “design” in nature is well-explained by impersonal laws of physics.

Finally, James Sinclair offered thoughts on the origin of time and the universe.

To wrap everything up, the five of us participated in a post-debate Q&A session.

Enough debating for me for a while! Oh no, wait: on May 7 I’ll be in New York, debating whether there is life after death. (Spoiler alert: no.)

by Sean Carroll at April 15, 2014 03:16 PM

Lubos Motl - string vacua and pheno

Podcast with Lisa Randall on inflation, Higgs, LHC, DM, awe
I want to offer you yesterday's 30-minute podcast by the Huffington Post's David Freeman with Lisa Randall of Harvard
Podcast with Randall (audio over there)
The audio format is thanks to RobinHoodRadio.COM.

They talk about inflation, the BICEP2 discovery, the Higgs boson vs the Higgs field, the LHC, its tunnels, and the risk that the collider would create deadly black holes.




I think her comments are great, I agree with virtually everything, including the tone.




Well, I am not sure what she means by the early inflationary models' looking contrived but that's just half a sentence of a minor disagreement – which may become a major one, of course, if some people focus on this topic.

She is asked about the difference between the Big Bang and inflation, and about the Higgs boson vs. the Higgs field (which gives masses to other particles). The host asks about the size of the LHC; it is sort of bizarre because photographs of the LHC have been everywhere in the media and are very accessible, so why would one ask about the size of the tunnel again?

The host also said that there would be "concerns" that the LHC would have created a hungry black hole that would devour our blue, not green planet. I liked Lisa's combative reply: the comment had to be corrected. There were concerns but only among the people who didn't have a clue. The actual calculations of safety – something that scientists are sort of obliged to perform before they do an experiment – end up with the result that we're safe as the rate of such accidents is lower than "one per the age of the universe". It's actually much lower than that but even that should be enough.

They also talk about the multiverse. Lisa says that she's not among those who are greatly interested in multiverse ideas – she's more focused on things we can measure – but of course there may be other universes. Just because we haven't seen them doesn't mean that they don't exist. (She loves to make the same point when it comes to dark matter.)

What comes at the end of the universe? She explains that a compact space – a balloon – is free of troubles. The host says the usual thing that laymen always do: the balloon must be expanding into something, some preexisting space. But in the case of the universe, there is simply nothing outside it, Lisa warns. The balloon is the whole story. I have some minor sympathy for this problem of the laymen because when I was 8, I also had the inclination to imagine that the curved spacetime of general relativity (from the popular articles and TV shows) had to be embedded into some larger, flat one. But this temptation went away a year or so later. Riemannian geometry is meant to describe "all of space" and it allows curvature. Embedding the space into a higher-dimensional flat one is a way (and not the only way) to visualize the curvature, but these extra "crutches" are not necessarily physical. And in fact, they are not physical in our real universe.

Now, is dark matter the same thing as antimatter? Based on the frequency at which I have heard this question, I believe that every third layman must be asking the very same question. So Lisa has to say that antimatter is charged and qualitatively behaves just like ordinary matter – and they annihilate – while dark matter has to be new. Is dark matter made of black holes? Every 10th layman has this idea. It's actually an a priori viable one that needs some discussion. One has to look for "small astrophysical objects as dark matter". They would cause some gravitational lensing which is not seen.

So what is dark energy? It's something that is not localizable "stuff". Dark energy is smoothly spread everywhere. Absolute energy matters, Einstein found out. And the C.C. accelerates the expansion of the universe. Can the experiments find dark energy and dark matter? Not dark energy but possibly dark matter. It could be a bigger deal than the Higgs boson.

The LHC is being upgraded and will reopen for collision business in a year. No one believes that the Higgs boson is everything there is, but it is not clear that the other things are achievable by the LHC.

Lisa is now working on dark matter. Lots of theoretical ideas. Dark matter with a more strongly interacting component.

What is it good for? The electron seemed to be useless, too. So there may be unexpected applications. But applications are not the main motivation. She is also asked about being religious. She is not religious and for her, science isn't about the "sense of awe". So she is not religious even in the most general sense. Ultimately, science wants to understand things that clarify the "awe", that make the magnificent things look accessible. It is about solving puzzles and the satisfaction arises from the understanding, from the feeling that things fit together.

The host says that because she writes popular books, she must present the "sense of wonder". Lisa protests again. My books are about science, not the awe! :-) There is clearly a widespread feeling among the laymen that scientists are obliged to lick the buttocks of the stupid laymen in some particular ways. To constantly "admit" (more precisely, to constantly lie) that science knows nothing and spread religious feelings. But scientists are not obliged to do any of these things and in fact, they shouldn't do these things. A good popular book is one that attracts the reader into genuine science – the organized process of learning the truth about Nature – and that communicates some correct science (principles, methods, or results) to the readers. If science implies that the people who are afraid of the destruction of the world by the LHC are imbeciles, and be sure that science does imply that, a good popular scientific book must nicely articulate this point. A good popular scientific book is not one that reinforces the reader's spiritual or even anti-scientific preconceptions (although the book that does reinforce them may be generously praised by the stupid readers and critics).

Is it possible to convey the science without maths? Lisa tends to answer Yes because she appreciates classical music although she has never studied it. But she could still learn something about it from the books, although less than the professional musicians. So it doesn't have to be "all or nothing". People still learn some science even if they don't learn everything. And readers of her book, she believes, may come from many layers and learn the content to various degrees of depth and detail.

There's lots of talk about America's falling behind in STEM fields. LOL, exactly, there is a lot of talk, Lisa replies. 50 years ago, people were inspired by the space research. But the host tries to suggest that there is nothing inspiring in physics or science now or something like that. Lisa says that there are tons of awe-inspiring things – perhaps too many.

What is the most awe-inspiring fact, Lisa is asked? She answers that it's the sheer body and scale of all the things we have come to understand in the last century or so. Some nebulae turned out to be galaxies, the host marvels. Lisa talks about such cosmological insights for a while.



Incidentally, on Sunday, we finally went to Pilsner Techmania's 3D planetarium. We watched the Astronaut 3D program (trailed above: a movie about all the training that astronauts undergo and dangers awaiting them during the spaceflight) plus a Czech program on the spring sky above Pilsen (constellations and some ancient stories about them: I was never into it much and I am still shaking my head whenever someone looks at 9/15 stars/dots and not only determines that it is a human but also that its gender is female and even that she has never had sex before – that was the Virgo constellation, if you couldn't tell). Technically, I was totally impressed how Techmania has tripled or quadrupled (with the planetarium) in the last 6 months. The 3D glasses look robust and cool although they're based on a passive color system only. Things suddenly look very clean and modern (a year ago, Techmania would still slightly resemble the collapsing Škoda construction halls in Jules Verne's Steel City after a global nuclear war LOL).

On the other hand, I am not quite sure whether the richness of the spiritual charge of the content fully matches the generous superficial appearance (which can't hide that lots of money has clearly gone into it). There were many touch-sensitive tabletop displays in Techmania (e.g. one where you could move photographs of the Milky Way, a woman, and a few more from one side – X-ray spectrum – to the other side – radio waves – and see what it looks like), the "science on sphere" projection system, and a few other things (like a model of a rocket which can shoot something up; a gyroscope with many degrees of freedom for young astronauts to learn how to vomit; scales where you can see how much you weigh on the Moon and all the planets of the Solar System, including fake models of steel weights with apparently varying weights). I haven't seen the interiors of the expanded Techmania proper yet (there is a cool simple sundial before you enter the reception). Also, I think that the projectors in the 3D fulldome could be much stronger (more intense); the pictures were pretty dark relative to how I remember cinemas. The 3D cosmos-oriented science movies will never be just like Titanic – one can't invest billions into things with limited audiences – but I still hope that they will make some progress because to some extent, these short programs looked like a "proof of concept" rather than a full-fledged complete experience that should compete with regular movie theaters, among other sources of (less scientific) entertainment. I suppose that many more 3D fulldomes have to be built before the market for truly impressive programs becomes significant.

by Luboš Motl (noreply@blogger.com) at April 15, 2014 01:12 PM

CERN Bulletin

CERN Bulletin Issue No. 16-17/2014
Link to e-Bulletin Issue No. 16-17/2014 | Link to all articles in this issue

April 15, 2014 09:32 AM

Peter Coles - In the Dark

Bearded Bishop Brentwood welcomed but too late for Beard of Spring poll

telescoper:

I’m still way behind John Brayford (who he?), but there’s definitely signs of a bounce! The Deadline is 19th April. Vote for me!

 

Originally posted on Kmflett's Blog:

Beard Liberation Front
PRESS RELEASE 14th April
Contact Keith Flett 07803 167266
Bearded Bishop Brentwood welcomed but too late for Spring Beard poll

The Beard Liberation Front, the informal network of beard wearers that campaigns against beardism, has welcomed the news that the Pope on Monday appointed Fr Alan Williams FM as the Bishop of Brentwood, but says that his appointment is too late for inclusion in the Beard of Spring 2014 poll, which concludes on Friday.

The campaigners say that they are certain that the distinguished Bishop will feature in future

The big issue in the days left for voting is whether current leader Sheffield United footballer John Brayford did enough in his team’s defeat to Hull in Sunday’s FA Cup semi-final to take the title or whether challengers such as cosmologist Peter Coles and Editor of the I Paper Olly Duff can catch him

The Beard of Spring…



by telescoper at April 15, 2014 07:57 AM

Clifford V. Johnson - Asymptotia

Beautiful Randomness
Spotted in the hills while out walking. Three chairs left out to be taken, making for an enigmatic gathering at the end of a warm Los Angeles Spring day... I love this city. -cvj Click to continue reading this post

by Clifford at April 15, 2014 04:30 AM

astrobites - astro-ph reader's digest

Forming Stars in the Stream

Title: Recent Star Formation in the Leading Arm of the Magellanic Stream
Authors: Dana I. Casetti-Dinescu, Christian Moni Bidin, Terrence M. Girard, René A. Méndez, Katherine Vieira, Vladimir I. Korchagin, and William F. van Altena
First Author’s Institution: Dept. of Physics, Southern Connecticut State University, New Haven, CT; Astronomy Dept., Yale University, New Haven, CT.
Paper Status: Accepted for publication in ApJ Letters

Our galaxy is not alone. I don’t just mean that there are other galaxies in the Universe, but that there are other galaxies sitting right at our doorstep. The Milky Way is but one of 54 galaxies in our Local Group (Andromeda is one of these). Many of these galaxies are smaller, dwarf galaxies, and two of the closest and largest of these galaxies are the Small and Large Magellanic Clouds (visible to the naked eye if you happen to live in the Southern Hemisphere). These two clouds are thought to be in their first orbit around the Milky Way galaxy, but have been interacting with each other for quite some time. Through tidal interactions, these two clouds have produced large streams of gas that can be seen in radio emission, as shown in Fig. 1. The long tail behind the two Magellanic Clouds is called the Magellanic Stream (MS), the gas connecting the two is known as the bridge, and the gas above and to the right is the leading arm (LA).

Fig. 1: Shown here is a composite image of our Milky Way (optical, center) and the large stream of gas associated with the Magellanic Clouds (radio, red). The Small and Large Magellanic Clouds can be seen as the small and large bright spots towards the lower right, connected to each other by a bridge of gas. Behind these galaxies is a long tail known as the Magellanic Stream. Ahead is the branched leading arm. (Credit: Nidever et al., NRAO/AUI/NSF and Meilinger, Leiden-Argentine-Bonn Survey, Parkes Observatory, Westerbork Observatory, Arecibo Observatory)

Stars in the Stream

Astronomers expect the interactions between these two clouds and each other, and their interactions with our Milky Way, to induce star formation within the gas streams. The authors investigate this possibility by looking for young, hot, massive stars in the leading arm (again, to the right in Fig. 1). The authors find 42 stars that could be young stars associated with the leading arm. However, determining exactly what these objects are, and whether or not they are associated with the leading arm itself is not an easy task. The authors obtain spectra for each of these objects  in order to identify the stellar type and their physical location.

Fig. 2 shows a color map of the neutral hydrogen in the leading arm (whose different sections are labelled LAI to LAIV) overlaid with the candidate stars as crosses; the stars are spread between three separate groups labelled A, B, and C. Not all of the candidate stars will be young stars associated with the leading arm. The authors identify contaminating foreground stars by examining the composition of the stars obtained through the spectra. Many stars (22, the white circles in Fig. 2) are in fact foreground stars that turned out to either be small dwarf stars (subdwarf B stars to be exact) or white dwarfs. The rest of the stars (green circles) are the young, hot stars the authors sought. Of these, the authors confirmed that 6 were kinematically (based upon their positions and velocities) associated with the leading arm; these are denoted by red boxes in Fig. 2. These stars have temperatures ranging from 16,000 K to 17,200 K. 5 of these stars are young B stars, and 1 is a subdwarf B star. Given the kinematics of these stars, the authors rule out the possibility that this group of stars are “runaway stars” that actually came from the Milky Way disc.

Fig. 2: The leading arm shown in neutral hydrogen along with the 42 candidate stars (crosses). The four segments of the leading arm are marked by LA I – LA IV, and the three groups of stars by A, B, and C. Each star identified as a foreground star is marked with a white circle, while the green circles denote the young, hot stars. Red boxes indicate definite association with the leading arm, while the black star marks the youngest, hottest star in the sample. (Credit: Casetti-Dinescu et al. 2014)

One of these things is not like the other…

Within their sample, the authors find one very special star, marked by the black star in Fig. 2. This is an O-type star (O6V to be exact), with a temperature of 43,700 K and a mass of around 40 solar masses. It is far younger and far hotter than anything else in the sample. At this temperature and mass, the star has a lifetime on the order of 1-2 million years. Given this very short lifetime, the authors rule out the possibility that it came from the Milky Way (it would have to be at least 385 million years old if it did). They also rule out the possibility that it came from the Large Magellanic Cloud, as it would need an unrealistically large velocity to travel from the Large Magellanic Cloud to where it is currently located.

Because of this, the authors conclude that they have discovered, for the first time, a star that formed very recently within the leading arm of the Magellanic Stream. This young, hot star was born out of the interactions between the Milky Way galaxy and the two Magellanic Clouds, but exists completely independently of them.

 

by Andrew Emerick at April 15, 2014 03:33 AM

April 14, 2014

Christian P. Robert - xi'an's og

adaptive subsampling for MCMC

Oxford to Coventry, Feb. 25, 2012

“At equilibrium, we thus should not expect gains of several orders of magnitude.”

As was signaled to me several times during the MCqMC conference in Leuven, Rémi Bardenet, Arnaud Doucet and Chris Holmes (all from Oxford) just wrote a short paper for the proceedings of ICML on a way to speed up Metropolis-Hastings by reducing the number of terms one computes in the likelihood ratio involved in the acceptance probability, i.e.

\prod_{i=1}^n\frac{L(\theta^\prime|x_i)}{L(\theta|x_i)}.

The observations appearing in this likelihood ratio are a random subsample from the original sample. Even though this leads to an unbiased estimator of the true log-likelihood sum, this approach is not justified on a pseudo-marginal basis à la Andrieu-Roberts (2009). (Writing this in the train back to Paris, I am not convinced this approach is in fact applicable to this proposal as the likelihood itself is not estimated in an unbiased manner…)

In the paper, the quality of the approximation is evaluated by Hoeffding-like inequalities, which serve as the basis for a stopping rule on the number of terms eventually evaluated in the random subsample. In fine, the method uses a sequential procedure to determine whether enough terms have been used to take the decision, and the probability of taking the same decision as with the whole sample is bounded from below. The sequential nature of the algorithm requires either recomputing the vector of likelihood terms for the previous value of the parameter or storing all of them for deriving the partial ratios. While the authors address the issue of self-evaluating whether or not this complication is worth the effort, I wonder (from my train seat) why they focus so much on recovering the same decision as with the complete likelihood ratio and the same uniform. It would suffice to get the same distribution for the decision (an alternative that is easier to propose than to create, of course). I also (idly) wonder if a Gibbs version would be manageable, i.e. by changing only some terms in the likelihood ratio at each iteration, in which case the method could be exact… (I found the above quote quite relevant as, in an alternative technique we are constructing with Marco Banterle, the speedup is particularly visible in the warmup stage.) Hence another direction in this recent flow of papers attempting to speed up MCMC methods against the incoming tsunami of “Big Data” problems.
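
To make the subsampling idea concrete, here is a minimal sketch in Python of one accept/reject decision; the names loglik, batch, delta and the log-likelihood-ratio bound C are illustrative assumptions of mine, not the paper's notation, and the concentration bound is written in a generic Hoeffding-like form rather than the exact one used by the authors:

import numpy as np

def subsampled_mh_accept(x, loglik, theta, theta_prop, log_u,
                         log_prior_ratio=0.0, batch=100, delta=0.01, C=1.0):
    """Approximate Metropolis-Hastings accept/reject from an adaptive subsample.

    Grow a random subsample of the data until a Hoeffding-type bound says
    the subsampled decision agrees with the full-data decision with
    probability at least 1 - delta.  C is an assumed bound on
    |log L(theta'|x_i) - log L(theta|x_i)| over the data set.
    """
    n = len(x)
    # Accept iff the average log-likelihood ratio exceeds this threshold.
    psi = (log_u - log_prior_ratio) / n
    perm = np.random.permutation(n)
    used, total = 0, 0.0
    while used < n:
        idx = perm[used:used + batch]
        total += np.sum(loglik(theta_prop, x[idx]) - loglik(theta, x[idx]))
        used += len(idx)
        mean = total / used
        # Concentration bound for sampling without replacement (illustrative form).
        eps = C * np.sqrt((1.0 - (used - 1.0) / n) * np.log(2.0 / delta) / (2.0 * used))
        if abs(mean - psi) > eps:
            return mean > psi, used   # confident early decision
    return mean > psi, used           # fell back to the full data set

The hoped-for speedup comes from used typically being much smaller than n; as the opening quote warns, though, one should not expect gains of several orders of magnitude at equilibrium.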


Filed under: pictures, Statistics, Travel Tagged: acceptance rate, Brussels, Gibbs sampler, Hoeffding, ICML, Leuven, MCMC, MCQMC2014, Metropolis-Hastings algorithms, Paris, sequential Monte Carlo, speedup

by xi'an at April 14, 2014 10:14 PM

Clifford V. Johnson - Asymptotia

Total Lunar Eclipse!
There is a total eclipse of the moon tonight! It is also at not too inconvenient a time (relatively speaking) if you're on the West Coast. The eclipse begins at 10:58pm (Pacific) and gets to totality by 12:46am. This is good timing for me since I'd been meaning to set up the telescope and look at the moon recently anyway, and a full moon can be rather bright. Now there'll be a natural filter in the way, indirectly - the earth! There's a special event up at the Griffith Observatory if you are interested in making a party out of it. It starts at 7:00pm and you can see more about the [...] Click to continue reading this post

by Clifford at April 14, 2014 09:23 PM

Peter Coles - In the Dark

White in the moon the long road lies

White in the moon the long road lies,
The moon stands blank above;
White in the moon the long road lies
That leads me from my love.

Still hangs the hedge without a gust,
Still, still the shadows stay:
My feet upon the moonlit dust
Pursue the ceaseless way.

The world is round, so travellers tell,
And straight though reach the track,
Trudge on, trudge on, ’twill all be well,
The way will guide one back.

But ere the circle homeward hies
Far, far must it remove:
White in the moon the long road lies
That leads me from my love.

by A.E. Housman (1859-1936)

 


by telescoper at April 14, 2014 07:40 PM

Andrew Jaffe - Leaves on the Line

“Public Service Review”?

A few months ago, I received a call from someone at the “Public Service Review”, supposedly a glossy magazine distributed to UK policymakers and influencers of various stripes. The gentleman on the line said that he was looking for someone to write an article for his magazine giving an example of what sort of space-related research was going on at a prominent UK institution, to appear opposite an opinion piece written by Martin Rees, president of the Royal Society.

This seemed harmless enough, although it wasn’t completely clear what I (or the Physics Department, or Imperial College) would get out of it. But I figured I could probably knock something out fairly quickly. However, he told me there was a catch: it would cost me £6000 to publish the article. And he had just ducked out of his editorial meeting in order to find someone to agree to writing the article that very afternoon. Needless to say, in this economic climate, I didn’t have an account with an unused £6000 in it, especially for something of dubious benefit. (On the other hand, astrophysicists regularly publish in journals with substantial page charges.) It occurred to me that this could be a scam, although the website itself seems legitimate (although no one I spoke to knew anything about it).

I had completely forgotten about this until this week, when another colleague in our group at Imperial told me he had received the same phone call, from the same organization, with the same details: article to appear opposite Lord Rees’; short deadline; large fee.

So, this is beginning to sound fishy. Has anyone else had any similar dealings with this organization?

Update: It has come to my attention that one of the comments below was made under a false name, in particular the name of someone who actually works for the publication in question, so I have removed the name, and will likely remove the comment unless the original writer comes forward with more and truthful information (which I will not publish without permission). I have also been informed of the possibility that some of the other comments below may come from direct competitors of the publication. These, too, may be removed in the absence of further confirming information.

Update II: In the further interest of hearing both sides of the discussion, I would like to point out the two comments from staff at the organization giving further information as well as explicit testimonials in their favor.

by Andrew at April 14, 2014 06:41 PM

The n-Category Cafe

universo.math

A new Spanish language mathematical magazine has been launched: universo.math. Hispanophones should check out the first issue! There are some very interesting looking articles which cover areas from art through politics to research-level mathematics.

The editor-in-chief is my mathematical brother Jacob Mostovoy and he wants it to be a mix of the Mathematical Intelligencer, the Notices of the AMS and the New Yorker, together with less orthodox ingredients; the aim is to keep the quality high.

Besides Jacob, the contributors to the first issue that I recognise include Alberto Verjovsky, Ernesto Lupercio and Edward Witten, so universo.math seems to be off to a high quality start.

by willerton (S.Willerton@sheffield.ac.uk) at April 14, 2014 05:16 PM

Matt Strassler - Of Particular Significance

A Lunar Eclipse Overnight

Overnight, those of you in the Americas and well out into the Pacific Ocean, if graced with clear skies, will be able to observe what is known as “a total eclipse of the Moon” or a “lunar eclipse”. The Moon’s color will turn orange for about 80 minutes, with mid-eclipse occurring simultaneously in all the areas in which the eclipse is visible: 3:00-4:30 am for observers in New York, 12:00-1:30 am for observers in Los Angeles, and so forth. [As a bonus, Mars will be quite near the Moon, and about as bright as it gets; you can't miss it, since it is red and much brighter than anything else near the Moon.]

Since the Moon is so bright, you will be able to see this eclipse from even the most light-polluted cities. You can read more details of what to look for, and when to look for it in your time zone, at many websites, such as http://www.space.com/25479-total-lunar-eclipse-2014-skywatching-guide.html. However, many of them don’t really explain what’s going on.

One striking thing that’s truly very strange about the term “eclipse of the Moon” is that the Moon is not eclipsed at all. The Moon isn’t blocked by anything; it just becomes less bright than usual. It’s the Sun that is eclipsed, from the Moon’s point of view. See Figure 1. To say this another way, the terms “eclipse of the Sun” and “eclipse of the Moon”, while natural from the human-centric perspective, hide the fact that they really are not analogous. That is, the role of the Sun in a “solar eclipse” is completely different from the role of the Moon in a “lunar eclipse”, and the experience on Earth is completely different. What’s happening is this:

  • a “total eclipse of the Sun” is an “eclipse of the Sun by the Moon that leaves a shadow on the Earth.”
  • a “total eclipse of the Moon” is an “eclipse of the Sun by the Earth that leaves a shadow on the Moon.”

In a total solar eclipse, lucky humans in the right place at the right time are themselves, in the midst of broad daylight, cast into shadow by the Moon blocking the Sun. In a total lunar eclipse, however, it is the entire Moon that is cast into shadow; we, rather than being participants, are simply observers at a distance, watching in our nighttime as the Moon experiences this shadow. For us, nothing is eclipsed, or blocked; we are simply watching the effect of our own home, the Earth, eclipsing the Sun for Moon-people.

Fig. 1: In a “total solar eclipse”, a small shadow is cast by the Moon upon the Earth; at that spot the Sun appears to be eclipsed by the Moon. In a “total lunar eclipse”, the Earth casts a huge shadow across the entire Moon; on the near side of the Moon, the Sun appears to be eclipsed by the Earth.   The Moon glows orange because sunlight bends around the Earth through the Earth’s atmosphere; see Figure 2.  Picture is not to scale; the Sun is 100 times the size of the Earth, and much further away than shown.

Simple geometry, shown in Figure 1, assures that the first type of eclipse always happens at “new Moon”, i.e., when the Moon would not be visible in the Earth’s sky at night. Meanwhile the second type of eclipse, also because of geometry, only occurs on the night of the “full Moon”, when the entire visible side of the Moon is (except during an eclipse) in sunlight. Only then can the Earth block the Sun, from the Moon’s point of view.

A total solar eclipse — an eclipse of the Sun by the Moon, as seen from the Earth — is one of nature’s most spectacular phenomena. [I am fortunate to speak from experience; put this on your bucket list.] That is both because we ourselves pass into darkness during broad daylight, creating an amazing light show, and even more so because, due to an accident of geometry, the Moon and Sun appear to be almost the same size in the sky: the Moon, though 400 times closer to the Earth than the Sun, happens to be just about 400 times smaller in radius than the Sun. What this means is that the Sun’s opaque bright disk, which is all we normally see, is almost exactly blocked by the Moon; but this allows the dimmer (but still bright!) silvery corona of the Sun, and the pink prominences that erupt off the Sun’s apparent “surface”, to become visible, in spectacular fashion, against a twilight sky. (See Figure 2.) This geometry also implies, however, that the length of time during which any part of the Earth sees the Sun as completely blocked is very short — not more than a few minutes — and that very little of the Earth’s surface actually goes into the Moon’s shadow (see Figure 1).

No such accident of geometry affects an “eclipse of the Moon”. If you were on the Moon, you would see the Earth in the sky as several times larger than the Sun, because the Earth, though about 400 times closer to the Moon than is the Sun, is only about 100 times smaller in radius than the Sun. Thus, the Earth in the Moon’s sky looks nearly four times as large, from side to side (and 16 times as large in apparent area) as does the Moon in the Earth’s sky.  (In short: Huge!) So when the Earth eclipses the Sun, from the Moon’s point of view, the Sun is thoroughly blocked, and remains so for as much as a couple of hours.
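
A quick back-of-the-envelope check of those ratios, using only the round numbers quoted above (angular size is roughly radius divided by distance; here d_Moon is the Earth-Moon distance and d_Sun the Sun's distance):

\[
\frac{\theta_{\rm Earth\ (from\ Moon)}}{\theta_{\rm Sun\ (from\ Moon)}}
\approx \frac{R_{\rm Earth}/d_{\rm Moon}}{R_{\rm Sun}/d_{\rm Sun}}
= \frac{R_{\rm Earth}}{R_{\rm Sun}} \cdot \frac{d_{\rm Sun}}{d_{\rm Moon}}
\approx \frac{1}{100} \times 400 = 4,
\]

which is why the Earth can block the Sun so completely, and for so long, during a lunar eclipse.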

But that’s not to say there’s no light show; it’s just a very different one. The Sun’s light refracts through the Earth’s atmosphere, bending around the earth, such that the Earth’s edge appears to glow bright orange or red (depending on the amount of dust and cloud above the Earth.) This ring of orange light amid the darkness of outer space must be quite something to behold! Thus the Moon, instead of being lit white by direct sunlight, is lit by the unmoonly orange glow of this refracted light. The orange light then reflects off the Moon’s surface, and some travels back to Earth — allowing us to see an orange Moon. And we can see this from any point on the Earth for which the Moon is in the sky — which, during a full Moon, is (essentially) anyplace where the Sun is down.  That’s why anyone in the Americas and eastern Pacific Ocean can see this eclipse, and why we all see it simultaneously [though, since we're in different time zones, our clocks don't show the same hour.]

Since lunar eclipses (i.e. watching the Moon move into the Earth’s shadow) can be seen simultaneously across any part of the Earth where it is dark during the eclipse, they are common. I have seen two lunar eclipses at dawn, one at sunset, and several in the dark of night; I’ve seen the moon orange, copper-colored, and, once, blood red. If you miss one total lunar eclipse due to clouds, don’t worry; there will be more. But a total solar eclipse (i.e. standing in the shadow of the Moon) can only be seen and appreciated if you’re actually in the Moon’s shadow, which affects, in each eclipse, only a tiny fraction of the Earth — and often a rather inaccessible fraction. If you want to see one, you’ll almost certainly have to plan, and travel. My advice: do it.  Meanwhile, good luck with the weather tonight!


Filed under: Astronomy Tagged: astronomy

by Matt Strassler at April 14, 2014 05:08 PM

Peter Coles - In the Dark

Matzo Balls

This evening sees the start of the Jewish Festival of the Passover (Pesach) which made me think of posting this piece of inspired silliness by the legendary Slim Gaillard to wish you all a Chag Sameach.

Slim Gaillard was a talented musician in his own right, but also a wonderful comedian and storyteller. He’s most famous for the novelty jazz acts he formed with musicians such as Slam Stewart and, later, Bam Brown; their stream of consciousness vocals ranged far afield from the original lyrics along with wild interpolations of nonsense syllables such as MacVoutie and O-reeney; one such performance figures in the 1957 novel On the Road by Jack Kerouac.

In later life Slim Gaillard travelled a lot in Europe – he could speak 8 languages in addition to English – and spent long periods living in London. He died there, in fact, in 1991, aged 75. I saw him a few times myself when I used to go regularly to Ronnie Scott’s Club. A tall, gangly man with a straggly white beard and wonderful gleam in his eye, he cut an unmistakeable figure in the bars and streets of Soho. He rarely had to buy himself a drink as he was so well known and such an entertaining fellow that a group always formed around him  in order to enjoy his company whenever he went into a pub. You never quite knew what he was going to do next, in fact. I once saw him sit down and play a piano with his palms facing upwards, striking the notes with the backs of his fingers. Other random things worth mentioning are that Slim Gaillard’s daughter was married to Marvin Gaye and it is generally accepted that the word “groovy” was coined by him (Slim). I know it’s a cliché, but he really was a larger-than-life character and a truly remarkable human being.

They don’t make ‘em like Slim any more, but you can get a good idea of what a blast he was by listening to this record, which is bound to bring a smile even to the  most crabbed of faces….

 

 

 

 

 


by telescoper at April 14, 2014 05:06 PM

Symmetrybreaking - Fermilab/SLAC

CERN's LHCb experiment sees exotic particle

An analysis using LHC data verifies the existence of an exotic four-quark hadron.

Last week, the Large Hadron Collider experiment LHCb published a result confirming the existence of a rare and exotic particle. This particle breaks the traditional quark model and is a “smoking gun” for a new class of hadrons.

The Belle experiment at the KEK laboratory in Japan had previously announced the observation of such a particle, but it came into question when data from sister experiment BaBar at SLAC laboratory in California did not back up the result.

Now scientists at both the Belle and BaBar experiments consider the discovery confirmed by LHCb.

by Sarah Charley at April 14, 2014 05:02 PM

Emily Lakdawalla - The Planetary Society Blog

Pretty picture: Sunset over Gale crater
Imagine yourself on a windswept landscape of rocks and red dust with mountains all around you. The temperature -- never warm on this planet -- suddenly plunges, as the small Sun sets behind the western range of mountains.

April 14, 2014 03:38 PM

arXiv blog

How to Detect Criminal Gangs Using Mobile Phone Data

Law enforcement agencies are turning to social network theory to better understand the behaviors and habits of criminal gangs.


The study of social networks is providing dramatic insights into the nature of our society and how we are connected to one another. So it’s no surprise that law enforcement agencies want to get in on the act.

April 14, 2014 02:00 PM

ZapperZ - Physics and Physicists

Learn Quantum Mechanics From Ellen DeGeneres
Hey, why not? :)



There isn't much "quantum mechanics" in here, though; it's more about black holes and general relativity. Oh well!

Zz.

by ZapperZ (noreply@blogger.com) at April 14, 2014 01:34 PM

ZapperZ - Physics and Physicists

Science Is Running Out Of Things To Discover?
John Horgan is spewing out the same garbage again in his latest opinion piece (and yes, I'm not mincing my words here). His latest lob into this controversy is the so-called evidence that in physics, the time difference between the original work and when the Nobel prize is finally awarded is getting longer, and thus, his point that physics, especially "fundamental physics", is running out of things to discover.

In their brief Nature letter, Fortunato and co-authors do not speculate on the larger significance of their data, except to say that they are concerned about the future of the Nobel Prizes. But in an unpublished paper called "The Nobel delay: A sign of the decline of Physics?" they suggest that the Nobel time lag "seems to confirm the common feeling of an increasing time needed to achieve new discoveries in basic natural sciences—a somewhat worrisome trend."

This comment reminds me of an essay published in Nature a year ago, "After Einstein: Scientific genius is extinct." The author, psychologist Dean Keith Simonton, suggested that scientists have become victims of their own success. "Our theories and instruments now probe the earliest seconds and farthest reaches of the universe," he writes. Hence, scientists may produce no more "momentous leaps" but only "extensions of already-established, domain-specific expertise." Or, as I wrote in The End of Science, "further research may yield no more great revelations or revolutions, but only incremental, diminishing returns."
So, haven't we learned anything from the history of science? The last time people thought we knew all there was to know about an area of physics, and that all we could do was make incremental improvements to our understanding of it, was just before Mother Nature smacked us right in the face with the discovery of high-Tc superconductors in the mid-1980s.

There is a singular problem with this opinion piece. It equates "fundamental physics" with elementary particle/high energy/cosmology/string/etc. This neglects the fact that (i) the Higgs mechanism came out of condensed matter physics, (ii) "fundamental" understanding of various aspects of quantum field theory and other exotica such as Majorana fermions and magnetic monopole are coming out of condensed matter physics, (iii) the so-called "fundamental physics" doesn't have a monopoly on the physics Nobel prizes. It is interesting that Horgan pointed out the time lapse between the theory and Nobel prizes for superfluidity (of He3), but neglected the short time frame between discovery and the Nobel prize for graphene, or high-Tc superconductors.

As we learn more and more, the problems that remain and the new ones that pop up become more and more difficult to decipher and observe. Naturally, this makes confirmation and acceptance at the level of a Nobel prize take longer, both in peer-reviewed evaluation and in time. But this metric does NOT reflect whether we lack things to discover. Anyone who has done scientific research can tell you that as you try to solve something, other puzzling things pop up! I can guarantee you that the act of trying to solve the Dark Energy and Dark Matter problems will provide us with MORE puzzling observations, even if we solve those two. That has always been the pattern in scientific discovery, from the beginning of human beings trying to decipher the world around us! In fact, I would say that we have a lot more things we don't know of now than before, because we have so many amazing instruments that are giving us more puzzling and unexpected things.

Unfortunately, Horgan seems to dismiss whole areas of physics as being unimportant and not "fundamental".

Zz.

by ZapperZ (noreply@blogger.com) at April 14, 2014 01:26 PM

Quantum Diaries

Moriond 2014: new results, new explorations… but no new physics

Even before I left for La Thuile (Italy), results from the Rencontres de Moriond were already filling the news feeds. This year's electroweak session, from 15 to 22 March, opened with the first "world measurement" of the top quark mass, based on the combination of the measurements published so far by the Tevatron and LHC experiments. The week continued with a spectacular CMS result on the width of the Higgs.

Even as it approaches its 50th anniversary, the Moriond conference has stayed at the cutting edge. Despite the growing number of must-attend conferences in high-energy physics, Moriond keeps a special place in the community, partly for historical reasons: the conference has existed since 1966 and has established itself as the place where theorists and experimentalists come to see and be seen. Let's now look at what the LHC experiments had in store for us this year…

New results

This year, the highlight at Moriond was of course the announcement of the best limit to date on the width of the Higgs, < 17 MeV at 95% confidence, presented at both Moriond sessions by the CMS experiment. The new measurement, obtained with a new analysis method based on Higgs decays to two Z particles, is about 200 times more precise than previous ones. Discussion of this limit focused mainly on the new analysis method. What assumptions were needed? Could the same technique be applied to a Higgs decaying to two W bosons? How would this new width influence theoretical models of new physics? We will no doubt find out at Moriond next year…

The announcement of the first joint world result for the top quark mass also generated great excitement. This result, which combines Tevatron and LHC data, is the best value worldwide so far, at 173.34 ± 0.76 GeV/c². Before the excitement had died down at the Moriond QCD session, CMS announced a new preliminary result based on the full dataset collected at 7 and 8 TeV. This result alone has a precision rivalling that of the world average, which clearly shows that we have not yet reached the ultimate precision on the top quark mass.

This plot shows the four top quark mass measurements published by the ATLAS, CDF, CMS and D0 collaborations, together with the most precise measurement to date obtained from the joint analysis.

Other top quark news, including new precise LHC measurements of its spin and polarisation, as well as new ATLAS results on the single top quark cross section in the t-channel, were presented by Kate Shaw on Tuesday 25 March. Run II of the LHC will deepen our understanding of the subject even further.

A fundamental and delicate measurement that probes the nature of the electroweak symmetry breaking brought about by the Brout-Englert-Higgs mechanism is the scattering of two massive vector bosons. This process is rare, but in the absence of the Higgs boson its rate would rise sharply with collision energy, to the point of violating the laws of physics. A hint of electroweak vector boson scattering was detected for the first time by ATLAS in events with two same-sign leptons and two jets with a large rapidity gap.

Building on the growing volume of data and improved analyses, the LHC experiments are tackling rare and difficult multi-particle final states involving the Higgs boson. ATLAS presented an excellent example, with a new result in the search for Higgs production in association with two top quarks, with the Higgs decaying to a pair of b quarks. With an expected limit of 2.6 times the Standard Model prediction for this channel alone and an observed relative signal strength of 1.7 ± 1.4, there are high hopes for the future high-energy LHC run, in which the rate of this process will increase.

Meanwhile, in the world of heavy flavours, the LHCb experiment presented further analyses of the exotic state X(3872). The experiment unambiguously confirmed that its JPC quantum numbers are 1++ and provided evidence for its decay to ψ(2S)γ.

The study of the quark-gluon plasma continues in the ALICE experiment, and discussion focused mainly on results from the LHC's proton-lead (p-Pb) run. In particular, the newly observed "double ridge" in p-Pb collisions is being studied in detail, and analyses of its jet peak, mass distribution and charge dependence were presented.

New explorations

Thanks to our new understanding of the Higgs boson, the LHC has entered the era of precision Higgs physics. Our knowledge of the Higgs properties, for example measurements of its spin and width, has improved, and precise measurements of Higgs interactions and decays have also made good progress. Results on searches for physics beyond the Standard Model were also presented, and the LHC experiments continue to invest heavily in the search for supersymmetry.

Regarding the Higgs sector, many researchers hope to find the supersymmetric cousins of the Higgs and electroweak bosons, called neutralinos and charginos, through electroweak processes. ATLAS presented two new papers summarising multiple searches for these particles. The absence of a significant signal was used to set exclusion limits on charginos and neutralinos of 700 GeV (if they decay via intermediate supersymmetric lepton partners) and 420 GeV (when they decay only via Standard Model bosons).

In addition, for the first time, ATLAS undertook a search for the hardest-to-observe electroweak mode, producing a pair of charginos that decay to W bosons. This mode resembles Standard Model W-pair production, whose currently measured rate appears slightly higher than expected.

In this context, CMS presented new results in the search for electroweak pair production of higgsinos via their decay to a Higgs (at 125 GeV) and a nearly massless gravitino. The final state shows a characteristic signature of four b-quark jets, compatible with double Higgs decay kinematics. A slight excess in the number of candidate events means that the experiment cannot exclude a higgsino signal. Upper limits on the signal strength of about twice the theoretical prediction are set for higgsino masses between 350 and 450 GeV.

In several supersymmetry scenarios, charginos can be metastable and could potentially be detected as long-lived particles. CMS presented an innovative search for generic long-lived charged particles, carried out by mapping the detection efficiency as a function of the particle's kinematics and energy loss in the tracker. This study not only sets strict limits on various supersymmetric models that predict a chargino lifetime (c*tau) greater than 50 cm, but also gives the theory community a powerful tool to independently test new models predicting long-lived charged particles.

To be as general as possible in the search for supersymmetry, CMS also presented the results of new searches in which a large subset of the supersymmetry parameters, such as the gluino and squark masses, are tested for statistical compatibility with different experimental measurements. This made it possible to build a probability map in a 19-dimensional space. Notably, the map shows that models predicting gluino masses below 1.2 TeV and sbottom and stop masses below 700 GeV are strongly disfavoured.

but no new physics

Despite all these painstaking searches, the phrases heard most often at Moriond were "no excess observed" and "consistent with the Standard Model". All hopes now rest on the next LHC run, at 13 TeV. If you would like to know more about the prospects opened up by the LHC's second run, see the CERN Bulletin article "La vie est belle à 13 TeV".

In addition to the many LHC results presented, news was also reported at Moriond from the Tevatron experiments, BICEP, RHIC and other experiments. To find out more, see the conference websites: Moriond EW and Moriond QCD.

by CERN (Francais) at April 14, 2014 01:25 PM

Emily Lakdawalla - The Planetary Society Blog

Interview with a Mars Explorer
A conversation with Dr. Sarah Milkovich, HiRISE Investigation Scientist.

April 14, 2014 01:03 PM

Peter Coles - In the Dark

Awards and Rewards

A surge in the polls for footballer John Brayford of Sheffield United (in the Midlands) has left my dreams of the coveted title of Beard of Spring in ruins. I’m still in second place, but with the leader on 83.7% I think I’ll shortly be writing my concession speech…

Fortunately, however, my disappointment at fading into oblivion in one competition has been more than adequately offset by joy at being awarded a Prize by students from the Department of Physics & Astronomy at the University of Sussex. You could have knocked me down with a feather (had I not been seated) when they announced my name as winner of the award for Best Expressed Research. Here's the trophy:

award

I’m assuming that it’s solid gold, although it’s surprisingly light to carry. I’m not sure where I should store it until next year when presumably it will be handed onto someone else. It did occur to me to send it up to Newcastle United. At least that way they will have something to put in their trophy cabinet…


Anyway, I'd like to thank everyone who voted for me, although I'm still not at all sure what "Best Expressed Research" actually means, nor do I know what I did in particular to deserve the award. Not that any of that really matters. It's honour enough to be working in a Department that's part of a School where there's such a wonderful friendly and cooperative atmosphere between staff and students. I've worked in some good physics departments in my time, but the Department at Sussex is unique, both for the level of support it offers students and for the fact that so many of the undergraduates are so highly motivated. Maybe that's at least partly because there is such a close link between our teaching and research across the Department. Some people think – and some universities would have them think – that research-led teaching only happens in Russell Group institutions. In reality there's plenty of evidence that, at least in Physics, Sussex does research-led teaching better than any of the Russell Group.

Amid all the administrative jobs I have to do these days the opportunity to do a bit of teaching every now and then is the only chance I have of staying even approximately sane. I’m not sure how many other Heads of School at Sussex University do teaching – I’m told my predecessor in the School of Mathematical and Physical Sciences didn’t do any – but the day I have to stop teaching is the day I’ll retire. Teaching students who want to learn is much more than mere waged labour – it’s one of the most rewarding ways there is of spending your time.


by telescoper at April 14, 2014 12:49 PM

Quantum Diaries

On the Shoulders of…

My first physics class wasn’t really a class at all. One of my 8th grade teachers noticed me carrying a copy of Kip Thorne’s Black Holes and Time Warps, and invited me to join a free-form book discussion group on physics and math that he was holding with a few older students. His name was Art — and we called him by his first name because I was attending, for want of a concise term that’s more precise, a “hippie” school. It had written evaluations instead of grades and as few tests as possible; it spent class time on student governance; and teachers could spend time on things like, well, discussing books with a few students without worrying about whether it was in the curriculum or on the tests. Art, who sadly passed some years ago, was perhaps best known for organizing the student cafe and its end-of-year trip, but he gave me a really great opportunity. I don’t remember learning anything too specific about physics from the book, or from the discussion group, but I remember being inspired by how wonderful and crazy the universe is.

My second physics class was combined physics and math, with Dan and Lewis. The idea was to put both subjects in context, and we spent a lot of time working through how to approach problems that we didn't know an equation for. The price of this was less time to cover the full breadth of the subjects; I didn't really learn any electromagnetism in high school, for example.

When I switched to a new high school in 11th grade, the pace changed. There were a lot more things to learn, and a lot more tests. I memorized elements and compounds and reactions for chemistry. I learned calculus and studied a bit more physics on the side. In college, where the physics classes were broad and in depth at the same time, I needed to learn things fast and solve tricky problems too. By now, of course, I’ve learned all the physics I need to know — which is largely knowing who to ask or which books to look in for the things I need but don’t remember.

There are a lot of ways to run schools and to run classes. I really value knowledge, and I think it’s crucial in certain parts of your education to really buckle down and learn the facts and details. I’ve also seen the tremendous worth of taking the time to think about how you solve problems and why they’re interesting to solve in the first place. I’m not a high school teacher, so I don’t think I can tell the professionals how to balance all of those goods, which do sometimes conflict. What I’m sure of, though, is that enthusiasm, attention, and hard work from teachers is a key to success no matter what is being taught. The success of every physicist you will ever see on Quantum Diaries is built on the shoulders of the many people who took the time to teach and inspire them when they were young.

by Seth Zenz at April 14, 2014 12:25 PM

Lubos Motl - string vacua and pheno

Andrei Linde: universe or multiverse?
Some time ago, before the BICEP2 discovery (in July 2012, weeks after the Higgs discovery), Andrei Linde gave an 82-minute talk at SETI, a center to search for ETs.



Because Linde and his theories – even some more specific theories – seem to be greatly vindicated by the BICEP2 announcement, it may be interesting to listen to his more general ideas about the subject. Linde is a pretty entertaining speaker – the audience is laughing often, too.




He starts with jokes about the word "principle" and comments about the cosmological principle, the uniformity principle, the big bang theory, possible global shapes of the universe and fates of the expanďing universe, and so on.




Linde employs plain English – with his cute sofťish Russian accent – to clarify many stupid questions. Why are so many people doing SEŤI? Why is the universe so large? Why is energy not conserved in cosmology?

But he ultimately gets to the multiverse and other controversial topics near the cutting edge. Amusingly enough, Linde mentions Hawking's old proposal to explain the uniformity of the universe anthropically. If it were non-uniform, it would become lethally non-uniform, and we couldn't live here and ask stupid questions. Except that Linde shows that Hawking's explanation doesn't really work and there is a more satisfying one, anyway.

Linde is surprised that the simple solutions for inflation etc. were only understood so recently, 30 years ago or so. He spends some time explaining why the young universe was red (he is from Russia) or black (Henry Ford; I didn't quite understand this remark on Ford LOL, but the final point is that a largely expanded universe looks color-uniform even if it is not). Linde prefers to believe in the multiverse (containing inequivalent vacua) because diversity is more generic.

At the end, he talked about the cosmological observations as a "time machine", the fractal nature of the universe, the cosmological mutation arising from the landscape etc. Some of his humor is childishly cute. The regions of the multiverse are separated not by border patrols but by domain walls and if you are young enough, energetic, and stupid, you go through the wall and die. ;-) Around 53:00, string theory is finally discussed, with the claim that there are 10^500 colors of the universe. KKLT. Users of iPhone are parts of the silicon life created in the Silicon Valley.

Guth made a comment about the free lunch, and Linde, as a Soviet man, was deeply impressed by free lunches. So he recast inflation as the eternal feast where all possible dishes are served. ;-)

During the talk, Linde says lots of philosophical things about verification of theories etc. He knew inflation was right but he didn't expect that proofs would be found. So he was amazed by the experimenters. Concerning the "unfalsifiability" claims, he debunks them by saying that not even the U.S. courts work in this way. For example, a man suspected of murdering his wife is not given a new wife and a knife to try repeatedly whether he would kill her again. ;-) They just eliminate options and release a verdict. But reasoning doesn't require repeatable experiments.

Around 1:10:00, he spends some time with funny musings about Einstein's "the most incomprehensible thing about the Universe is that it is comprehensible", Wigner's "incredibly efficient mathematics", and some comments about the unexpectedly inefficient biology. Those things are explained anthropically as tautologies, too. Physicists can't exist at places where physics doesn't work etc. That's nice except that millions of things we have already understood also have a better, less tautological, more unequivocal, and more nontrivial explanation, and the same may be true for many currently unexplained patterns in Nature, too.

Questions begin at 1:12:55. Someone is puzzled whether Linde is for or against the anthropic reasoning. He is against the non-inflationary anthropic arguments. In inflation, things are different. He says that 10^500 options is much better than the single 1 candidate on the Soviet ballots. ;-) In the second question, he explains that we know the theory of structure formation that produces the right filaments etc.; the small non-flatness of the spectrum is important in that, too. Someone with a seemingly similar Russian accent asks whether the initial wave function of the universe applies just to our universe or the whole multiverse. I think that Linde didn't understand the question so he talked about the many-worlds interpretation of quantum mechanics (just an interpretation, not a key insight etc.; MWI ignores the key role of conscious observers in QM, and so on; I completely agree with Linde here, even though he is answering a wrong question). The man asks the question whether entanglement between particles in 2 universes can exist. Linde says it can but he says it can exist on 2 islands. However, the entanglement behind the cosmic horizon may be unphysical due to the cosmic horizon complementarity principle, I would add.

At any rate, a fun talk.

by Luboš Motl (noreply@blogger.com) at April 14, 2014 11:44 AM

Tommaso Dorigo - Scientificblogging

Aldo Menzione And The Design Of The Silicon Vertex Detector
Below is a clip from a chapter of my book where I describe the story of the silicon microvertex detector of the CDF experiment. CDF collected proton-antiproton collisions from the Tevatron collider in 1985, 1987-88, 1992-96, and 2001-2011. Run 1A occurred in 1992, and it featured, for the first time at a hadron collider, a silicon strip detector, the SVX. The SVX would prove crucial for the discovery of the top quark.

read more

by Tommaso Dorigo at April 14, 2014 09:21 AM

John Baez - Azimuth

What Does the New IPCC Report Say About Climate Change? (Part 5)

guest post by Steve Easterbrook

(5) Current rates of ocean acidification are unprecedented.

The IPCC report says:

The pH of seawater has decreased by 0.1 since the beginning of the industrial era, corresponding to a 26% increase in hydrogen ion concentration. [...] It is virtually certain that the increased storage of carbon by the ocean will increase acidification in the future, continuing the observed trends of the past decades. [...] Estimates of future atmospheric and oceanic carbon dioxide concentrations indicate that, by the end of this century, the average surface ocean pH could be lower than it has been for more than 50 million years.

(Fig SPM.7c) CMIP5 multi-model simulated time series from 1950 to 2100 for global mean ocean surface pH. Time series of projections and a measure of uncertainty (shading) are shown for scenarios RCP2.6 (blue) and RCP8.5 (red). Black (grey shading) is the modelled historical evolution using historical reconstructed forcings. [The numbers indicate the number of models used in each ensemble.]

Ocean acidification has sometimes been ignored in discussions about climate change, but it is a much simpler process, and is much easier to calculate (notice the uncertainty range on the graph above is much smaller than most of the other graphs). This graph shows the projected acidification in the best and worst case scenarios (RCP2.6 and RCP8.5). Recall that RCP8.5 is the “business as usual” future.

Note that this doesn’t mean the ocean will become acid. The ocean has always been slightly alkaline—well above the neutral value of pH 7. So “acidification” refers to a drop in pH, rather than a drop below pH 7. As this continues, the ocean becomes steadily less alkaline. Unfortunately, as the pH drops, the ocean stops being supersaturated for calcium carbonate. If it’s no longer supersaturated, anything made of calcium carbonate starts dissolving. Corals and shellfish can no longer form. If you kill these off, the entire ocean food chain is affected. Here’s what the IPCC report says:

Surface waters are projected to become seasonally corrosive to aragonite in parts of the Arctic and in some coastal upwelling systems within a decade, and in parts of the Southern Ocean within 1–3 decades in most scenarios. Aragonite, a less stable form of calcium carbonate, undersaturation becomes widespread in these regions at atmospheric CO2 levels of 500–600 ppm.
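
The 26% figure in the first quote follows directly from the definition of pH as the negative base-10 logarithm of the hydrogen ion concentration. Here is a minimal Python check; the 0.3 drop in the last line is only an illustrative value for a large end-of-century decline, not a number taken from the report.

    # pH = -log10([H+]), so a decrease of dpH multiplies [H+] by 10**dpH.
    def h_ion_increase(delta_ph):
        """Fractional increase in [H+] when pH falls by delta_ph."""
        return 10 ** delta_ph - 1

    print(f"pH drop of 0.1 -> {h_ion_increase(0.1):.0%} more hydrogen ions")  # ~26%
    print(f"pH drop of 0.3 -> {h_ion_increase(0.3):.0%} more hydrogen ions")  # illustrative only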


You can download all of Climate Change 2013: The Physical Science Basis here. It’s also available chapter by chapter here:

  1. Front Matter
  2. Summary for Policymakers
  3. Technical Summary
    1. Supplementary Material

Chapters

  1. Introduction
  2. Observations: Atmosphere and Surface
    1. Supplementary Material
  3. Observations: Ocean
  4. Observations: Cryosphere
    1. Supplementary Material
  5. Information from Paleoclimate Archives
  6. Carbon and Other Biogeochemical Cycles
    1. Supplementary Material
  7. Clouds and Aerosols

    1. Supplementary Material
  8. Anthropogenic and Natural Radiative Forcing
    1. Supplementary Material
  9. Evaluation of Climate Models
  10. Detection and Attribution of Climate Change: from Global to Regional
    1. Supplementary Material
  11. Near-term Climate Change: Projections and Predictability
  12. Long-term Climate Change: Projections, Commitments and Irreversibility
  13. Sea Level Change
    1. Supplementary Material
  14. Climate Phenomena and their Relevance for Future Regional Climate Change
    1. Supplementary Material

Annexes

  1. Annex I: Atlas of Global and Regional Climate Projections
    1. Supplementary Material: RCP2.6, RCP4.5, RCP6.0, RCP8.5
  2. Annex II: Climate System Scenario Tables
  3. Annex III: Glossary
  4. Annex IV: Acronyms
  5. Annex V: Contributors to the WGI Fifth Assessment Report
  6. Annex VI: Expert Reviewers of the WGI Fifth Assessment Report

by John Baez at April 14, 2014 07:56 AM

Andrew Jaffe - Leaves on the Line

Academic Blogging Still Dangerous?

Nearly a decade ago, blogging was young, and its place in the academic world wasn’t clear. Back in 2005, I wrote about an anonymous article in the Chronicle of Higher Education, a so-called “advice” column admonishing academic job seekers to avoid blogging, mostly because it let the hiring committee find out things that had nothing whatever to do with their academic job, and reject them on those (inappropriate) grounds.

I thought things had changed. Many academics have blogs, and indeed many institutions encourage it (here at Imperial, there’s a College-wide list of blogs written by people at all levels, and I’ve helped teach a course on blogging for young academics). More generally, outreach has become an important component of academic life (that is, it’s at least necessary to pay it lip service when applying for funding or promotions) and blogging is usually seen as a useful way to reach a wide audience outside of one’s field.

So I was distressed to see the lament — from an academic blogger — “Want an academic job? Hold your tongue”. Things haven’t changed as much as I thought:

… [A senior academic said that] the blog, while it was to be commended for its forthright tone, was so informal and laced with profanity that the professor could not help but hold the blog against the potential faculty member…. It was the consensus that aspiring young scientists should steer clear of such activities.

Depending on the content of the blog in question, this seems somewhere between a disregard for academic freedom and a judgment of the candidate on completely irrelevant grounds. Of course, it is natural to want the personalities of our colleagues to mesh well with our own, and almost impossible to completely ignore supposedly extraneous information. But we are hiring for academic jobs, and what should matter are research and teaching ability.

Of course, I’ve been lucky: I already had a permanent job when I started blogging, and I work in the UK system which doesn’t have a tenure review process. And I admit this blog has steered clear of truly controversial topics (depending on what you think of Bayesian probability, at least).

by Andrew at April 14, 2014 06:26 AM

April 13, 2014

Christian P. Robert - xi'an's og

thumbleweed [local] news

It has been exactly a year since my climbing accident and the loss of my right thumb. Time for a quick recap (for anyone still interested!): Looking back over that thumbless year, I cannot see a significant impact over my daily life: I can essentially operate the same way as before, from climbing to cooking, from writing to biking, from driving to skiing, and the few inconveniences I experience are quite minor. Not large enough to rely on the prosthesis I received a few months ago. I do not particularly suffer from the right hand in cold (or hot) weather and my ice-climbing trip in Banff last month showed I could stand temperatures of -30⁰C with no difference from the past. I never experience “phantom thumb” sensations and rarely notice people taking stock of my missing thumb. Hence, while it has been an annoying accident that has disrupted our lives for a few weeks, the long term consequences are clearly minimal.


Filed under: Mountains, Running, Travel Tagged: amputation, anniversary, Banff, climbing accident, indoor climbing, prosthesis, thumb

by xi'an at April 13, 2014 10:14 PM

Clifford V. Johnson - Asymptotia

Participatory Art!
As you know I am a big fan of sketching, and tire easily of the remark people make that they "can't draw" - an almost meaningless thing society trains most people to think and say, with the result that they miss out on a most wonderful part of life. Sketching is a wonderful way of slowing down and really looking at the world around you and recording your impressions. If you can write you certainly can draw. It is a learned skill, and the most important part of it is learning to look*. But anyway, I was pleased to see this nice way of getting people of all ages involved in sketching for fun at the book festival! So I reached in and grabbed a photo for you. [...] Click to continue reading this post

by Clifford at April 13, 2014 09:14 PM

Clifford V. Johnson - Asymptotia

Young Author Meeting!
It is nice to see the variety of authors at a book fair event like this one, and it's great to see people's enthusiasm about meeting people who've written works they've spent a lot of time with. The long lines for signings are remarkable! As you might guess, I'm very much a supporter of the unsung authors doing good work in their own small way, not anywhere near the spotlight. An interesting booth caught my notice as I was wandering... The word "science" caught my eye. Seems that a mother and daughter team wrote a science book to engage children to become involved in science... Hurrah! So Jalen Langie (the daughter, amusingly wearing a lab coat) gets to be [...] Click to continue reading this post

by Clifford at April 13, 2014 06:08 PM

The Great Beyond - Nature blog

IPCC report calls for climate mitigation action now, not later

The world is heading towards possibly dangerous levels of global warming despite increasing efforts to promote the transition to a low-carbon economy, the Intergovernmental Panel on Climate Change (IPCC) warns in its latest report today.

As the concentration of greenhouse gases in the atmosphere continues to rise to unprecedented levels, the group says only major institutional and technological change will give the world a better than even chance of staying below 2 °C warming – the widely accepted threshold to dangerous climate change. Stabilizing greenhouse gas concentrations at 450 parts per million CO2 equivalent – a level which scientists think is needed to limit warming to 2 °C – will require a three- to four-fold increase in the share of low-carbon energies, such as renewables and nuclear, in the global power mix. Improvements in energy efficiency and, possibly, the use of carbon capture and storage technology will be needed to assist the process, the IPCC says.

The report was produced by the IPCC’s Working Group III, which has been tasked with looking into the mitigation of climate change. Its 33-page Summary for Policymakers was approved, line by line, by hundreds of IPCC authors and representatives of 195 governments over the past week in Berlin. Launching the report at a presentation in the city, Ottmar Edenhofer, the co-chair of the working group, admitted the discussions were at times nerve-rackingly tense.

To assess the options, costs and possible adverse side-effects of different pathways to stabilizing emissions at safe levels, the 235 lead authors of the report analysed close to 1,200 scenarios of socioeconomic development and cited almost 10,000 scientific papers. The resulting work, although phrased in rather technical language, is unambiguous in its message that the challenge of climate change is mounting as time proceeds.

“Global emissions have increased despite the recent economic crisis and remarkable mitigation efforts by some countries,” Edenhofer says. “Economic growth and population growth have outpaced improvements in energy efficiency – and since the turn of the century coal has become competitive again in many parts of the world.”

The report makes clear that it would be wise to act now rather than later. But, in line with the IPCC’s mandate to be policy-neutral, it includes no specific recommendations as to the energy and related policies that individual countries should follow.

“Substantial investment in clean energies is needed in all sectors of the global economy, including in some parts of the world in nuclear power,” says Edenhofer. “But it would be inappropriate for the IPCC to prescribe reduction targets or energy policies to specific countries.”

Doing nothing is not an option, he says. In a business-as-usual scenario run without meaningful mitigation policies, greenhouse gas concentrations double by the end of the century, the working group found. This would result in global warming of 4 °C to 5 °C above the pre-industrial (1750) level, with possibly dramatic consequences for natural systems and human welfare.

Mitigating climate change would lead to a roughly 5% reduction in global consumption, according to the report. But, says Edenhofer, this does not mean that the world has to sacrifice economic growth. In fact, the group found that action to keep temperature rises at bay would reduce global economic growth by no more than 0.06% per year. This figure excludes the benefits of climate mitigation, such as from better air quality and health, which are thought to lower the actual costs of mitigation.
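
As a rough consistency check of how those two numbers fit together, here is a minimal Python sketch; the horizon to 2100 and the simple compounding are my assumptions for the illustration, not the report's methodology.

    # Shaving ~0.06 percentage points off annual consumption growth,
    # compounded out to 2100, costs roughly 5% of consumption.
    annual_growth_penalty = 0.0006   # 0.06 percentage points per year
    years = 2100 - 2014              # assumed horizon for this illustration

    relative_loss = 1 - (1 - annual_growth_penalty) ** years
    print(f"Consumption roughly {relative_loss:.1%} lower by 2100")  # ~5%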

The full report outlines in great detail over 16 chapters the emission reduction potential of sectors including energy production and use, industry, transport and building and land use, and describes how mitigation efforts in one sector determine the needs in others. The IPCC has also assessed the potential of carbon capture and storage technology, which it says would be essential for achieving low-stabilization targets. More ambitious geoengineering possibilities, such as proposals to deliberately reduce the amount of sunlight reaching the Earth’s surface, have not been assessed in the report.

“There is a whole portfolio of mitigation options that can be combined in ways that meet the political priorities of individual countries,” says Edenhofer. “The means to tackle the problem exist, but we need to use them.”

Effective climate mitigation, adds Rajendra Pachauri, the chairman of the IPCC, will not be achieved if individual nations and agents advance their own interests independently. Nations hope to agree on binding emission reduction targets at a United Nations climate meeting in 2015 in Paris.

Delaying action is getting increasingly risky and will only lead to tougher requirements and higher costs at a later stage, says Pachauri.

“We haven’t done nearly enough yet,” he says. “A high-speed mitigation train needs to leave the station soon and all of global society needs to get on board.”

 

by Quirin Schiermeier at April 13, 2014 03:42 PM

Geraint Lewis - Cosmic Horizons

Gravitational lensing in WDM cosmologies: The cross section for giant arcs
We've had a pretty cool paper accepted for publication in the Monthly Notices of the Royal Astronomical Society  which tackles a big question in astronomy, namely what is the temperature of dark matter. Huh, you might say "temperature", what do you mean by "temperature"? I will explain.

The paper is by Hareth Mahdi, a PhD student at the Sydney Institute for Astronomy. Hareth's expertise is in gravitational lensing, using the huge amounts of mass in galaxy clusters to magnify the view of the distant Universe. Gravitational lenses are amongst the most beautiful things in all of astronomy. For example:
Working out how strong the lensing effect is reveals the amount of mass in the cluster, showing that there is a lot of dark matter present.

Hareth's focus is not "real" clusters, but clusters in "synthetic" universes, universes we generate inside supercomputers. The synthetic universes look as nice as the real ones; here's one someone made earlier (than you Blue Peter).

 Of course, in a synthetic universe, we control everything, such as the laws of physics and the nature of dark matter.

Dark matter is typically treated as being cold, meaning that the particles that make up dark matter move at speeds much lower than the speed of light. But we can also consider hot dark matter, which travels at speeds close to the speed of light, or warm dark matter, which moves at speeds somewhere in between.

What's the effect of changing the temperature of dark matter? Here's an illustration
With cold at the top, warmer in the middle, and hottest at the bottom. And what you can see is that as we wind up the temperature, the small-scale structure in the cluster gets washed out. Some think that warm dark matter might be the solution to the missing satellite problem.

Hareth had two samples of clusters, some from cold dark matter universes and some from warm, and he calculated the strength of gravitational lensing in both. The goal is to see if changing to warm dark matter can help fix another problem in astronomy, namely that the clusters we observe seem to be more efficient at producing lensed images than the ones we have in our simulated universes.

We can get some pictures of the lensing strengths of these clusters, which look like this
This shows the mass distributions in cold dark matter universes, with a corresponding cluster in the warm dark matter universe. Because the simulations were set up with similar initial conditions, these are the same clusters seen in the two universes.

You can already see that there are some differences, but what about lensing efficiency? There are a few ways to characterise this, but one way is the cross-section for lensing. When we compare the two cosmologies, we get the following:

There is a rough one-to-one relationship, but notice that the warm dark matter clusters sit mainly above the black line. This means that the warm dark matter clusters are more efficient at lensing than their cold dark matter colleagues.
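
As a purely hypothetical sketch of the comparison being described (the arrays are made-up placeholders, not Hareth's measured cross sections), counting how often the warm dark matter member of a matched pair is the more efficient lens could look like this:

    import numpy as np

    # Placeholder lensing cross sections (arbitrary units) for matched cluster
    # pairs; real values would come from the ray-tracing analysis in the paper.
    sigma_cdm = np.array([0.8, 1.5, 2.1, 0.6, 3.0])
    sigma_wdm = np.array([0.9, 1.7, 2.0, 0.8, 3.4])

    frac_above = np.mean(sigma_wdm > sigma_cdm)  # fraction of points above the y = x line
    print(f"WDM more efficient in {frac_above:.0%} of matched pairs")
    print(f"Median WDM/CDM boost: {np.median(sigma_wdm / sigma_cdm):.2f}x")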

This is actually an unexpected result. Naively, we would expect warm dark matter to remove structure and make clusters puffy, and hence less efficient at lensing. So what is happening?

It took a bit of detective work, but we tracked it down. Yes, in warm dark matter clusters, the small-scale structure is wiped out, but where does the mass go? It actually goes into the larger mass haloes, making them more efficient at lensing. Slightly bizarre, but it does mean that, if we can measure enough real clusters, we could have a test of the temperature of dark matter!

But alas, even though the lensing efficiency is higher with warm dark matter, the increase is not large enough to fix the lensing efficiency problem. As ever, there is more work to do, and I'll report it here.

Until then, well done Hareth!

Gravitational lensing in WDM cosmologies: The cross section for giant arcs

The nature of the dark sector of the Universe remains one of the outstanding problems in modern cosmology, with the search for new observational probes guiding the development of the next generation of observational facilities. Clues come from tension between the predictions from ΛCDM and observations of gravitationally lensed galaxies. Previous studies showed that galaxy clusters in the ΛCDM are not strong enough to reproduce the observed number of lensed arcs. This work aims to constrain the warm dark matter cosmologies by means of the lensing efficiency of galaxy clusters drawn from these alternative models. The lensing characteristics of two samples of simulated clusters in the warm dark matter (ΛWDM) and cold dark matter (ΛCDM) cosmologies have been studied. The results show that even though the CDM clusters are more centrally concentrated and contain more substructures, the WDM clusters have slightly higher lensing efficiency than their CDM counterparts. The key difference is that WDM clusters have more extended and more massive subhaloes than CDM analogues. These massive substructures significantly stretch the critical lines and caustics and hence they boost the lensing efficiency of the host halo. Despite the increase in the lensing efficiency due to the contribution of massive substructures in the WDM clusters, this is not enough to resolve the arc statistics problem.

by Cusp (noreply@blogger.com) at April 13, 2014 01:02 AM

April 11, 2014

Emily Lakdawalla - The Planetary Society Blog

Intro Astronomy Class 9: Titan, Uranus and Neptune Systems
Examine Saturn's moon Titan and explore the Uranian and Neptunian systems in this video of class 9 of Bruce Betts' Introduction to Planetary Science and Astronomy class.

April 11, 2014 07:31 PM

Emily Lakdawalla - The Planetary Society Blog

Intro Astronomy 2014. Class 8: Icy Galilean Satellites, Saturn System
Explore the icy moons of the Jupiter System and tour the Saturnian system in this video of class 8 of Bruce Betts' Introduction to Planetary Science and Astronomy class.

April 11, 2014 07:28 PM

John Baez - Azimuth

What Does the New IPCC Report Say About Climate Change? (Part 4)

guest post by Steve Easterbrook

(4) Most of the heat is going into the oceans

The oceans have a huge thermal mass compared to the atmosphere and land surface. They act as the planet’s heat storage and transportation system, as the ocean currents redistribute the heat. This is important because if we look at the global surface temperature as an indication of warming, we’re only getting some of the picture. The oceans act as a huge storage heater, and will continue to warm up the lower atmosphere (no matter what changes we make to the atmosphere in the future).

(Box 3.1 Fig 1) Plot of energy accumulation in ZJ (1 ZJ = 10^21 J) within distinct components of Earth’s climate system relative to 1971 and from 1971–2010 unless otherwise indicated. See text for data sources. Ocean warming (heat content change) dominates, with the upper ocean (light blue, above 700 m) contributing more than the deep ocean (dark blue, below 700 m; including below 2000 m estimates starting from 1992). Ice melt (light grey; for glaciers and ice caps, Greenland and Antarctic ice sheet estimates starting from 1992, and Arctic sea ice estimate from 1979–2008); continental (land) warming (orange); and atmospheric warming (purple; estimate starting from 1979) make smaller contributions. Uncertainty in the ocean estimate also dominates the total uncertainty (dot-dashed lines about the error from all five components at 90% confidence intervals).

Note the relationship between this figure (which shows where the heat goes) and the figure from Part 2 that showed change in cumulative energy budget from different sources:

(Box 13.1 fig 1) The Earth’s energy budget from 1970 to 2011. Cumulative energy flux (in zettajoules) into the Earth system from well-mixed and short-lived greenhouse gases, solar forcing, changes in tropospheric aerosol forcing, volcanic forcing and surface albedo, (relative to 1860–1879) are shown by the coloured lines and these are added to give the cumulative energy inflow (black; including black carbon on snow and combined contrails and contrail induced cirrus, not shown separately).

Both graphs show zettajoules accumulating over about the same period (1970-2011). But the graph from Part 2 has a cumulative total just short of 800 zettajoules by the end of the period, while today’s new graph shows the earth storing “only” about 300 zettajoules of this. Where did the remaining energy go? Because the earth’s temperature rose during this period, it also lost increasingly more energy back into space. When greenhouse gases trap heat, the earth’s temperature keeps rising until outgoing energy and incoming energy are in balance again.
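
The bookkeeping behind that paragraph, as a minimal sketch; the 800 and 300 zettajoule figures are simply the approximate values read off the two graphs above.

    # Cumulative energy flows 1970-2011, in zettajoules, read off the two
    # IPCC figures discussed above (approximate values).
    energy_in = 800       # cumulative inflow from forcings (Box 13.1 Fig 1)
    energy_stored = 300   # stored in ocean, ice, land and atmosphere (Box 3.1 Fig 1)

    radiated_back = energy_in - energy_stored
    print(f"Roughly {radiated_back} ZJ radiated back to space as the planet warmed")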


You can download all of Climate Change 2013: The Physical Science Basis here. It’s also available chapter by chapter here:

  1. Front Matter
  2. Summary for Policymakers
  3. Technical Summary
    1. Supplementary Material

Chapters

  1. Introduction
  2. Observations: Atmosphere and Surface
    1. Supplementary Material
  3. Observations: Ocean
  4. Observations: Cryosphere
    1. Supplementary Material
  5. Information from Paleoclimate Archives
  6. Carbon and Other Biogeochemical Cycles
    1. Supplementary Material
  7. Clouds and Aerosols

    1. Supplementary Material
  8. Anthropogenic and Natural Radiative Forcing
    1. Supplementary Material
  9. Evaluation of Climate Models
  10. Detection and Attribution of Climate Change: from Global to Regional
    1. Supplementary Material
  11. Near-term Climate Change: Projections and Predictability
  12. Long-term Climate Change: Projections, Commitments and Irreversibility
  13. Sea Level Change
    1. Supplementary Material
  14. Climate Phenomena and their Relevance for Future Regional Climate Change
    1. Supplementary Material

Annexes

  1. Annex I: Atlas of Global and Regional Climate Projections
    1. Supplementary Material: RCP2.6, RCP4.5, RCP6.0, RCP8.5
  2. Annex II: Climate System Scenario Tables
  3. Annex III: Glossary
  4. Annex IV: Acronyms
  5. Annex V: Contributors to the WGI Fifth Assessment Report
  6. Annex VI: Expert Reviewers of the WGI Fifth Assessment Report

by John Baez at April 11, 2014 06:01 PM

The Great Beyond - Nature blog

Shorter list for gamma-ray telescope sites, but no home yet

Concept illustration of Cherenkov Telescope Array

Where will the world’s next generation ground-based γ-ray detector, the Cherenkov Telescope Array (CTA), be built? No one yet knows. But a panel of funders have narrowed the field slightly, following a meeting in Munich, Germany, this week.

Scientists had originally hoped to select two sites — a large one in the Southern Hemisphere and a smaller one in the North — by the end of 2013. But the selection process for the €200-million ($276-million) project has taken longer than originally foreseen.

At a meeting on 10 April, representatives from 12 government ministries narrowed the potential southern sites from five to two: Aar, a site in Southern Namibia; and Armazones, in Chile’s Atacama desert. They also picked a reserve site in Argentina.

The committee, a panel of representatives from Argentina, Austria, Brazil, France, Germany, Italy, Namibia, Poland, Spain, South Africa, Switzerland and the United Kingdom, decided that all four possible northern sites — in Mexico, Spain and the United States — needed further analysis. A statement from the board said that a final site decision will happen “as soon as possible”.

If the CTA is built, its two sites will contain around 120 telescopes, which will look for the faint blue light emitted when very-high-energy photons slam into Earth’s atmosphere and create cascades of particles. By triangulating the data from various detectors, astrophysicists hope to piece together the energy and path of such photons. This should help them not only identify the sources of the γ-rays — extreme environments such as supermassive black holes — but also answer fundamental questions about dark matter and quantum gravity.

Like many astronomy projects, the best site for the CTA would be a high-altitude, remote location with clear skies. But the site decision must also take into account environmental risks, such as earthquakes and high winds, and projected operational costs. How much each host country would be prepared to contribute is also a factor.

Last year, an evaluation by representatives of the CTA’s 1,000-strong consortium rated Aar in Southern Namibia as the best southern site, which would contain 99 telescopes spread out over 10 square kilometres. Two sites tied for second: another Namibian site, which already hosts the High Energy Stereoscopic System (HESS) γ-ray telescope; and Armazones, where the European Southern Observatory already has a base and plans to build the European Extremely Large Telescope. The group equally ranked the four contenders for the northern site, which would be a 19-telescope array spread out over one square kilometre. Mexico is already building the High-Altitude Water Cherenkov Observatory (HAWC), a γ-ray observatory of a different type.

Although the consortium’s ranking was based largely on the science case and observing conditions, the latest decision follows the report of an external site selection committee, which also took into account political and financial factors. Further decisions will rest on detailed negotiations, including host country contributions and tax exemptions at the various sites.

The CTA now aims to pick a final southern site by the end of the year. Board chair Beatrix Vierkorn-Rudolph, of Germany’s Federal Ministry of Education and Research, told Nature it was not yet clear whether the same will be possible for the northern site.

by Elizabeth Gibney at April 11, 2014 03:46 PM

astrobites - astro-ph reader's digest

Mugged by a Passing Star

Fig 1: (Top) Face-on view of a simulated disk after an encounter with a mass ratio of 1 and a periastron distance of 2 r_init. The vertical lines show the measured size of the disk. (Bottom) Initial (thin) and final (thick) particle density (solid lines) and mass density (dashed lines) of the disk. The vertical line shows the point of steepest mass density decrease. (From Breslau et al. 2014)

Close Encounters of the Stellar Kind

Stars often form in clusters, where the chances of a close encounter with another star are high. “Close encounter” just means that the passing star comes close enough that its gravity significantly affects the motion of the other star. The distance between the two stars at their closest point during the encounter is called the periastron distance, and these “close” encounters can occur at periastron distances of tens or even hundreds of AU.

The protoplanetary disk of gas and dust that surrounds a young star can be affected by the gravity of the passing star and will often end up smaller than it was before the encounter. For example, the density of material in our own Solar System drops off greatly past about 30 AU. Since the Sun formed in a cluster, this truncation might have been caused by a stellar encounter after the Sun’s disk had formed but before the cluster dispersed. According to previous models, the other star needed to come within about 100 AU to truncate the Sun’s disk at 30 AU.

This disk truncation scenario has been studied by various theorists in the past, who modeled what happens to a disk during an encounter between equal-mass stars and found that disks are typically truncated to about 1/2-1/3 the periastron distance. The authors of this paper wanted to explore a much larger parameter space and determine what relationship the final disk size has to the mass ratio of the two stars and the periastron distance.

Simulations and Results

The authors ran over 100 N-body simulations of a disk of particles experiencing an encounter with another star. Each simulation had a different stellar mass ratio and periastron distance. The authors restricted their parameters to ensure their systems were realistic. For example, they chose from a range of mass ratios based on values estimated by observing young dense clusters in our solar neighborhood. They also chose periastron distances that were small enough that the disks were perturbed but not so close that the disks were destroyed, because unaffected or completely absent disks wouldn’t be very helpful to their investigation.

Disk edges are fuzzy and don’t have clear cutoffs, so the authors had to define what they meant by ‘disk size’ before they could measure it. For this paper, they defined the disk radius as the distance from the star at which they measured the steepest drop in mass surface density (see Fig 1). You can see the resulting measurements of final disk size vs. periastron distance for several different mass ratios and different initial disk sizes in Fig 2.
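
A minimal NumPy sketch of that disk-size definition, assuming you already have a surface density profile on a grid of radii; the profile below is a made-up toy with a soft edge near 100 AU, not the simulation output.

    import numpy as np

    # Toy surface density profile: roughly flat disk with a soft outer edge at ~100 AU.
    r = np.linspace(1, 200, 400)                     # radius in AU
    sigma = 1.0 / (1.0 + np.exp((r - 100.0) / 5.0))  # arbitrary units

    # "Disk radius" = radius of the steepest drop in surface density,
    # i.e. where d(sigma)/dr is most negative.
    r_disk = r[np.argmin(np.gradient(sigma, r))]
    print(f"Measured disk radius: {r_disk:.0f} AU")  # ~100 AU for this toy profile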

From their simulation results, the authors derived the following simple relationship between final disk size (r_disk), stellar mass ratio (m_12), and periastron distance (r_peri):

r_disk = 0.28 * m_12^(-0.32) * r_peri

Notice that according to these results, the final disk radius does not depend on the initial disk radius! The only caveat is that the disks in these simulations never grow in size. For example, the equation above might predict that the final disk size should be 150 AU, but if the initial disk size was 100 AU, it will still be 100 AU at the end of the simulation.
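
Here is a minimal Python sketch of the fitted relation, with the clamp encoding the caveat that disks never grow in these simulations; the function name and the example numbers are mine, not the paper's.

    def truncated_disk_radius(r_init, m12, r_peri):
        """Final disk radius (same units as r_init and r_peri) after a flyby.

        Uses the fit r_disk = 0.28 * m12**(-0.32) * r_peri, clamped so the
        disk never ends up larger than it started.
        """
        return min(r_init, 0.28 * m12 ** (-0.32) * r_peri)

    # An equal-mass encounter (m12 = 1) at 100 AU truncates a 100 AU disk to ~28 AU,
    # consistent with the Solar System scenario discussed in the text.
    print(truncated_disk_radius(r_init=100.0, m12=1.0, r_peri=100.0))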

Fig 2: Final disk radius vs. periastron distance for simulated disks with initial radii of 100 AU (black squares) and 200 AU (white diamonds) for different mass ratios. (From Breslau et al. 2014)

What Does It All Mean?

Disks are truncated through gravitational perturbations in two ways. Some dust is removed by the passing star, but other dust  simply moves inwards as its angular momentum is transferred to the passing star. The authors show that this second effect is more significant than mass removal in truncating disks by noting that some of their simulated disks got smaller with no loss of mass. They also compared their results with the commonly-used approximations from previous equal-mass star studies and found that these approximations tend to overestimate the size of the truncated disk. However, the authors did confirm previous results that a disk like the Solar System could be truncated to 30 AU with a 100 AU periastron encounter with an equal-mass star, supporting the theory that the Solar System may have been affected by a stellar encounter. They also show that a larger or smaller mass star could be the culprit depending on its periastron distance.

The new ALMA observatory will let us measure disk sizes to much greater accuracy and allow us to study their dependence on mass ratio and periastron distance. This paper will prove very helpful in understanding those results and the encounter history of observed disks.

 

by Erika Nesvold at April 11, 2014 02:35 PM

arXiv blog

The Forecasting Challenge for Power Networks of the Future

The energy-efficient power networks of the future will require entirely new ways of forecasting demand on the scale of individual households. That won’t be easy.

April 11, 2014 02:00 PM

CERN Bulletin

ELENA gets a roof over its head

Today, Friday 11 April, CERN inaugurated the ELENA building (393) after less than a year's construction work.

 

Tacked on to the side of the Antiproton Decelerator (AD), this building will soon house a cleaning room, workshops and generators for the kickers in order to free space in the AD hall, where the future Extra Low ENergy Antiproton ring, ELENA, will be installed.

“Today we’re celebrating the completion of a project which, I’m happy to say, has gone very well,” exclaims François Butin, technical coordinator of the ELENA project (EN-MEF Group). “The deadlines and budgets have been perfectly respected and the building fully complies with our specifications. A great vote of thanks to GS-SE and the outside contractors who have enabled us to complete this project.”

Some 10,000 tonnes of earth had to be moved, requiring around 500 truckloads. The presence of the TT2 transfer tunnel directly beneath the building posed a number of technical challenges. An 800-mm-thick shielding slab was installed to protect the building from radiation.

Christian Carli, ELENA Project Leader (BE-ABP Group), adds: "The installation of the ELENA machine is approaching fast. The project's Technical Design Report has just been published and the work is progressing well, including on the transfer-line side.” ELENA’s magnetic deceleration ring, 30 m in circumference, will be installed in the AD hall mid-2015 and its research programme should begin two years later.


For more information on the ELENA project, read our article in the Bulletin issue 26-27/2012.

April 11, 2014 01:04 PM

Symmetrybreaking - Fermilab/SLAC

Tufte’s Feynman sculptures come to Fermilab

Edward Tufte, celebrated statistician and master of informational graphics, transforms physics notations into works of art.

If you ask a physicist how particles interact and you have a drawing surface handy, the explanation will likely come in the form of a series of lines, arrows, squiggles and loops.

These drawings, called Feynman diagrams, help organize a calculation. They represent the mathematical formulas of how particles interact, beginning to end, and also the rate at which the interaction happens.

A new exhibit at Fermi National Accelerator Laboratory examines the beauty and simplicity of this shorthand. 

by Amanda Solliday at April 11, 2014 01:00 PM

Axel Maas - Looking Inside the Standard Model

News about the state of the art
Right now, I am at a workshop in Benasque, Spain. The workshop is called 'After the Discovery: Hunting for a non-standard Higgs Sector'. The topic is essentially this: we now have a Higgs. How can we find what else is out there? Or at least assure ourselves that anything else is currently out of our reach? That there is something more is beyond doubt. We know too many cases where our current knowledge is certainly limited.

I will not try to describe everything presented at this workshop; that would be too much, and there are certainly other places on the web where this is done. In this entry I will therefore just describe how what is discussed at the workshop relates to my own research.

One point is certainly what the experiments find. At such specialized workshops, you can get many more details of what they actually do. Since any theoretical investigation is to some extent approximate, it is always good to know what is known experimentally. Hence, if I get a result in disagreement with experiment, I know that something is wrong. Usually it is the theory, or the calculations performed: some assumption being too optimistic, some approximation being too drastic.

Fortunately, so far nothing is at odds with what I have. That is encouraging, though no reason to become overly confident.

The second aspect is to see what other people do: which other ideas still hold up against experiment, and which have failed. Since different people do different things, combining the knowledge, successes and failures of the different approaches helps you. It helps not only in avoiding too-optimistic assumptions or other errors; other people's successes also provide new input.

One particular example at this workshop, for me, is the so-called 2-Higgs-doublet models. Such models assume that, besides the known Higgs, there exists another set of Higgs particles. Though this is not obvious, the doublet in the name indicates that they have four more Higgs particles, one of them being just a heavier copy of the one we know. I have recently considered looking into such models as well, though for quite different reasons. Here, I learned how they can be motivated for entirely different reasons, and especially why they are so interesting for ongoing experiments. I also learned much about their properties, and what is known (and not known) about them. This gives me quite some new perspectives, and some new ideas.

Ideas I will certainly pursue once I am back.

Finally, taken together, the talks draw the big picture. They tell me where we are now: what we know about the Higgs, what we do not know, and where there is room (left) for much more than just the 'ordinary' Higgs. It is an update of my own knowledge of particle physics. And it finally delivers the list of what will be looked at in the next couple of months and years. I now know better where to look for the next result relevant for my research, and relevant for the big picture.

by Axel Maas (noreply@blogger.com) at April 11, 2014 11:57 AM

CERN Bulletin

CERN Library | Author Talk: Quinn Slobodian | 15 April
Quinn Slobodian from the Dahlem Humanities Center, Freie Universität Berlin, will give a presentation at the CERN Library about "The Laboratory of the World Economy: Globalization Theory around 1900".

Quinn Slobodian researches the history of Germany in the world, with a focus on international political economy and transnational social movements. He is really interested to hear from CERN physicists and engineers about their responses to his theories about the global economy at the turn of the century:

We may think of globalization theory as a recent phenomenon. Yet in the decades around 1900, German-speaking economists were already trying to make sense of an entity they called "the world economy," coining a term that would not enter other languages until after the First World War. What was the nature of the world economy? How could one visualize and represent it? What was the status of national autonomy in an era of globalized communication and trade? My talk explores these questions through the work of German, Austrian, and Swiss economists around 1900. I follow a central debate. On one side were those who used maps and statistics to see the world economy as a globe-spanning "organism," anticipating later sociological theories of the "network society." On the other were those, including Joseph Schumpeter, who saw the stock exchange as a laboratory of the world economy. They believed that one could draw conclusions about the world economy at large by observing price movements on the major commodities markets. Serial snapshots of the world market, taken in the laboratory of the stock exchange, could be brought into motion to produce a vision of the world economy in movement along the path of a line graph, prefiguring the later optic of macroeconomics and finance. Their debate produced a primary division in the way we see the world economy that lasts until the present day.

About the author: Quinn Slobodian is assistant professor of modern European history at Wellesley College and currently Andrew W. Mellon Foundation and Volkswagen Stiftung Postdoctoral Fellow at the Dahlem Humanities Center at the Freie Universität Berlin.

Quinn Slobodian will give his presentation at the CERN Library on Tuesday 15 April at 4 p.m. Coffee will be served at 3.30 p.m. For more information, visit: https://indico.cern.ch/event/313789/

by CERN Library at April 11, 2014 09:09 AM

CERN Bulletin

Interfon
Discover our news on our website www.interfon.fr.

Interfon: "At the service of our members"

Interfon is a group of national and international civil servants (including retirees) in the French-Geneva border area, with more than 6,000 members. Set up as a cooperative in the 1960s, its mission is to inform its members, make recommendations, and give them access to a range of services and preferential prices from more than 80 businesses in the region. It is run by volunteer administrators. Interfon has also developed a "mutuelle" branch offering complementary health and surgery insurance (ADREA). The staff and administrators work closely with members in order to encourage communication. We invite you to discover our services on our website www.interfon.fr.

Our offices at CERN and in Saint-Genis (Technoparc) are open Monday to Friday (see our opening hours at the bottom of this page). Our staff are always available to welcome you, answer your questions, present our offers and suppliers (brochures available), register you if you wish to join, and take your orders for heating oil, wood, wines and champagnes, Bresse poultry, candied chestnuts, foie gras, etc. You can also drop off your correspondence for the Mutuelle (which we forward) and pay your suppliers' invoices.

VITAM: We also sell admission tickets for VITAM in Neydens (74) at a preferential price (spa, aquatic centre, climbing, for children or adults).

Wine selection: A selection of wines from various suppliers is available in our offices. You can buy at the Saint-Genis office, which has a dedicated wine and champagne cellar. At the CERN office, price lists are available and we can take your orders. Don't miss our two annual open days (end of March and beginning of October) to taste our selection.

Come and meet us at one of our two offices:

CERN office (Bldg 504): Interfon: Monday to Friday (12.30 p.m. to 3.30 p.m.), tel. 73339, e-mail: interfon@cern.ch. Mutuelle: Thursdays of odd weeks (1.00 p.m. to 3.30 p.m.), tel. 73939, e-mail: interfon@adrea-paysdelain.fr.

Technoparc offices in Saint-Genis-Pouilly: Interfon and Mutuelle: Monday to Friday (1.30 p.m. to 5.30 p.m.). Cooperative: 04 50 42 28 93, interfon@cern.ch. Mutuelle: 04 50 42 74 57, interfon@adrea-paysdelain.fr.

by Interfon at April 11, 2014 09:01 AM


April 10, 2014

The Great Beyond - Nature blog

Former NIH stem-cell chief joins New York foundation

Stem-cell biologist Mahendra Rao, who resigned last week as director of the Center for Regenerative Medicine (CRM) at the US National Institutes of Health (NIH), has a new job. On 9 April, he was appointed vice-president for regenerative medicine at the New York Stem Cell Foundation (NYSCF), a non-profit organization that funds embryonic stem-cell research.

Rao left the NIH abruptly on 28 March, apparently because of disagreements about the number of clinical trials of stem-cell therapies that the NIH’s intramural CRM programme would conduct. The CRM was established in 2010 to shepherd therapies using induced pluripotent stem cells (iPS cells) — adult cells that have been reprogrammed to an embryonic state — into clinical translation. One of the CRM’s potential therapies, which will use iPS cells to treat macular degeneration of the retina, will continue moving towards clinical trials at the NIH, although several others were not funded. NIH officials say that the CRM will not continue in its current direction, but the fate of the centre’s remaining budget and resources is undecided.

Rao says that he wants to move more iPS cell therapies towards trials than the NIH had been willing to do. He has already joined the advisory boards of several stem-cell-therapy companies: Q Therapeutics, a Salt Lake City-based neural stem cell company he co-founded; and Cesca Therapeutics (formerly known as ThermoGenesis) of Rancho Cordova, California, and Stemedica of San Diego, California, both of which are developing cell-based therapies for cardiac and vascular disorders.

Rao says that his initial focus at the NYSCF will be developing iPS cell lines for screening, and formulating a process for making clinical-grade cell lines from a patient’s own cells.

by Sara Reardon at April 10, 2014 09:47 PM

ZapperZ - Physics and Physicists

Graphene Closer To Commercial Use
When an article related to physics makes it to the CNN website, you know it is major news.

This article covers the recent "breakthrough" in graphene that may make it even more viable for commercial use. I'm highlighting it here in case you or someone else needs more evidence of the "application" of physics, and for anyone who thinks that something awarded the Nobel Prize in Physics must be esoteric and useless.

Zz.

by ZapperZ (noreply@blogger.com) at April 10, 2014 02:47 PM

Symmetrybreaking - Fermilab/SLAC

From quark soup to ordinary matter

Scientists have gained new insight into how matter can change from a hot soup of particles to the matter we know today.

The early universe was a trillion-degree soup of subatomic particles that eventually cooled into matter as it is today.

This process is called “freezing out.” In the early universe, it was a smooth transition. But a group of scientists at Brookhaven National Laboratory recently found that, under the right conditions, it can occur differently.

The new research may offer valuable insight into the strong force, which accounts for 99.9 percent of the mass of visible matter in today’s world.

by Karen McNulty Walsh, Brookhaven National Laboratory at April 10, 2014 01:00 PM

John Baez - Azimuth

What Does the New IPCC Report Say About Climate Change? (Part 3)

guest post by Steve Easterbrook

(3) The warming is largely irreversible

The summary for policymakers says:

A large fraction of anthropogenic climate change resulting from CO2 emissions is irreversible on a multi-century to millennial time scale, except in the case of a large net removal of CO2 from the atmosphere over a sustained period. Surface temperatures will remain approximately constant at elevated levels for many centuries after a complete cessation of net anthropogenic CO2 emissions.

(Fig 12.43) Results from 1,000 year simulations from EMICs on the 4 RCPs up to the year 2300, followed by constant composition until 3000.

The conclusions about irreversibility of climate change are greatly strengthened from the previous assessment report, as recent research has explored this in much more detail. The problem is that a significant fraction of our greenhouse gas emissions stay in the atmosphere for thousands of years, so even if we stop emitting them altogether, they hang around, contributing to more warming. In simple terms, whatever peak temperature we reach, we’re stuck at for millennia, unless we can figure out a way to artificially remove massive amounts of CO2 from the atmosphere.

The graph is the result of an experiment that runs (simplified) models for a thousand years into the future. The major climate models are generally too computationally expensive to be run for such a long simulation, so these experiments use simpler models, so-called EMICs (Earth system Models of Intermediate Complexity).

The four curves in this figure correspond to four “Representative Concentration Pathways”, which map out four ways in which the composition of the atmosphere is likely to change in the future. These four RCPs were picked to capture four possible futures: two in which there is little to no coordinated action on reducing global emissions (worst case—RCP8.5 and best case—RCP6) and two in which there is serious global action on climate change (worst case—RCP4.5 and best case—RCP2.6). A simple way to think about them is as follows. RCP8.5 represents ‘business as usual’—strong economic development for the rest of this century, driven primarily by dependence on fossil fuels. RCP6 represents a world with no global coordinated climate policy, but where lots of localized clean energy initiatives do manage to stabilize emissions by the latter half of the century. RCP4.5 represents a world that implements strong limits on fossil fuel emissions, such that greenhouse gas emissions peak by mid-century and then start to fall. RCP2.6 is a world in which emissions peak in the next few years, and then fall dramatically, so that the world becomes carbon neutral by about mid-century.

Note that in RCP2.6 the temperature does fall, after reaching a peak just below 2°C of warming over pre-industrial levels. That’s because RCP2.6 is a scenario in which concentrations of greenhouse gases in the atmosphere start to fall before the end of the century. This is only possible if we reduce global emissions so fast that we achieve carbon neutrality soon after mid-century, and then go carbon negative. By carbon negative, I mean that globally, each year, we remove more CO2 from the atmosphere than we add. Whether this is possible is an interesting question. But even if it is, the model results show there is no time within the next thousand years when it is anywhere near as cool as it is today.


You can download all of Climate Change 2013: The Physical Science Basis here. It’s also available chapter by chapter here:

  1. Front Matter
  2. Summary for Policymakers
  3. Technical Summary
    1. Supplementary Material

Chapters

  1. Introduction
  2. Observations: Atmosphere and Surface
    1. Supplementary Material
  3. Observations: Ocean
  4. Observations: Cryosphere
    1. Supplementary Material
  5. Information from Paleoclimate Archives
  6. Carbon and Other Biogeochemical Cycles
    1. Supplementary Material
  7. Clouds and Aerosols

    1. Supplementary Material
  8. Anthropogenic and Natural Radiative Forcing
    1. Supplementary Material
  9. Evaluation of Climate Models
  10. Detection and Attribution of Climate Change: from Global to Regional
    1. Supplementary Material
  11. Near-term Climate Change: Projections and Predictability
  12. Long-term Climate Change: Projections, Commitments and Irreversibility
  13. Sea Level Change
    1. Supplementary Material
  14. Climate Phenomena and their Relevance for Future Regional Climate Change
    1. Supplementary Material

Annexes

  1. Annex I: Atlas of Global and Regional Climate Projections
    1. Supplementary Material: RCP2.6, RCP4.5, RCP6.0, RCP8.5
  2. Annex II: Climate System Scenario Tables
  3. Annex III: Glossary
  4. Annex IV: Acronyms
  5. Annex V: Contributors to the WGI Fifth Assessment Report
  6. Annex VI: Expert Reviewers of the WGI Fifth Assessment Report

by John Baez at April 10, 2014 09:37 AM

The n-Category Cafe

The Modular Flow on the Space of Lattices

Guest post by Bruce Bartlett

The following is the greatest math talk I’ve ever watched!

  • Etienne Ghys (with pictures and videos by Jos Leys), Knots and Dynamics, ICM Madrid 2006. [See below the fold for some links.]

[Images: Etienne Ghys; a modular knot]

I wasn’t actually at the ICM; I watched the online version a few years ago, and the story has haunted me ever since. Simon and I have been playing around with some of this stuff, so let me share some of my enthusiasm for it!

The story I want to tell here is how, via modular flow of lattices in the plane, certain matrices in $SL(2,\mathbb{Z})$ give rise to knots in the 3-sphere less a trefoil knot. Despite possibly sounding quite scary, this can be easily explained in an elementary yet elegant fashion.

As promised above, here are some links related to Ghys’ ICM talk.

I’m going to focus on the last third of the talk — the modular flow on the space of lattices. That’s what produced the beautiful picture above (credit for this and other similar pics below goes to Jos Leys; the animation is Simon’s.)

Lattices in the plane

For us, a lattice is a discrete subgroup of $\mathbb{C}$. There are three types: the zero lattice, the degenerate lattices, and the nondegenerate lattices:

Lattices

Given a lattice $L$ and an integer $n \geq 4$ we can calculate a number — the Eisenstein series of the lattice: $$ G_{n}(L) = \sum_{\omega \in L,\ \omega \neq 0} \frac{1}{\omega^{n}}. $$ We need $n \geq 3$ for this sum to converge. For, roughly speaking, we can rearrange it as a sum over $r$ of the lattice points on the boundary of a square of radius $r$. The number of lattice points on this boundary scales with $r$, so we end up computing something like $\sum_{r \geq 0} \frac{r}{r^{n}}$ and so we need $n \geq 3$ to make the sum converge.

Note that $G_{n}(L) = 0$ for $n$ odd, since every term $\omega$ is cancelled by the opposite term $-\omega$. So, the first two nontrivial Eisenstein series are $G_{4}$ and $G_{6}$. We can use them to put ‘Eisenstein coordinates’ on the space of lattices.

Theorem: The map $$ \{\text{lattices}\} \to \mathbb{C}^{2}, \qquad L \mapsto (G_{4}(L),\, G_{6}(L)) $$ is a bijection.

The nicest proof is in Serre’s A Course in Arithmetic, p. 89. It is a beautiful application of the Cauchy residue theorem, using the fact that $G_{4}$ and $G_{6}$ define modular forms on the upper half plane $H$. (Usually, number theorists set up their lattices so that they have basis vectors $1$ and $\tau$ where $\tau \in H$. But I want to avoid this ‘upper half plane’ picture as far as possible, since it breaks symmetry and mystifies the geometry. The whole point of the Ghys picture is that not breaking the symmetry reveals a beautiful hidden geometry! Of course, sometimes you need the ‘upper half plane’ picture, like in the proof of the above result.)

Lemma: The degenerate lattices are the ones satisfying $20 G_{4}^{3} - 49 G_{6}^{2} = 0$.

Let’s prove one direction of this lemma — that the degenerate lattices do indeed satisfy this equation. To see this, we need to perform a computation. Let’s calculate $G_{4}$ and $G_{6}$ of the lattice $\mathbb{Z} \subset \mathbb{C}$. Well, $$ G_{4}(\mathbb{Z}) = \sum_{n \neq 0} \frac{1}{n^{4}} = 2\zeta(4) = 2\,\frac{\pi^{4}}{90} $$ where we have cheated and looked up the answer on Wikipedia! Similarly, $G_{6}(\mathbb{Z}) = 2\,\frac{\pi^{6}}{945}$.

So we see that $20 G_{4}(\mathbb{Z})^{3} - 49 G_{6}(\mathbb{Z})^{2} = 0$. Now, every degenerate lattice is of the form $t\mathbb{Z}$ where $t \in \mathbb{C}$. Also, if we transform the lattice via $L \mapsto t L$, then $G_{4} \mapsto t^{-4} G_{4}$ and $G_{6} \mapsto t^{-6} G_{6}$. So the equation remains true for all the degenerate lattices, and we are done.
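If you want to see this numerically, here is a small check of my own (not from the talk) that truncated Eisenstein sums for the degenerate lattice $\mathbb{Z}$ satisfy the lemma's equation; the truncation length N is an arbitrary choice.

import numpy as np

# Truncate the sums at |k| <= N; the tails are O(N^-3) and O(N^-5), so N = 10^5 is plenty.
N = 100_000
k = np.arange(1, N + 1, dtype=float)
G4 = 2 * np.sum(k ** -4)            # = 2*zeta(4) = pi^4/45  (sum over k != 0 of 1/k^4)
G6 = 2 * np.sum(k ** -6)            # = 2*zeta(6) = 2*pi^6/945

print(G4 - np.pi ** 4 / 45)         # ~0
print(20 * G4 ** 3 - 49 * G6 ** 2)  # ~0, as the lemma predicts for a degenerate lattice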

Corollary: The space of nondegenerate lattices in the plane of unit area is homeomorphic to the complement of the trefoil in $S^{3}$.

The point is that given a lattice $L$ of unit area, we can scale it, $L \mapsto \lambda L$ with $\lambda \in \mathbb{R}^{+}$, until $(G_{4}(L), G_{6}(L))$ lies on the 3-sphere $S^{3} = \{(z,w) : |z|^{2} + |w|^{2} = 1\} \subset \mathbb{C}^{2}$. And the equation $20 z^{3} - 49 w^{2} = 0$ intersected with $S^{3}$ cuts out a trefoil knot… because it is “something cubed plus something squared equals zero”. And the lemma above says that the nondegenerate lattices are precisely the ones which do not satisfy this equation, i.e. they represent the complement of this trefoil.

Since we have not divided out by rotations, but only by scaling, we have arrived at a 3-dimensional picture which is very different to the 2-dimensional moduli space (upper half-plane divided by $SL(2,\mathbb{Z})$) picture familiar to a number theorist.

The modular flow

There is an intriguing flow on the space of lattices of unit area, called the modular flow. Think of $L$ as sitting in $\mathbb{R}^{2}$, and then act on $\mathbb{R}^{2}$ via the transformation $$ \begin{pmatrix} e^{t} & 0 \\ 0 & e^{-t} \end{pmatrix}, $$ dragging the lattice $L$ along for the ride. (This isn’t just some formula we pulled out of the blue — geometrically this is the ‘geodesic flow on the unit tangent bundle of the modular orbifold’.)

We are looking for periodic orbits of this flow.

“Impossible!” you say. “The points of the lattice go off to infinity!” Indeed they do… but disregard the individual points. The lattice itself can ‘click’ back into its original position:

animation

How are we to find such periodic orbits? Start with an integer matrix $$ A = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in SL(2,\mathbb{Z}) $$ and assume $A$ is hyperbolic, which simply means $|a + d| > 2$. Under these conditions, we can diagonalize $A$ over the reals, so we can find a real matrix $P$ such that $$ P A P^{-1} = \pm \begin{pmatrix} e^{t} & 0 \\ 0 & e^{-t} \end{pmatrix} $$ for some $t \in \mathbb{R}$. Now set $L \coloneqq P(\mathbb{Z}^{2})$. We claim that $L$ is a periodic orbit of period $t$. Indeed: $$ L_{t} = \begin{pmatrix} e^{t} & 0 \\ 0 & e^{-t} \end{pmatrix} P(\mathbb{Z}^{2}) = \pm P A(\mathbb{Z}^{2}) = \pm P(\mathbb{Z}^{2}) = L. $$ We have just proved one direction of the following.
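Here is a quick numerical illustration of this argument (my own sketch, not from the talk): take a hyperbolic matrix, build $P$ and the flow matrix from its eigenvalues, and check that the flowed lattice basis differs from the original one by an integer change of basis, in fact by $A$ itself. The particular matrix below is just a convenient example.

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])            # hyperbolic element of SL(2,Z): det = 1, |trace| = 3 > 2

evals, V = np.linalg.eig(A)           # A = V @ diag(evals) @ inv(V)
P = np.linalg.inv(V)                  # so that P A P^{-1} = diag(evals)
t = np.log(evals.max())               # the period of the orbit
D = np.diag(evals)                    # the modular flow after time t (or -t, depending on ordering)

# D @ P spans the same lattice as P iff inv(P) @ D @ P is an integer matrix of det +/-1;
# algebraically inv(P) @ D @ P = A, so the lattice L = P(Z^2) is a periodic orbit.
print(np.round(np.linalg.inv(P) @ D @ P, 10))   # recovers A
print(t)                                        # ~0.9624 = log((3 + sqrt(5))/2)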

Theorem: The periodic orbits of the modular flow are in bijection with the conjugacy classes of hyperbolic elements in $SL(2,\mathbb{Z})$.

These periodic orbits produce fascinating knots in the complement of the trefoil! In fact, they link with the trefoil (the locus of degenerate lattices) in fascinating ways. Here are two examples, starting with different matrices $A \in SL(2,\mathbb{Z})$.

animation

The trefoil is the fixed orange curve, while the periodic orbits are the red and green curves respectively.

Ghys proved the following two remarkable facts about these modular knots.

  • The linking number of a modular knot with the trefoil of degenerate lattices equals the Rademacher function of the corresponding matrix in $SL(2,\mathbb{Z})$ (the change in phase of the Dedekind eta function).
  • The knots occurring in the modular flow are the same as those occurring in the Lorenz equations!

Who would have thought that lattices in the plane could tell the weather!!

I must say I have thought about many aspects of these closed geodesics, but it had never crossed my mind to ask which knots are produced. – Peter Sarnak

by willerton (S.Willerton@sheffield.ac.uk) at April 10, 2014 08:54 AM

astrobites - astro-ph reader's digest

Can planets and stars spin in unison?

Title: Stellar rotation-planetary orbit period commensurability in the HAT-P-11 system
Authors: Bence Béky, Matthew J. Holman, David M. Kipping, Robert W. Noyes
First Author’s institution: Harvard-Smithsonian Center for Astrophysics
Status: Accepted to ApJ

The orbits of some recently discovered exoplanets seem to be synchronised with the rotation of their host stars. Can this mystery be explained?

Figure 1

Figure 1. Every sixth transit light curve shows the same transit anomaly. The same star spot is being transited each time, providing evidence for a 6:1 ratio between the planet’s orbit and the star’s rotation.

In 2008 the planetary system τ Boo was noticed to behave unusually: the star seemed to spin exactly once for every orbit of the planet. At the time it was unclear whether this synchronicity was due to star-planet interactions, or just plain coincidence. Since then, however, a handful more systems have been discovered in which the star’s rotation period is an integer multiple of its planet’s orbital period. In this paper, HAT-P-11 is confirmed as a synchronous system in which the star rotates once for every six orbits of the planet. As more and more systems like this are discovered we have to ask whether some physical mechanism is responsible; is it physics or fluke?

HAT-P-11

The rotation period of the host star seems to be almost exactly six times the orbital period of the exoplanet HAT-P-11 b. This was first observed in 2010, and further evidence for this perfect 6:1 ratio between planetary orbit and stellar spin is provided in this paper by the transit light curves themselves.

Transit anomalies

When exoplanets transit their host stars they produce a characteristic ‘U’-shaped light curve. If a star spot lies along the transit chord, an upwards ‘blip’ (a transit anomaly) will appear in the light curve. Four transits, each separated by six orbital periods, are shown in Figure 1. Each of these light curves shows the same transit anomaly occurring at the same time. It looks like the same spot is being transited four times in a row. That means the star spot must be in exactly the same position every six orbital periods, i.e. the star spins once for every six orbits of the planet.

The orbital period of a transiting planet is easy to measure – it’s just the time elapsed between transits. Stellar rotation periods, on the other hand, provide a little more of a challenge.

Figure 2
Figure 2. The Kepler light curve of HAT-P-11 in red and the light curve model in black, with confidence regions in grey. The model is calculated, independently of the real light curve, from information about the star spot in the transit light curves of Figure 1. The parameters of the model have not been optimised to fit the data; the idea here is just to show a qualitative comparison between the two. The blue lines in the bottom panel show the times of the four transits in Figure 1.

 

 

Measuring the stellar rotation period

A dark spot on the surface of a star will reduce its overall brightness. As the star rotates and the spot moves behind the stellar limb, out of view, the star’s brightness will increase. Spots on the surface of rotating stars therefore produce periodic dips in brightness, so to measure stellar rotation periods you simply measure the timescale of variability in the light curve. Béky et al. use two different but complementary methods to detect periodic signals in the light curves of HAT-P-11: a periodogram and an autocorrelation function. They measure a stellar rotation period of ~30 days.
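As a rough illustration of the autocorrelation approach (a made-up toy, not the authors' pipeline; the period, cadence and amplitudes below are invented), one can recover the period of a synthetic spot-modulated light curve like this:

import numpy as np

p_rot = 30.0                                    # injected rotation period, days
t = np.arange(0.0, 600.0, 0.1)                  # coarse cadence for speed, days
flux = 1.0 - 0.01 * np.cos(2 * np.pi * t / p_rot) + 0.001 * np.random.randn(t.size)

f = flux - flux.mean()
acf = np.correlate(f, f, mode='full')[f.size - 1:]   # autocorrelation at lags >= 0
acf /= acf[0]

lags = np.arange(acf.size) * 0.1                # lag in days
mask = lags > 5.0                               # skip the zero-lag peak
print(lags[mask][np.argmax(acf[mask])])         # first strong peak, near ~30 days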

 

Light curve modelling

Remarkably, from the four light curves in figure 1, the authors can infer enough about the occulted spot to model the out-of-transit light curve. By estimating the size of the spot from the transit anomaly, and assuming that it is the dominant spot on the stellar surface, they predict what the out-of-transit variability should look like. Figure 2 provides a qualitative comparison between the observations (in red) and the prediction (in black). Spot-induced stellar variability is notoriously difficult to reproduce with theoretical models, so the agreement in figure 2 is pretty good!

Confirming the 6:1 ratio

Measuring stellar rotation periods from light curves is an imprecise process; differential rotation (different latitudes rotating at different speeds) can produce deviations from strict periodicity. In order to independently validate the 6:1 ratio the authors use a neat trick. They look for matching transit anomalies in every sixth transit. They also look for matches in every fifth and fourth – a whole range of integer values. If transit anomalies occur in transits that lie six transits apart far more often than four or five, that provides evidence for the 6:1 ratio. Figure 3 shows  the occurrence rate of repeating anomalies versus the number of separating transits. There is an increased rate at 6, 12 and 18 transit separations, verifying the 6:1 ratio hypothesis.
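The counting argument itself is simple; here is a toy version of my own (the transit indices are invented and idealized) of tallying how often matching anomalies are separated by a given number of transits:

from collections import Counter

# Hypothetical indices of transits that show an anomaly at the same in-transit phase.
anomaly_transits = [3, 9, 15, 21, 27, 33]

separations = Counter()
for i, a in enumerate(anomaly_transits):
    for b in anomaly_transits[i + 1:]:
        separations[b - a] += 1

print(separations.most_common(3))   # [(6, 5), (12, 4), (18, 3)]: multiples of 6 dominate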

Figure 3

Figure 3. An increased occurrence rate of repeating anomalies lying at multiples of 6 provides further evidence for the 6:1 ratio.

An explanation

Is this observed integer ratio between orbital and rotation periods the result of star-planet interaction, or a simple coincidence?

The authors offer an explanation for this phenomenon. They suggest that a planet’s magnetic field could interact with its star’s to increase the number of star spots at certain latitudes of the stellar surface. If one latitude of the star is particularly spotty and the star has differential rotation, then the rotation period measured from the light curve will not be the same as the equatorial rotation period of the star. Now imagine that the planet’s orbital period lies somewhere between the rotational period of the star’s pole and that of its equator. The planet will be synchronised with one particular latitude of the star and, if it excites star spot formation at that latitude, it will look like the two objects spin in unison.

Is the observed synchronicity of HAT-P-11 caused by planet-induced spot formation at preferred stellar latitudes? This question will only be answered when we find more of these mysterious systems. We had better keep an eye out for them!

by Ruth Angus at April 10, 2014 07:44 AM

April 09, 2014

The Great Beyond - Nature blog

Acid-bath stem-cell scientist apologizes and appeals

Posted on behalf of David Cyranoski

Haruko Obokata, the Japanese scientist at the centre of a controversy over studies purporting to turn mature cells to stem cells simply by bathing them in acid or subjecting them to mechanical stress, today apologized for her errors in the work.

Kicking off a press conference in Osaka amid a storm of snapping cameras and flanked by two lawyers, Obokata blamed her immaturity and her lack of awareness of research protocols for the errors that were found in her two high-profile papers on the studies, published in Nature in January (Note: Nature’s news and comment teams are editorially independent of its research editorial team). These included the use of a duplicated image.

She took full responsibility for the errors, and apologized to her co-authors for the mess she got them into. Obokata, in her first public statement in more than two months, also apologized to her institute, the RIKEN Center for Developmental Biology, for the embarrassing press the whole ordeal had brought. In addition, she sought forgiveness from the RIKEN committee whose report earlier this month found her guilty of scientific misconduct. She had attacked the report at the time.

Obokata held the press conference for two reasons: to apologize for the errors and to make the case that her research was still valid and that the inaccuracies in the papers were not deliberate. Yesterday, she submitted a formal appeal to RIKEN that their committee retract its misconduct findings. She insisted that the “stimulus-triggered activation pluripotency” or STAP phenomenon, as it has been dubbed, exists. RIKEN has 50 days to respond to her appeal.

In the STAP work, lead author Obokata, along with Japanese and US colleagues, described stunning experiments in which she reprogrammed mature mouse cells to an embryonic state merely by stressing them. But the two papers soon fell under suspicion and last month a RIKEN-appointed investigative committee found in a preliminary report that they contained numerous errors. In a further report on 1 April, the committee announced that two of the errors constituted scientific misconduct. On the same day, Obokata responded aggressively, with a written statement expressing “shock and anger” at conclusions she said had been reached without giving her a chance to explain herself. Today, her tone was very different: pleading for forgiveness and showering apologies. But she maintained that her findings hold true.

Obokata insisted that the two problems which led to the misconduct findings — the duplicated image and the swapping of a diagram of an electrophoresis gel — were only errors. She said she had not been given enough time to explain her side to the committee.

After her five plus minutes of introductory remarks, her lawyer gave a 20-minute presentation to make the case that neither problem added up to misconduct. Defining fraud as fabrication, he countered that in both cases Obokata had the original data that should have been used but merely added the wrong data by mistake. For the more damning finding — an image of teratomas that had appeared in her doctoral dissertation and then again in the recent papers — the committee had found that she had changed a caption, which made it look intentional. The lawyer however traced the image back to a slide, part of a presentation that Obokata had continually updated and reused, until its origin became obscured. (Later, in one of her many apologies, she said, “If I had gone back to carefully check the original data, there wouldn’t have been this problem.”)

After the lawyer’s talk, Obokata responded to journalists’ questions for more than 2 hours. In response to suspicions based on the fact that she only handed two laboratory notebooks over to the committee looking into her research, she said she had four or five more that the committee hadn’t requested. She denied that she ever agreed to retract the papers. She also corrected reports that she had asked to retract her PhD dissertation, saying that she merely sought advice on how to proceed. Her dissertation is under investigation at Waseda University, where she studied for her doctorate.

Obokata also denied the possibility that the STAP cells had resulted from contamination from embryonic stem cells, saying that she had not allowed embryonic cells in the same laboratory and that she had carried out tests which precluded that possibility.

She said that she had created STAP cells more than 200 times, adding that she knows someone who has independently achieved it but refused to give the name (citing privacy). She believes that a RIKEN group trying to demonstrate STAP cells will help her. She has not, she said, been asked to participate in those efforts. She added that she would consider doing a public replication experiment but that it was not up to her whether she could.

Two hours into the questioning, her lawyer cut off journalists, citing concern for Obokata’s frail emotional state, and said she had to return to the hospital where she has been staying. She bowed, apologized, then bowed again and left. The press cameras continued to snap away.

 

by Shaoni Bhattacharya at April 09, 2014 05:55 PM

Marco Frasca - The Gauge Connection

Looking for the Lucasian Professor

Michael Green, the current Lucasian Professor, is now 67, and Cambridge University is looking for his successor (see here). It is interesting that Wikipedia is now reporting rumours from researchers at DAMTP (see here):

As of 3rd April 2014, the University of Cambridge is recruiting the 19th Lucasian Professor. Rumour amongst DAMPT researchers is that the process is a required formality, but that unless there’s a last minute surprise the Lucasian Professorship will be held for the first time by a woman. It may also be the first time that the chair is not held by somebody born in the British Isles.

Wikipedia generally does not do gossip, so I report this here because I think it will not stay there too long. Anyhow, the procedure will be brief. Candidates should send their CV by 15 April and the chosen one should take up the appointment on 1 September this year. I would like to recall that this is a really prestigious chair, held by Newton, Dirac and Hawking, to name just a few. Surely, Cambridge University will choose the best once again.


Filed under: News, Physics, Rumors Tagged: Cambridge University, Lucasian chair

by mfrasca at April 09, 2014 03:15 PM

astrobites - astro-ph reader's digest

Neutrinos from the Big Bang – in focus

When we look back in time throughout the history of the Universe, the earliest we can see is about 400,000 years after the Big Bang. At that time, the photons started travelling freely through the Universe, and they reach us today as the “Cosmic Microwave Background” (CMB). There has been quite a bit of excitement recently about what the CMB tells us about the properties of the Universe before it was 400,000 years old (see this astrobite). But is there any other way of probing the early Universe? The authors of this paper discuss another time machine: the “Cosmic Neutrino Background” (CνB), a sea of relic neutrinos that were emitted ~1 second after the Big Bang.

Neutrinos are very elusive particles, with no electric charge and little mass. This means that they can travel long distances without interacting with other particles. The best way of detecting them is to catch them undergoing a process similar to beta decay: a relic neutrino interacts with a nucleus to create a daughter nucleus and an electron. Experiments such as KATRIN and PTOLEMY aim to detect relic neutrino interactions with tritium (an isotope of hydrogen with one proton and two neutrons) by looking at the distribution of energies of the outgoing electrons. But what does a “detection” look like? What results should we expect from these experiments? The authors of the paper make predictions of the neutrino detection signal when the neutrinos are subject to the gravity of the Sun.

The expected detection signal depends on the spatial distribution of relic neutrinos and their velocities. Some analogies can be made to other problems in observational cosmology, specifically to the CMB or to the search for another very elusive particle: dark matter. For example, our Galaxy moves with a certain velocity through space, relative to the CMB. If dark matter surrounds us, we expect to see a “wind” of dark matter particles as the Earth moves about its orbit. Similarly, we expect to experience a wind of neutrinos. The figure below shows the geometry of the problem on the left.

The modulation of the CnuB background.

On the left, the authors depict the geometry of the problem. The Earth orbits around the Sun, which orbits around the Galactic Center. All of the Galaxy moves with respect to the CνB, creating the illusion of a “neutrino wind”. Two cases are considered: neutrinos can be gravitationally bound or not to the Milky Way. On the right, we see the Earth orbiting the Sun face-on. The effect of the Sun on the incident neutrinos is to gravitationally deflect them and focus them, enhancing their density in some parts of the orbit. When the Earth goes through that region, more detections are expected.  Figure 1 of Safdi et al.

For direct-detection experiments looking for dark matter particles, the main effect of a dark matter wind is to produce an annual modulation of the predicted detection signal: when the particles come towards us, we should see more interactions than when they are moving in our same direction. Here, the analogy breaks down. Because the rate of interaction of cosmic neutrinos does not depend on their velocities, the wind does not cause modulation. But there is another source of modulation: the focusing of the neutrinos by the Sun, shown in the right panel of the figure above. Neutrinos approaching the Sun feel its gravitational pull; as a consequence, the Sun deflects them towards a focal point. As the Earth goes along its orbit, there is a time of the year when it encounters the highest density of neutrinos, closer to that focal point.

The modulation of the neutrino detection signal.

The modulation of the neutrino detection signal for bound (solid) and unbound (dashed) neutrinos. The two signals would peak at different times of the year. The amplitude of the modulation depends on the mass of the neutrinos (two masses considered in blue and purple) and on the velocity of bound neutrinos around the Galactic Center (two cases shown in black and orange). Figure 2 of Safdi et al.

The figure to the right shows the percent modulation of the signal of neutrinos interacting with the detector throughout the year. The authors study two possibilities for the properties of the wind: neutrinos can be gravitationally bound to the Milky Way, or they can be unbound. The signal from bound or unbound neutrinos peaks at different times of the year, since they approach the Solar System from different directions. In the case of unbound neutrinos, the modulation is larger for more massive neutrinos. Because those are moving more slowly, they spend more time near the Sun and their trajectories can be easily deflected (blue vs. purple lines). For neutrinos bound to our Galaxy,  the efficiency of gravitational focusing, and hence the modulation, depends on their velocity about the Galactic Center. Modulation is larger if neutrinos are moving more slowly around the Galactic Center (black vs. orange lines).

Can the neutrino background and its modulation by gravitational focusing be detected? The authors conclude that an experiment like PTOLEMY will be able to measure approximately 10 neutrino interactions per year, allowing for a detection of the CνB, but not its modulation. A detection would provide a confirmation that neutrinos were indeed emitted at the first second, and it would constrain the neutrino mass. The modulation, which carries information on the velocity distribution of neutrinos and also their mass, could be detected only if the density of neutrinos were very high (which some models allow) or if a detector 100 times bigger than PTOLEMY is built. Nevertheless, the prospect of detecting the CνB and going back to the first second is a very exciting one.

by Elisa Chisari at April 09, 2014 02:59 PM

The n-Category Cafe

Operads and Trees

Nina Otter is a master’s student in mathematics at ETH Zürich who has just gotten into the PhD program at Oxford. She and I are writing a paper on operads and the tree of life.

Anyone who knows about operads knows that they’re related to trees. But I’m hoping someone has proved some precise theorems about this relationship, so that we don’t have to.

By operad I’ll always mean a symmetric topological operad. Such a thing has an underlying ‘symmetric collection’, which in turn has an underlying ‘collection’. A collection is just a sequence of topological spaces $O_n$ for $n \ge 0$. In a symmetric collection, each space $O_n$ has an action of the symmetric group $S_n$.

I’m hoping that someone has already proved something like this:

Conjecture 1. The free operad on the terminal symmetric collection has, as its space of $n$-ary operations, the set of rooted trees having some of their leaves labelled $\{1, \dots, n\}$.

Conjecture 2. The free operad on the terminal collection has, as its space of $n$-ary operations, the set of planar rooted trees having some of their leaves labelled $\{1, \dots, n\}$.

Calling them ‘conjectures’ makes it sound like they might be hard — but they’re not supposed to be hard. If they’re right, they should be easy and in some sense well-known! But there are various slightly different concepts of ‘rooted tree’ and ‘rooted planar tree’, and we have to get the details right to make these conjectures true. For example, a graph theorist might draw a rooted planar tree like this:

while an operad theorist might draw it like this:

If the conjectures are right, we can use them to define the concepts of ‘rooted tree’ and ‘rooted planar tree’, thus side-stepping these details. And having purely operad-theoretic definitions of ‘tree’ and ‘rooted tree’ would make it a lot easier to use these concepts in operad theory. That’s what I want to do, ultimately. But proving these conjectures, and of course providing the precise definitions of ‘rooted tree’ and ‘rooted planar tree’ that make them true, would still be very nice.

And it would be even nicer if someone had already done this. So please provide references… and/or correct mistakes in the following stuff!

Rooted Trees

Definition. For any natural number $n = 0, 1, 2, \dots$, an $n$-tree is a quadruple $T = (V, E, s, t)$ where:

  • $V$ is a finite set whose elements are called internal vertices;
  • $E$ is a finite non-empty set whose elements are called edges;
  • $s : E \to V \sqcup \{1,\dots, n\}$ and $t : E \to V \sqcup \{0\}$ are maps sending any edge to its source and target, respectively.

Given $u, v \in V \sqcup \{0\} \sqcup \{1,\dots, n\}$, we write $u \stackrel{e}{\longrightarrow} v$ if $e \in E$ is an edge such that $s(e) = u$ and $t(e) = v$.

This data is required to satisfy the following conditions:

  • $s : E \to V \sqcup \{1,\dots, n\}$ is a bijection;
  • there exists exactly one $e \in E$ such that $t(e) = 0$;
  • for any $v \in V \sqcup \{1,\dots, n\}$ there exists a directed edge path from $v$ to $0$: that is, a sequence of edges $e_0, \dots, e_n$ and vertices $v_1, \dots, v_n$ such that $v \stackrel{e_0}{\longrightarrow} v_1,\ v_1 \stackrel{e_1}{\longrightarrow} v_2,\ \dots,\ v_n \stackrel{e_n}{\longrightarrow} 0$.

So the idea is that our tree has $V \sqcup \{0\} \sqcup \{1,\dots, n\}$ as its set of vertices. There could be lots of leaves, but we’ve labelled some of them by numbers $1, \dots, n$. In our pictures, the source of each edge is at its top, while the target is at bottom.
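To make the bookkeeping concrete, here is one possible encoding of this definition in code (my own sketch, not anything from the post; the class name NTree and the example tree are invented), with the three conditions checked directly:

from dataclasses import dataclass

@dataclass
class NTree:
    n: int
    V: set            # internal vertices (any hashable names)
    E: set            # edge names
    s: dict           # edge -> its source: an internal vertex or a leaf label in {1..n}
    t: dict           # edge -> its target: an internal vertex or the root 0

    def is_valid(self) -> bool:
        leaves = set(range(1, self.n + 1))
        # s is a bijection from E onto the disjoint union of V and {1,...,n}
        if set(self.s.values()) != self.V | leaves or len(set(self.s.values())) != len(self.E):
            return False
        # exactly one edge has the root 0 as its target
        if sum(1 for e in self.E if self.t[e] == 0) != 1:
            return False
        # every internal vertex and leaf has a directed edge path down to 0
        step = {self.s[e]: self.t[e] for e in self.E}   # each source has a unique outgoing edge
        for v in self.V | leaves:
            seen, cur = set(), v
            while cur != 0:
                if cur in seen or cur not in step:
                    return False
                seen.add(cur)
                cur = step[cur]
        return True

# A 2-tree with one internal vertex: leaves 1 and 2 feed into 'v', which feeds the root 0.
T = NTree(n=2, V={'v'}, E={'e1', 'e2', 'e3'},
          s={'e1': 1, 'e2': 2, 'e3': 'v'},
          t={'e1': 'v', 'e2': 'v', 'e3': 0})
print(T.is_valid())   # True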

There is a root, called $0$, but also a ‘pre-root’: the unique vertex with an edge going from it to $0$. I’m not sure I like this last bit, and we might be able to eliminate this redundancy, but it’s built into the operad theorist’s picture here:

It might be a purely esthetic issue. Like everything else, it gets a bit more scary when we consider degenerate special cases.

I’m hoping there’s an operad $Tree$ whose set of $n$-ary operations, $Tree_n$, consists of isomorphism classes of $n$-trees as defined above. I’m hoping someone has already proved this. And I hope someone has characterized this operad $Tree$ in a more algebraic way, as follows.

Definition. A symmetric collection $C$ consists of topological spaces $\{C_n\}_{n \ge 0}$ together with a continuous action of the symmetric group $S_n$ on each space $C_n$. A morphism of symmetric collections $f : C \to C'$ consists of an $S_n$-equivariant continuous map $f_n : C_n \to C'_n$ for each $n \ge 0$. Symmetric collections and morphisms between them form a category $STop$.

(More concisely, if we denote the groupoid of sets of the form $\{1, \dots, n\}$ and bijections between these as $S$, then $STop$ is the category of functors from $S$ to $Top$.)

There is a forgetful functor from operads to symmetric collections

$U : Op \to STop$

with a left adjoint

$F : STop \to Op$

assigning to each symmetric collection the operad freely generated by it.

Definition. Let $Comm$ be the terminal operad: that is, the operad, unique up to isomorphism, such that $Comm_n$ is a 1-element set for each $n \ge 0$.

The algebras of $Comm$ are commutative topological monoids.

Conjecture 1. There is a unique isomorphism of operads

$\phi : F(U(Comm)) \stackrel{\sim}{\longrightarrow} Tree$

that for each $n \ge 0$ sends the unique $n$-ary operation in $Comm$ to the corolla with $n$ leaves: that is, the isomorphism class of $n$-trees with no internal vertices.

(When I say “the unique $n$-ary operation in $Comm$” but treat it as an operation in $F(U(Comm))$, I’m using the fact that the unique operation in $Comm_n$ gives an element of $U(Comm)_n$, and thus an operation in $F(U(Comm))_n$.)

Planar Rooted Trees

And there should be a similar result relating planar rooted trees to collections without symmetric group actions!

Definition. A planar $n$-tree is an $n$-tree in which each internal vertex $v$ is equipped with a linear order on the set of its children, i.e. the set $t^{-1}(v)$.
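
Continuing the illustrative encoding from the sketch further up (again my own convention, and reusing the hypothetical is_n_tree from that sketch), the extra planar structure can be recorded as a tuple listing the incoming edges of each internal vertex in their chosen order:

    # Hedged sketch: a planar n-tree = an n-tree plus, for each internal
    # vertex v, a linear order on its children t^{-1}(v), stored as a
    # tuple order[v] of the edges coming into v.
    def is_planar_n_tree(V, E, s, t, order, n):
        if not is_n_tree(V, E, s, t, n):
            return False
        for v in V:
            incoming = {e for e in E if t[e] == v}
            if v not in order:
                return False
            if len(order[v]) != len(set(order[v])):
                return False             # listed edges must be distinct
            if set(order[v]) != incoming:
                return False             # and must be exactly the children
        return True

    # The 2-tree from before, now drawn with leaf 2 to the left of leaf 1:
    order = {"a": ("e2", "e1")}
    print(is_planar_n_tree(V, E, s, t, order, n=2))   # True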

I’m hoping someone has constructed an operad $PTree$ whose set of $n$-ary operations, $PTree_n$, consists of isomorphism classes of planar $n$-trees. And I hope someone has characterized this operad $PTree$ as follows:

Definition. A collection $C$ consists of topological spaces $\{C_n\}_{n \ge 0}$. A morphism of collections $f : C \to C'$ consists of a continuous map $f_n : C_n \to C'_n$ for each $n \ge 0$. Collections and morphisms between them form a category $\mathbb{N}Top$.

(If we denote the category with natural numbers as objects and only identity morphisms between them as $\mathbb{N}$, then $\mathbb{N}Top$ is the category of functors from $\mathbb{N}$ to $Top$.)

There is a forgetful functor

$\Upsilon : Op \to \mathbb{N}Top$

with a left adjoint

$\Phi : \mathbb{N}Top \to Op$

assigning to each collection the operad freely generated by it. This left adjoint is the composite

$\mathbb{N}Top \stackrel{G}{\longrightarrow} STop \stackrel{F}{\longrightarrow} Op$

where the first functor freely creates a symmetric collection $G(C)$ from a collection $C$ by setting $G(C)_n = S_n \times C_n$, and the second freely generates an operad from a symmetric collection, as described above.
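
As a toy, set-level rendering of that first step (a hedged sketch of my own, for finite discrete collections only; real collections live in $Top$), $G$ simply pairs each operation with every permutation of its inputs:

    from itertools import permutations

    # Hedged sketch: for a collection C given as {arity: list of operations},
    # build G(C) with G(C)_n = S_n x C_n, i.e. n! copies of each n-ary
    # operation, one for every permutation of {1, ..., n}.
    def free_symmetric_collection(C):
        return {n: [(sigma, c)
                    for sigma in permutations(range(1, n + 1))
                    for c in ops]
                for n, ops in C.items()}

    # One generating operation in each arity 0..3, i.e. the underlying
    # collection of Comm restricted to these arities:
    C = {n: ["*"] for n in range(4)}
    G = free_symmetric_collection(C)
    print({n: len(G[n]) for n in G})   # {0: 1, 1: 1, 2: 2, 3: 6} = n! * |C_n|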

Conjecture 2. There is a unique isomorphism of operads

$\psi : \Phi(\Upsilon(Comm)) \stackrel{\sim}{\longrightarrow} PTree$

that for each $n \ge 0$ sends the unique $n$-ary operation in $Comm$ to the corolla with $n$ leaves, ordered so that $1 < \cdots < n$.

Have you seen a proof of this stuff???

by john (baez@math.ucr.edu) at April 09, 2014 02:49 PM

Andrew Jaffe - Leaves on the Line

Spring & Summer Science

As the academic year winds to a close, scientists’ thoughts turn towards all of the warm-weather travel ahead (in order to avoid thinking about exam marking). Mostly, that means attending scientific conferences, like the upcoming IAU Symposium, Statistical Challenges in 21st Century Cosmology in Lisbon next month, and (for me and my collaborators) the usual series of meetings to prepare for the 2014 release of Planck data. But there are also opportunities for us to interact with people outside of our technical fields: public lectures and festivals.

Next month, parallel to the famous Hay Festival of Literature & the Arts, the town of Hay-on-Wye also hosts How The Light Gets In, concentrating on the also-important disciplines of philosophy and music, with a strong strand of science thrown in. This year, along with comic book writer Warren Ellis, cringe-inducing politicians like Michael Howard and George Galloway, and ubiquitous semi-intellectuals like Joan Bakewell, there will be quite a few scientists, with a skew towards the crowd-friendly and controversial. I’m not sure that I want to hear Rupert Sheldrake talk about the efficacy of science and the scientific method, although it might be interesting to hear Julian Barbour, Huw Price, and Lee Smolin talk about the arrow of time. Some of the descriptions are inscrutable enough to pique my interest: Nancy Cartwright and George Ellis will discuss “Ultimate Proof” — I can’t quite figure out if that means physics or epistemology. Perhaps similarly, chemist Peter Atkins will ask “Can science explain all of existence?” (and apparently answer in the affirmative). Closer to my own wheelhouse, Roger Penrose, Laura Mersini-Houghton, and John Ellis will discuss whether it is “just possible the Big Bang will turn out to be a mistake”. Penrose was and is one of the smartest people to work out the consequences of Einstein’s general theory of relativity, though in the last few years his cosmological musings have proven to be, well, just plain wrong — but, as I said, controversial and crowd-pleasing… (Disclosure: someone from the festival called me up and asked me to write about it here.)

Alas, I’ll likely be in Lisbon, instead of Hay. But if you want to hear me speak, you can make your way up North to Grantham, where Isaac Newton was educated, for this year’s Gravity Fields festival in late September. The line-up isn’t set yet, but I’ll be there, as will my fellow astronomers Chris Lintott and Catherine Heymans and particle physicist Val Gibson, alongside musicians, dancers, and lots of opportunities to explore the wilds of Lincolnshire. Or if you want to see me before then (and prefer to stay in London), you can come to Imperial for my much-delayed Inaugural Professorial Lecture on May 21, details TBC…

by Andrew at April 09, 2014 12:08 PM

John Baez - Azimuth

What Does the New IPCC Report Say About Climate Change? (Part 2)

guest post by Steve Easterbrook

(2) Humans caused the majority of it

The summary for policymakers says:

It is extremely likely that human influence has been the dominant cause of the observed warming since the mid-20th century.


(Box 13.1 fig 1) The Earth’s energy budget from 1970 to 2011. Cumulative energy flux (in zettajoules!) into the Earth system from well-mixed and short-lived greenhouse gases, solar forcing, changes in tropospheric aerosol forcing, volcanic forcing and surface albedo, (relative to 1860–1879) are shown by the coloured lines and these are added to give the cumulative energy inflow (black; including black carbon on snow and combined contrails and contrail induced cirrus, not shown separately).

This chart summarizes the impact of different drivers of warming and/or cooling, by showing the total cumulative energy added to the earth system since 1970 from each driver. Note that the chart is in zettajoules (10²¹ J). For comparison, one zettajoule is about the energy that would be released from 200 million bombs of the size of the one dropped on Hiroshima. The world’s total annual global energy consumption is about 0.5 zettajoule.

Long-lived greenhouse gases, such as CO2, contribute the majority of the warming (the purple line). Aerosols, such as particles of industrial pollution, block out sunlight and cause some cooling (the dark blue line), but nowhere near enough to offset the warming from greenhouse gases. Note that aerosols have the largest uncertainty bar; much of the remaining uncertainty about the likely magnitude of future climate warming is due to uncertainty about how much of the warming might be offset by aerosols. The uncertainty on the aerosols curve is, in turn, responsible for most of the uncertainty on the black line, which shows the total effect if you add up all the individual contributions.

The graph also puts into perspective some of the other things that people like to blame for climate change, including changes in the energy received from the sun (‘solar’) and the impact of volcanoes. Changes in the sun (shown in orange) are tiny compared to greenhouse gases, but do show a very slight warming effect. Volcanoes have a larger (cooling) effect, but it is short-lived. There were two major volcanic eruptions in this period, El Chichón in 1982 and Pinatubo in 1991. Each can be clearly seen in the graph as an immediate cooling effect, which then tapers off after a couple of years.


You can download all of Climate Change 2013: The Physical Science Basis here. It’s also available chapter by chapter here:

  1. Front Matter
  2. Summary for Policymakers
  3. Technical Summary
    1. Supplementary Material

Chapters

  1. Introduction
  2. Observations: Atmosphere and Surface
    1. Supplementary Material
  3. Observations: Ocean
  4. Observations: Cryosphere
    1. Supplementary Material
  5. Information from Paleoclimate Archives
  6. Carbon and Other Biogeochemical Cycles
    1. Supplementary Material
  7. Clouds and Aerosols

    1. Supplementary Material
  8. Anthropogenic and Natural Radiative Forcing
    1. Supplementary Material
  9. Evaluation of Climate Models
  10. Detection and Attribution of Climate Change: from Global to Regional
    1. Supplementary Material
  11. Near-term Climate Change: Projections and Predictability
  12. Long-term Climate Change: Projections, Commitments and Irreversibility
  13. Sea Level Change
    1. Supplementary Material
  14. Climate Phenomena and their Relevance for Future Regional Climate Change
    1. Supplementary Material

Annexes

  1. Annex I: Atlas of Global and Regional Climate Projections
    1. Supplementary Material: RCP2.6, RCP4.5, RCP6.0, RCP8.5
  2. Annex II: Climate System Scenario Tables
  3. Annex III: Glossary
  4. Annex IV: Acronyms
  5. Annex V: Contributors to the WGI Fifth Assessment Report
  6. Annex VI: Expert Reviewers of the WGI Fifth Assessment Report

by John Baez at April 09, 2014 10:59 AM

Quantum Diaries

Major harvest of four-leaf clover

The LHCb Collaboration at CERN has just confirmed the unambiguous observation of a very exotic state, something that looks strangely like a particle made of four quarks. As exotic as it might be, this particle goes by the sober name of Z(4430)-, which gives its mass, 4430 MeV, roughly four times that of a proton, and indicates that it has a negative electric charge. The letter Z shows that it belongs to a strange series of particles that are referred to as the XYZ states.

So what’s so special about this state? The conventional quark model is simple: there are six different quarks, each coming with its antiparticle. These particles form bound states by combining in twos or threes. Protons and neutrons, for example, are made of three quarks. All states made of three quarks are called baryons. Other particles like pions and kaons, which are often found in the decays of heavier particles, are made of one quark and one antiquark. These form the meson category. Until 2003, the hundreds of particles observed were all classified either as mesons or as baryons.

And then came the big surprise: in 2003, the BELLE experiment found a state that looked like a bound state of four quarks. Many other exotic states have been observed since. These states often look like charmonium or bottomonium states, which contain a charm quark and a charm antiquark, or a bottom quark and a bottom antiquark. Last spring, the BESIII collaboration from Beijing confirmed the observation of the Zc(3900)+ state, also seen by BELLE.

On April 8, the LHCb collaboration reported having found the Z(4430)- with ten times more events than all previous groups. The data sample is so large that it enabled LHCb to measure some of its properties unambiguously. Determining the exact quantum numbers of a particle is like taking its fingerprints: it allows physicists to find out exactly what kind of particle it is. The Z(4430)- state appears to be made of a charm quark, a charm antiquark, a down quark and an anti-up quark. This measurement rules out several other possibilities.


The squared-mass distribution for the 25,200 B meson decays to ψ’ π- found by LHCb in their entire data set. The black points represent the data; the red curve is the result of the simulation when the presence of the Z(4430)- state is included. The dashed light brown curve below shows that the simulation fails to reproduce the data if no contribution from the Z(4430)- is included, establishing the clear presence of this particle with a significance of 13.9σ (that is, the signal is 13.9 times stronger than all possible combined statistical fluctuations; the fluctuation of each point is shown by the small vertical error bar attached to it).
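
As an aside of my own (not something taken from the LHCb paper): under the usual one-sided Gaussian convention, the probability that statistical fluctuations alone produce an excess as strong as 13.9σ can be read off from the normal tail function, for instance with scipy:

    from scipy.stats import norm

    # One-sided Gaussian tail probability for an n-sigma excess.
    # 5 sigma (the usual discovery threshold) is about 3e-7;
    # 13.9 sigma is of order 1e-44, i.e. effectively zero.
    for n_sigma in (3.0, 5.0, 13.9):
        print(n_sigma, norm.sf(n_sigma))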

Theorists are hard at work now trying to come up with a model to describe these new states. Is this a completely new tetraquark, a bound state of four quarks, or some strange combination of two charmed mesons (mesons containing at least one charm quark)? The question is still open.

Pauline Gagnon

To be alerted of new postings, follow me on Twitter: @GagnonPauline or sign up on this mailing list to receive an e-mail notification.

For more information, see the LHCb website

by CERN at April 09, 2014 09:56 AM

Quantum Diaries

A major harvest of four-leaf clovers

The LHCb Collaboration at CERN has just confirmed beyond any doubt the existence of a very exotic state, something that looks strangely like a particle made of four quarks. As exotic as it may seem, this particle bears the very pragmatic name Z(4430)-. The name gives its mass, 4430 MeV, about four times that of a proton, and signals that it carries a negative electric charge. The letter Z indicates that it belongs to a strange series of particles collectively known as the XYZ states.

So what is so special about this state? The conventional quark model is quite simple: there are six different quarks, each coming with its antiparticle. These twelve particles can combine in groups of two or three to form bound states. Protons and neutrons, for example, are made of three quarks. All states made of three quarks are called baryons. Other particles, such as the pions and kaons often found in the decays of heavier particles, are made of one quark and one antiquark. They belong to the meson category. The hundreds of particles observed up to 2003 had all been classified either as mesons or as baryons.

Then came the big surprise: in 2003, the BELLE experiment found the first state apparently made of four quarks. Many other similar exotic states have been observed since. These states often resemble charmonium or bottomonium states, particles containing respectively a charm quark and a charm antiquark, or a bottom quark and a bottom antiquark. Last spring, the BESIII collaboration in Beijing confirmed the observation of the Zc(3900)+, a state also detected by BELLE.

On April 8, the LHCb collaboration reported having found the Z(4430)- state with ten times more events than all previous groups. Their data sample is so large that it allowed LHCb to measure some of its properties unambiguously. Determining the exact quantum numbers of a particle is like taking its fingerprints: it lets physicists pin down exactly which particle they are dealing with. The Z(4430)- state thus appears to be made of a charm quark, a charm antiquark, a down quark and an anti-up quark. Their measurement rules out all other possibilities.

The distribution of the squared mass of the 25,200 B meson decays to ψ’ π- found by LHCb in their entire data set. The black points represent the experimental data and the red curve the result of the simulation when the presence of the Z(4430)- is included. The dashed light-brown curve just below shows that the simulation cannot reproduce the data if the Z(4430)- contribution is removed. This clearly establishes the presence of this particle with a significance of 13.9σ (that is, the signal is 13.9 times stronger than all possible combined statistical fluctuations; the fluctuation of each point is represented by the small vertical line attached to it).

Theorists are now hard at work trying to come up with a model that can describe these new states. Are these completely new objects made of four bound quarks, tetraquarks, or some strange combination of two charmed mesons (mesons containing at least one charm quark)? The question remains open.

Pauline Gagnon

To be alerted when new blogs appear, follow me on Twitter: @GagnonPauline or add your name to this e-mail distribution list.

For more details (in English), see the LHCb website.

 

by CERN (Francais) at April 09, 2014 09:55 AM

Tommaso Dorigo - Scientificblogging

What Next ?
Yesterday I was in Rome, at a workshop organized by the Italian National Institute for Nuclear Physics (INFN), titled "What Next". The event was meant to discuss the plan for basic research in fundamental physics and astrophysics beyond the next decade or so, given the input we have and the input we might collect in the next few years at accelerators and other facilities.


by Tommaso Dorigo at April 09, 2014 07:49 AM

April 08, 2014

Sean Carroll - Preposterous Universe

Chaos, Hallucinogens, Virtual Reality, and the Science of Self

Chaotic Awesome is a webseries hosted by Chloe Dykstra and Michele Morrow, generally focused on all things geeky, such as gaming and technology. But the good influence of correspondent Christina Ochoa ensures that there is also a healthy dose of real science on the show. It was a perfect venue for Jennifer Ouellette — science writer extraordinaire, as well as beloved spouse of your humble blogger — to talk about her latest masterwork, Me, Myself, and Why: Searching for the Science of Self.

Jennifer’s book runs the gamut from the role of genes in forming personality to the nature of consciousness as an emergent phenomenon. But it also fits very naturally into a discussion of gaming, since our brains tend to identify very strongly with avatars that represent us in virtual spaces. (My favorite example is Jaron Lanier’s virtual lobster — the homuncular body map inside our brain is flexible enough to “grow new limbs” when an avatar takes a dramatically non-human form.) And just for fun (for the sake of scientific research, of course), Jennifer and her husband tried out some psychoactive substances that affect the self/other boundary in a profound way. I’m mostly a theorist, myself, but willing to collect data when it’s absolutely necessary.

by Sean Carroll at April 08, 2014 10:54 PM

Symmetrybreaking - Fermilab/SLAC

Searching for the holographic universe

Physicist Aaron Chou keeps the Holometer experiment—which looks for a phenomenon whose implications border on the unreal—grounded in the realities of day-to-day operations.

The beauty of the small operation—the mom-and-pop restaurant or the do-it-yourself home repair—is that pragmatism begets creativity. The industrious individual who makes do with limited resources is compelled onto paths of ingenuity, inventing rather than following rules to address the project’s peculiarities.

by Leah Hesla at April 08, 2014 06:45 PM

ZapperZ - Physics and Physicists

"An Engineering Guide To Photoinjectors"
How would you like to own a 335-page book on the physics and engineering of electron photoinjectors? For free!

That is what you will get if you click on the link. If you are ever interested in electron accelerators, especially at the "birthing" end where the electrons are generated and given the initial acceleration, this is the review book to get. It explores not only the engineering aspect of the photoinjectors, but also the physics of photocathodes, and what makes a good photocathode for accelerator applications.

Highly recommended.

Zz.

by ZapperZ (noreply@blogger.com) at April 08, 2014 02:50 PM

Lubos Motl - string vacua and pheno

Alan Guth's 1979 handwritten notes
First, let me mention that the CERN accelerator complex is in the middle of the Two Years' Vacation, which is the right moment to begin its reawakening. See LHC begins long road to restart at the Symmetry Magazine.



They first restart the source, then the smaller rings and boosters, and so on: the chronology of the restart pretty much mimics the 6-minute video above describing the CERN rings, culminating in the LHC itself. In one year, i.e. around April 2015, collisions at 13 TeV should begin. Because of the quantitative leap in energy, all "null results" may be instantly forgotten and there will be a completely new chance – but no certainty – to detect previously undetected physical phenomena.

But I want to mention another article at the Symmetry Magazine that was posted in 2005.




The article is called Inflation and it contains photographs of some of the first notebook pages on which Alan Guth recorded the realization that allowed cosmic inflation to be born.




Now, Alan Guth won the "messiest office" contest, but that didn't prevent him from preserving the first inflationary notebook pages from December 1979!




If you can't read it (the notebook is part of the permanent exhibition at the Adler Planetarium in Chicago), it says:
EV 5
Dec 7, 1979

Spectacular realization

This kind of supercooling can explain why the universe today is so incredibly flat – and therefore resolve the fine-tuning paradox pointed out by Bob Dicke in his Einstein day lectures.

Let me first rederive the Dicke paradox. He relies on the empirical fact that the deceleration parameter today \(q_0\) is of order 1.
Just to be sure, the Dicke paradox is the puzzling observation that \(\Omega=1\): the Universe seems to be flat without any good reason in Big Bang cosmology. As far as I understand, the "supercooling" mechanism was just Guth's original terminology for the old inflation – with a false vacuum state that ultimately tunnels into the true vacuum. Only weeks after these notes were written, Guth would hear about the horizon problem. But he would already be giving talks about "inflation" at the end of January 1980 (01/23/1980, almost exactly a year before the first inauguration of Ronald Reagan).
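
To spell the paradox out a little (a standard textbook sketch in FRW notation, not a transcription of Guth's notes): writing the Friedmann equation as
\[
H^2 = \frac{8\pi G}{3}\,\rho - \frac{k c^2}{a^2},
\qquad
\Omega \equiv \frac{\rho}{\rho_{\rm crit}},
\qquad
\rho_{\rm crit} \equiv \frac{3H^2}{8\pi G},
\]
one finds
\[
|\Omega(t) - 1| = \frac{|k| c^2}{\dot a^2}.
\]
In a decelerating, matter- or radiation-dominated universe \(\dot a\) decreases, so \(|\Omega-1|\) grows with time (roughly as \(t^{2/3}\) or \(t\), respectively); seeing \(\Omega\sim 1\) and \(q_0\sim 1\) today then requires \(\Omega\) to have been tuned to 1 with absurd precision at early times. During a supercooled, exponentially expanding phase \(a\propto e^{Ht}\), on the other hand, \(\dot a\) grows and \(|\Omega-1|\propto e^{-2Ht}\) is driven toward zero, which is exactly how inflation defuses the paradox.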

Note that the inflationary expansion is a fancy example of "supercooling", i.e. cooling so fast, without sufficiently good condensation nuclei, that the liquid doesn't solidify during the process. The temperature during inflation decreases by a factor of 100,000 (no, that is not one of the googol-like huge numbers that describe the linear expansion of space, but it is still large), from \(10^{27}\) to \(10^{22}\) kelvins. At the end of inflation, the temperature increases again – this is the "reheating" during which the seeds of the matter that later turns into galaxies and us are born out of the kinetic energy of the inflaton field.



A continuation of the page above.

"So, after a few of the most productive hours I had ever spent at my desk, I had learned something remarkable. Would the supercooled phase transition affect the expansion rate of the universe? By 1:00 a.m. I knew the answer: Yes, more than I could have ever imagined."

Alan Guth, The Inflationary Universe, Cambridge MA, 1998

by Luboš Motl (noreply@blogger.com) at April 08, 2014 11:53 AM