Particle Physics Planet

August 29, 2015

Christian P. Robert - xi'an's og

no country for ‘Og snaps?!

A few days ago, I got an anonymous comment complaining about my tendency to post pictures “no one is interested in” on the ‘Og and suggesting I move them to another electronic medium like Twitter or Instagram, so as to avoid readers having to sort through the blog entries for statistics-related ones, to separate the wheat from the chaff… While my first reaction was (unsurprisingly) one of irritation, a more constructive one is to point out to all (un)interested readers that they can always subscribe by RSS to the Statistics category (and skip the chaff), just like R bloggers only posts my R related entries. Now, if more ‘Og’s readers find the presumably increasing flow of pictures a nuisance, just let me know and I will try to curb this avalanche of pixels… Not certain that I succeed, though!

Filed under: Mountains, pictures, Travel Tagged: North Cascades National Park, Og, R, Rainy Pass, Washington State

by xi'an at August 29, 2015 10:15 PM

Peter Coles - In the Dark

Statistics in Astronomy

A few people at the STFC Summer School for new PhD students in Cardiff last week asked if I could share the slides. I’ve given the Powerpoint presentation to the organizers so presumably they will make the presentation available, but I thought I’d include it here too. I’ve corrected a couple of glitches I introduced trying to do some last-minute hacking just before my talk!

As you will infer from the slides, I decided not to compress an entire course on statistical methods into a one-hour talk. Instead I tried to focus on basic principles, primarily to get across the importance of Bayesian methods for tackling the usual tasks of hypothesis testing and parameter estimation. The Bayesian framework offers the only mathematically consistent way of tackling such problems and should therefore be the preferred method of using data to test theories. Of course if you have data but no theory, or a theory but no data, any method is going to struggle. And if you have neither data nor theory you’d be better off getting one or the other before trying to do anything. Failing that, you could always go down the pub.

Rather than just leave it at that I thought I’d append some stuff  I’ve written about previously on this blog, many years ago, about the interesting historical connections between Astronomy and Statistics.

Once the basics of mathematical probability had been worked out, it became possible to think about applying probabilistic notions to problems in natural philosophy. Not surprisingly, many of these problems were of astronomical origin but, on the way, the astronomers that tackled them also derived some of the basic concepts of statistical theory and practice. Statistics wasn’t just something that astronomers took off the shelf and used; they made fundamental contributions to the development of the subject itself.

The modern subject we now know as physics really began in the 16th and 17th century, although at that time it was usually called Natural Philosophy. The greatest early work in theoretical physics was undoubtedly Newton’s great Principia, published in 1687, which presented his idea of universal gravitation which, together with his famous three laws of motion, enabled him to account for the orbits of the planets around the Sun. But majestic though Newton’s achievements undoubtedly were, I think it is fair to say that the originator of modern physics was Galileo Galilei.

Galileo wasn’t as much of a mathematical genius as Newton, but he was highly imaginative, versatile and (very much unlike Newton) had an outgoing personality. He was also an able musician, fine artist and talented writer: in other words a true Renaissance man.  His fame as a scientist largely depends on discoveries he made with the telescope. In particular, in 1610 he observed the four largest satellites of Jupiter, the phases of Venus and sunspots. He immediately leapt to the conclusion that not everything in the sky could be orbiting the Earth and openly promoted the Copernican view that the Sun was at the centre of the solar system with the planets orbiting around it. The Catholic Church was resistant to these ideas. He was hauled up in front of the Inquisition and placed under house arrest. He died in the year Newton was born (1642).

These aspects of Galileo’s life are probably familiar to most readers, but hidden away among scientific manuscripts and notebooks is an important first step towards a systematic method of statistical data analysis. Galileo performed numerous experiments, though he certainly did not carry out the one with which he is most commonly credited. He did establish that the speed at which bodies fall is independent of their weight, not by dropping things off the leaning tower of Pisa but by rolling balls down inclined slopes. In the course of his numerous forays into experimental physics Galileo realised that however carefully he took measurements, the simplicity of the equipment available to him left him with quite large uncertainties in some of the results. He was able to estimate the accuracy of his measurements using repeated trials and sometimes ended up with a situation in which some measurements had larger estimated errors than others. This is a common occurrence in many kinds of experiment to this day.

Very often the problem we have in front of us is to measure two variables in an experiment, say X and Y. It doesn’t really matter what these two things are, except that X is assumed to be something one can control or measure easily and Y is whatever it is the experiment is supposed to yield information about. In order to establish whether there is a relationship between X and Y one can imagine a series of experiments where X is systematically varied and the resulting Y measured.  The pairs of (X,Y) values can then be plotted on a graph like the example shown in the Figure.


In this example it certainly looks like there is a straight line linking Y and X, but with small deviations above and below the line caused by the errors in measurement of Y. You could quite easily take a ruler and draw a line of “best fit” by eye through these measurements. I spent many a tedious afternoon in the physics labs doing this sort of thing when I was at school. Ideally, though, what one wants is some procedure for fitting a mathematical function to a set of data automatically, without requiring any subjective intervention or artistic skill. Galileo found a way to do this. Imagine you have a set of pairs of measurements (xi,yi) to which you would like to fit a straight line of the form y=mx+c. One way to do it is to find the line that minimizes some measure of the spread of the measured values around the theoretical line. The way Galileo did this was to work out the sum of the differences between the measured yi and the predicted values mxi+c at the measured values x=xi. He used the absolute difference |yi-(mxi+c)| so that the resulting optimal line would, roughly speaking, have as many of the measured points above it as below it. This general idea is now part of the standard practice of data analysis, and as far as I am aware, Galileo was the first scientist to grapple with the problem of dealing properly with experimental error.
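Galileo’s criterion – minimising the summed absolute deviations – is easy to try on synthetic data. Here is a minimal sketch in Python; the data, noise level and starting values are made up for illustration:

```python
import numpy as np
from scipy.optimize import minimize

# synthetic data: a straight line y = 2x + 1 with measurement noise
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 20)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.5, x.size)

# Galileo's criterion: minimise the sum of absolute deviations |yi - (m*xi + c)|
def total_abs_dev(params):
    m, c = params
    return np.sum(np.abs(y - (m * x + c)))

m_fit, c_fit = minimize(total_abs_dev, x0=[1.0, 0.0],
                        method="Nelder-Mead").x
print(m_fit, c_fit)  # close to the true slope 2 and intercept 1
```

Because the absolute value is not differentiable at zero, a derivative-free optimiser such as Nelder–Mead is a convenient choice here.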


The method used by Galileo was not quite the best way to crack the puzzle, but he had it almost right. It was again an astronomer who provided the missing piece and gave us essentially the same method used by statisticians (and astronomers) today.

Carl Friedrich Gauss was undoubtedly one of the greatest mathematicians of all time, so it might be objected that he wasn’t really an astronomer. Nevertheless he was director of the Observatory at Göttingen for most of his working life and was a keen observer and experimentalist. In 1809, he developed Galileo’s ideas into the method of least-squares, which is still used today for curve fitting.

This approach follows basically the same procedure but minimizes the sum of [yi-(mxi+c)]² rather than the sum of |yi-(mxi+c)|. This leads to a much more elegant mathematical treatment of the resulting deviations – the “residuals”. Gauss also did fundamental work on the mathematical theory of errors in general. The normal distribution is often called the Gaussian curve in his honour.
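Unlike the absolute-deviation criterion, the least-squares fit has a closed-form solution. A short sketch, again on made-up synthetic data:

```python
import numpy as np

# synthetic data: y = 2x + 1 plus Gaussian noise
rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 20)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.5, x.size)

# minimising the sum of [yi - (m*xi + c)]^2 has a closed-form solution;
# np.linalg.lstsq solves the corresponding normal equations
A = np.column_stack([x, np.ones_like(x)])
(m, c), *_ = np.linalg.lstsq(A, y, rcond=None)

residuals = y - (m * x + c)
print(m, c)              # close to the true slope 2 and intercept 1
print(residuals.std())   # spread of residuals, comparable to the 0.5 noise
```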

After Galileo, the development of statistics as a means of data analysis in natural philosophy was dominated by astronomers. I can’t possibly go systematically through all the significant contributors, but I think it is worth devoting a paragraph or two to a few famous names.

I’ve already written on this blog about Jakob Bernoulli, whose famous book on probability was (probably) written during the 1690s. But Jakob was just one member of an extraordinary Swiss family that produced at least 11 important figures in the history of mathematics.  Among them was Daniel Bernoulli who was born in 1700.  Along with the other members of his famous family, he had interests that ranged from astronomy to zoology. He is perhaps most famous for his work on fluid flows which forms the basis of much of modern hydrodynamics, especially Bernoulli’s principle, which accounts for changes in pressure as a gas or liquid flows along a pipe of varying width.
But the elder Jakob’s work on gambling clearly also had some effect on Daniel, as in 1735 the younger Bernoulli published an exceptionally clever study involving the application of probability theory to astronomy. It had been known for centuries that the orbits of the planets are confined to the same part of the sky as seen from Earth, a narrow band called the Zodiac. This is because the Earth and the planets orbit in approximately the same plane around the Sun. The Sun’s path in the sky as the Earth revolves also follows the Zodiac. We now know that the flattened shape of the Solar System holds clues to the processes by which it formed from a rotating cloud of cosmic debris that settled into a disk from which the planets eventually condensed, but this idea was not well established in the time of Daniel Bernoulli. He set himself the challenge of figuring out the probability that the planets were orbiting in the same plane simply by chance, rather than because some physical process confined them to the plane of a protoplanetary disk. His conclusion? The odds against the inclinations of the planetary orbits being aligned by chance were, well, astronomical.
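A back-of-the-envelope version of Bernoulli’s argument is easy to write down (this is a modern restatement, not his actual 1735 calculation; the 7° spread and the count of five other planets are illustrative assumptions):

```python
import numpy as np

# If orbit poles pointed in random directions, the chance that a single
# orbit lies within inc_max of a given reference plane is 1 - cos(inc_max)
inc_max = np.radians(7.0)   # assumed spread of the planetary inclinations
p_one = 1.0 - np.cos(inc_max)

n_planets = 5               # the other planets known in Bernoulli's day
p_all = p_one ** n_planets  # chance that ALL of them line up by accident

print(p_one)   # well under 1% for a single orbit
print(p_all)   # the odds against are, indeed, astronomical
```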

The next “famous” figure I want to mention is not at all as famous as he should be. John Michell was a Cambridge graduate in divinity who became a village rector near Leeds. His most important idea was the suggestion he made in 1783 that sufficiently massive stars could generate such a strong gravitational pull that light would be unable to escape from them.  These objects are now known as black holes (although the name was coined much later by John Archibald Wheeler). In the context of this story, however, he deserves recognition for his use of a statistical argument that the number of close pairs of stars seen in the sky could not arise by chance. He argued that they had to be physically associated, not fortuitous alignments. Michell is therefore credited with the discovery of double stars (or binaries), although compelling observational confirmation had to wait until William Herschel’s work of 1803.

It is impossible to overstate the importance of the role played by Pierre Simon, Marquis de Laplace in the development of statistical theory. His book A Philosophical Essay on Probabilities, which began as an introduction to a much longer and more mathematical work, is probably the first time that a complete framework for the calculation and interpretation of probabilities ever appeared in print. First published in 1814, it is astonishingly modern in outlook.

Laplace began his scientific career as an assistant to Antoine Laurent Lavoisier, one of the founding fathers of chemistry. Laplace’s most important work was in astronomy, specifically in celestial mechanics, which involves explaining the motions of the heavenly bodies using the mathematical theory of dynamics. In 1796 he proposed the theory that the planets were formed from a rotating disk of gas and dust, which is in accord with the earlier assertion by Daniel Bernoulli that the planetary orbits could not be randomly oriented. In 1776 Laplace had also figured out a way of determining the average inclination of the planetary orbits.

A clutch of astronomers, including Laplace, also played important roles in the establishment of the Gaussian or normal distribution.  I have already mentioned Gauss’s own part in this story, but other famous astronomers played their part too. The importance of the Gaussian distribution owes a great deal to a mathematical property called the Central Limit Theorem: the distribution of the sum of a large number of independent variables tends to have the Gaussian form. Laplace in 1810 proved a special case of this theorem, and Gauss himself also discussed it at length.
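The Central Limit Theorem is easy to illustrate numerically: summing many independent uniform random variables already produces a distribution whose mean and width match the Gaussian limit. A quick sketch:

```python
import numpy as np

rng = np.random.default_rng(42)
n, trials = 50, 100_000

# each row is one "experiment": the sum of n independent uniforms on [0, 1]
sums = rng.uniform(0.0, 1.0, (trials, n)).sum(axis=1)

# CLT prediction for the sum: mean n/2, standard deviation sqrt(n/12)
print(sums.mean(), n / 2)            # ~25
print(sums.std(), np.sqrt(n / 12))   # ~2.04
```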

A general proof of the Central Limit Theorem was finally furnished in 1838 by another astronomer, Friedrich Wilhelm Bessel – best known to physicists for the functions named after him – who in the same year was also the first man to measure a star’s distance using the method of parallax. Finally, the name “normal” distribution was coined in 1850 by another astronomer, John Herschel, son of William Herschel.

I hope this gets the message across that the histories of statistics and astronomy are very much linked. Aspiring young astronomers are often dismayed when they enter research by the fact that they need to do a lot of statistical things. I’ve often complained that physics and astronomy education at universities usually includes almost nothing about statistics, even though that is the one thing you can guarantee to use as a researcher in practically any branch of the subject.

Over the years, statistics has become regarded as slightly disreputable by many physicists, perhaps echoing Rutherford’s comment along the lines of “If your experiment needs statistics, you ought to have done a better experiment”. That’s a silly statement anyway because all experiments have some form of error that must be treated statistically, but it is particularly inapplicable to astronomy which is not experimental but observational. Astronomers need to do statistics, and we owe it to the memory of all the great scientists I mentioned above to do our statistics properly.

by telescoper at August 29, 2015 11:56 AM

August 28, 2015

Christian P. Robert - xi'an's og

walking the PCT

The last book I read in the hospital was Wild, by Cheryl Strayed, which was about walking the Pacific Crest Trail (PCT) as a regenerating experience. The book was turned into a movie this year. I did not like the book very much and did not try to watch the film, but when I realised my vacation rental would bring me a dozen miles from the PCT, I planned a day hike along this mythical trail… Especially since my daughter had dreams of hiking the trail one day. (Not realising at the time that Cheryl Strayed had not come that far north, but had stopped at the border between Oregon and Washington.)

The hike was really great, staying on a high ridge for most of the time and offering 360° views of the Eastern North Cascades (as well as forest fire smoke clouds in the distance…) Walking on the trail was very smooth as it was wide enough, with a limited gradient and hardly anyone around. Actually, we felt like intruding tourists on the trail, with our light backpacks, since the few hikers we crossed paths with were long-distance hikers, “doing” the trail with sometimes backpacks that looked as heavy as Strayed’s original “Monster”. And sometimes with incredibly light ones. A striking characteristic of those people is that they all were more than ready to share their experiences and goals, with no complaint about the hardship of being on the trail for several months! And sounding more sorry than eager to reach the Canadian border and the end of the PCT in a few more dozen miles… For instance, a solitary female hiker told us of her plans to get back to the section near Lake Chelan she had missed the week before due to threatening forest fires. A great entry to the PCT, with the dream of walking a larger portion in an undefined future…

Filed under: Books, Kids, Mountains, pictures, Running, Travel Tagged: backpacking, North Cascades National Park, Oregon, Pacific crest trail, PCT, vacations, Washington State

by xi'an at August 28, 2015 10:15 PM

astrobites - astro-ph reader's digest

A Cepheid Anomaly

Title: The Strange Evolution of Large Magellanic Cloud Cepheid OGLE-LMC-CEP1812

Authors: Hilding R. Neilson, Robert G. Izzard, Norbert Langer, and Richard Ignace

First Author’s Institution: Department of Astronomy & Astrophysics, University of Toronto

Status: Accepted to A&A

Figure 1: The Cepheid RS Puppis, one of the brightest and longest-period (41.4 days) Cepheids in the Milky Way. The striking appearance of this Cepheid is a result of the light echoes around it. Image taken with the Hubble Space Telescope.

It’s often tempting to think of stars as unchanging—especially on human timescales—but the more we study the heavens, the more it becomes clear that that isn’t true. Novae, supernovae, and gamma-ray bursts are all examples of sudden increases in brightness that stars can experience. There are also many kinds of variable stars—stars that regularly or irregularly change in brightness from a variety of mechanisms. Classical Cepheid variables are supergiant stars that periodically increase and decrease in luminosity due to their radial pulsations. They are stars that breathe, expanding and contracting like your lungs do when you inhale and exhale. Their regular periods, which are strongly related to their luminosity by the Leavitt law, make them important for measuring distance. Despite their importance in cosmology (as standard candles) and stellar astrophysics (by giving us insight into stellar evolution), there is still a lot that we don’t understand about classical Cepheid variables. One of the biggest mysteries that remains in characterizing them is the Cepheid mass discrepancy.
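The Leavitt law mentioned above is what turns a Cepheid’s period into a distance. A rough sketch using one published V-band calibration (the coefficients are approximate and vary between studies, and the input numbers are made up; real work also corrects for extinction and metallicity):

```python
import numpy as np

def cepheid_distance_pc(period_days, m_V):
    """Rough Leavitt-law distance: period -> absolute magnitude -> distance.

    Uses one approximate V-band calibration,
    M_V = -2.43 * (log10 P - 1) - 4.05, quoted for illustration only.
    """
    M_V = -2.43 * (np.log10(period_days) - 1.0) - 4.05
    mu = m_V - M_V                      # distance modulus m - M
    return 10.0 ** (mu / 5.0 + 1.0)     # from mu = 5 log10(d / 10 pc)

# a 10-day Cepheid seen at apparent magnitude 14 (illustrative numbers)
print(cepheid_distance_pc(10.0, 14.0))  # ~4.1e4 pc, i.e. ~41 kpc
```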

The Cepheid mass discrepancy refers to the fact that, at the same luminosity and temperature, stellar pulsation models generally predict that the Cepheids will have lower masses than stellar evolution models suggest they would. Several possible resolutions to the Cepheid mass discrepancy have been proposed, including pulsation-driven stellar mass loss, rotation, changes in radiative opacities, convective core overshooting in the Cepheid’s progenitor, or a combination of all of these. Measuring the Cepheid’s mass independently would help us constrain this problem, but as you might imagine, it’s not easy to weigh a star. Instead of a scale, our gold standard for determining stellar masses is an eclipsing binary.

An eclipsing binary system is just a system in which one of the orbiting stars passes in front of the other in our line of sight, blocking some of the other star’s light. This leads to variations in the amount of light that we see from the system. Because the orbits of the stars must be aligned nearly edge-on to us for this to happen, eclipsing binaries are quite rare discoveries. However, when we do have such a system, we know exactly at what angle of inclination we are observing it. This makes it possible for us to accurately apply Kepler’s laws to get a measurement for the mass. Eclipsing binaries are highly prized for this reason (they’ve also gained some attention for being a highly-accurate way to measure extragalactic distances, but that’s another story altogether).
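The mass measurement rests on Kepler’s third law: once the inclination is pinned down, the orbital period and separation give the total mass directly. A minimal sketch (the sanity-check numbers are the Earth–Sun orbit, not an actual binary):

```python
import numpy as np

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30    # solar mass, kg
AU = 1.496e11       # astronomical unit, m
YEAR = 3.156e7      # one year, s

def total_mass_solar(period_years, a_au):
    """Total binary mass from Kepler's third law,
    M1 + M2 = 4 pi^2 a^3 / (G P^2), returned in solar masses."""
    P = period_years * YEAR
    a = a_au * AU
    return 4.0 * np.pi**2 * a**3 / (G * P**2) / M_SUN

# sanity check: a 1-year, 1-AU orbit should recover one solar mass
print(total_mass_solar(1.0, 1.0))  # ~1.0
```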

Cepheids in eclipsing binary systems are even rarer—there are currently a total of four that have been discovered in the LMC. One has been discussed on astrobites before (it’s worth looking at the previous bite just to check out the crazy light curve). Since there are so few, and since their masses are so integral to understanding them and determining their basic properties, it’s even more important to study each system as carefully as we can to help us solve these stellar mysteries. The authors of today’s paper take a close look at one of the few eclipsing binary systems we have that contains a Cepheid. 


Figure 2: Figure 1 from the paper, depicting the stellar evolution model for the 3.8 solar mass Cepheid and its 2.7 solar mass red giant companion. The blue and orange shapes show the regions of the Hertzsprung-Russell diagram for each star that is consistent with its measured radius.

Unfortunately, rather than helping us, the subject of today’s astrobite, CEP1812, seems to cause more problems for us. Stellar evolution models indicate that the Cepheid appears to be about 100 Myr younger than its companion star, a red giant (and we expect them to be the same age). Figure 2 shows the evolutionary tracks of the red giant and the Cepheid. Previous papers have suggested that the Cepheid could have captured its companion, but today’s authors believe that this Cepheid could be even stranger—it may have formed from the merger of two smaller main sequence stars. This would mean that the current binary system was once a system of three stars, with the current red giant being the largest of the three. The merger would explain the apparently-younger age of the Cepheid: the resulting star would evolve like a star that started its life with a mass equal to the sum of the two merged stars’ masses, but it would look younger. The red giant, which would have been the outermost star, could have played a role in inducing oscillations in the orbits of the two smaller stars that caused them to merge.

The authors test this proposal by evolving a stellar model of two smaller stars for about 300 Myr before allowing them to merge. The mass is accreted onto the larger of the two stars in about 10 Myr, and the resulting star then evolves normally but appears younger, because the merger mixes additional hydrogen into the core of the star (lower mass stars are less efficient at hydrogen burning), which matches the observations.

The authors argue that if CEP1812 formed from the merger of two main sequence stars, it would be an anomalous Cepheid. Anomalous Cepheids have short periods (up to ~2.5 days), are formed by the merger of two low-mass stars (each usually only ~1 solar mass), have low metallicity, and are found in older stellar populations. There they stand out, since these Cepheids end up being much larger and younger than the surrounding stars. However, CEP1812 is about 3.8 solar masses and also at a higher metallicity than most anomalous Cepheids, making it a highly unusual candidate. CEP1812’s period-luminosity relation and light curve shape are also consistent with both classical Cepheids (which are the kind we use in measuring distances) and with anomalous Cepheids.

If CEP1812 is an anomalous Cepheid, it straddles the border between what we think of as “classical” Cepheids and what we think of as “anomalous” Cepheids. This would make it a risky Cepheid to use for cosmology, but an interesting one for bridging the gap between two different classes of Cepheids. The possibility of it being an anomalous Cepheid is just one resolution to its unique properties. However, if it does turn out that CEP1812 is not just another classical Cepheid, it could be the first of a fascinating subset of rare stars that we haven’t studied yet. Ultimately it’s still too soon to tell, but the authors have opened up an interesting new possibility for exploration.


by Caroline Huang at August 28, 2015 06:21 PM

Peter Coles - In the Dark

Talking and Speaking

Just back to Brighton after a pleasant couple of days in Cardiff, mainly dodging the rain but also making a small contribution to the annual STFC Summer School for new PhD students in Astronomy. Incidentally it’s almost exactly 30 years since I attended a similar event, as a new student myself, at the University of Durham.

Anyway, I gave a lecture yesterday morning on Statistics in Astronomy (I’ll post the slides on here in due course). I was back in action later in the day at a social barbecue held at Techniquest in Cardiff Bay.

Here’s the scene just before I started my turn:


It’s definitely an unusual venue to be giving a speech, but it was fun to do. Here’s a picture of me in action, taken by Ed Gomez:


I was asked to give a “motivational speech” to the assembled students but I figured that since they had all already  chosen to do a PhD they probably already had enough motivation. In any case I find it a bit patronising when oldies like me assume that they have to “inspire” the younger generation of scientists. In my experience, any inspiring is at least as likely to happen in the opposite direction! So in the event  I just told a few jokes and gave a bit of general advice, stressing for example the importance of ignoring your supervisor and of locating the departmental stationery cupboard as quickly as possible. 

It was very nice to see some old friends as well as all the new faces at the summer school. I’d like to take this opportunity to wish everyone about to  embark on a PhD, whether in Astronomy or some other subject, all the very best. You’ll find it challenging but immensely rewarding, so enjoy the adventure!

Oh, and thanks to the organisers for inviting me to take part. I was only there for one day, but the whole event seemed to go off very well!

by telescoper at August 28, 2015 04:29 PM

ATLAS Experiment

Lepton Photon 2015 – Into the Dragon’s Lair

This was my first time in Ljubljana, the capital city of Slovenia – a nation rich with forests and lakes, and the only country that connects the Alps, the Mediterranean and the Pannonian Plain. The slight rain was not an ideal welcome, but knowing that such an important conference was to be held there – together with a beautiful evening stroll – relaxed my mind.

The guardian.

At first, I thought I was somewhere in Switzerland. The beauty of the city and kindness of the local people just amazed me. Similar impressions overwhelmed me once the conference started – it was extremely well organized, with top-level speakers and delicious food. And though I met several colleagues there that I already knew, I felt as though I knew all the participants – so the atmosphere at the presentations was nothing short of enthusiastic and delightful.

Just before the beginning of the conference, the ATLAS detector had started collecting the first data from proton collisions at 13 TeV center-of-mass energy, with a proton bunch spacing of 25 ns. The conference’s opening ceremony was followed by two excellent talks: Dr. Mike Lamont presented the LHC performance in Run 2 and Prof. Beate Heinemann discussed the ATLAS results from Run 2.

Furthermore, at the start of the Lepton Photon 2015 conference, the ALICE experiment announced results confirming CPT, a fundamental symmetry of nature, in agreement with the recent BASE experiment results from lower-energy measurements.

The main building of the University of Ljubljana

The public lecture by Prof. Alan Guth on cosmic inflation and the multiverse was just as outstanding as expected. He entered the conference room with a student bag on his shoulder and a big, warm smile on his face – the ultimate invitation to both scientists and Ljubljana’s citizens. His presentation did an excellent job of explaining, to both experienced and young scientists, the hard work of getting to know the unexplored. While listening to Prof. Guth’s presentation, it seemed like the hour passed in only a few minutes – so superb was his talk.

I was also impressed by some of the participants. Many showed great interest in the lectures, and asked tough, interesting questions.

To briefly report on the latest results, as well as the potential of future searches for physics beyond the Standard Model, the following achievements were covered during the conference: the recent discovery by the LHCb experiment of a new class of particles known as pentaquarks; the observed flavor anomalies in semi-leptonic B meson decay rates seen by the BaBar, Belle and LHCb experiments; the muon g-2 anomaly; recent results on charged lepton flavor violation; hints of violation of lepton universality in RK and R(D(*)); and the first observation and evidence of the very rare decays of Bs and B0 mesons, respectively.

The conference centre.

The second part of the conference featured poster sessions, where younger scientists were able to present their latest working achievements. Six of them were selected and offered the opportunity to give a plenary presentation, where they gave useful and well prepared talks.

The ending conference lecture was given by Prof. Jonathan Ellis, who provided an excellent closing summary and overview of the conference talks and presented results, with an emphasis on future potential discoveries and underlying theories.

To conclude, I have to stress that our very competent and kind colleagues from the Jožef Stefan Institute in Ljubljana (as well as other international collaborating institutes) did a great job organizing this tremendous symposium. They’ve set a high standard for the conferences to come.

Tatjana Agatonovic Jovin is a research assistant at the Institute of Physics in Belgrade, Serbia. She joined ATLAS in 2009, doing her PhD at the University of Belgrade. Her research included searches for new physics that can show up in decays of strange B mesons, by measuring the CP-violating weak mixing phase and decay rate difference using time-dependent angular analysis. In addition to her fascination with physics she loves hiking, skiing, music and fine arts!

by Tatjana Agatonovic Jovin at August 28, 2015 04:11 PM

Lubos Motl - string vacua and pheno

Decoding the near-Planckian spectrum from CMB patterns
In March 2015, physics stars Juan Maldacena and Nima Arkani-Hamed (IAS Princeton) wrote their first paper together:
Cosmological Collider Physics
At that time, I didn't discuss it because it looked a bit technical for the blogosphere but things look a bit different now, partially because some semi-popular news outlets discussed it.

Cliff Moore's camera, Seiberg, Maldacena, Witten, Arkani-Hamed, sorted by distance

What they propose is to pretty much reverse-engineer very fine irregularities in the Cosmic Microwave Background – the non-Gaussianities – and decode them according to their high-tech method and write down the spectrum of elementary particles that are very heavy (comparably heavy to the Hubble scale during inflation) which may include Kaluza-Klein modes or excited strings.

Numerous people have said that "the Universe is a wonderful and powerful particle collider" because it allows us to study particle physics phenomena at very high energies – by looking into the telescope (because the expansion of the Universe has stretched these tiny length scales of particle physics to cosmological length scales). But Juan and Nima went further – approximately by 62 pages further. ;-)

What are the non-Gaussianities? If I simplify a little bit, we may measure the temperature of the cosmic microwave background in different directions. The temperature is about\[

T = 2.7255\pm 0.0006 \,{\rm K}

\] and the microwave leftover (remnant: we call it the Relict Radiation in Czech) from the Big Bang looks like a thermal, black-body radiation emitted by an object whose temperature is this \(T\). Such an object isn't exactly hot – it's about minus 270 Celsius degrees – but the absolute temperature \(T\) is nonzero, nevertheless, so the object thermally radiates. The typical frequency is in the microwave range – the kind of waves from your microwave oven. (And 1% of the noise on a classical TV comes from the CMB.) The object – the whole Universe – used to be much hotter but it has calmed down as it expanded, along with all the wavelengths.
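Wien's displacement law makes the "microwave" label quantitative: in its frequency form, a black body at temperature \(T\) peaks at roughly 58.79 GHz per kelvin, so 2.7255 K radiation peaks around 160 GHz. A one-liner check:

```python
# Wien's displacement law (frequency form): nu_peak ~ 58.79 GHz/K * T
T_CMB = 2.7255                        # CMB temperature in kelvin
nu_peak_GHz = 58.79 * T_CMB

# corresponding wavelength, lambda = c / nu, expressed in millimetres
c_mm_per_s = 3.0e11
wavelength_mm = c_mm_per_s / (nu_peak_GHz * 1e9)

print(nu_peak_GHz)    # ~160 GHz
print(wavelength_mm)  # ~1.9 mm: squarely in the microwave band
```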

The Universe was in the state of (near) thermal equilibrium throughout much of its early history. Up to the year 380,000 after the cosmic Christ (the Big Bang: what the cosmic non-Christians did with their Christ at that moment was so stunning that it still leaves cosmologists speechless) or so, the temperature was so high that the atoms were constantly ionized.

Only when the temperature gradually dropped beneath a critical temperature for atomic physics did it become a good idea for electrons to sit down in the orbits and help to create the atoms. Unlike the free particles in the plasma, atoms are electrically neutral, and therefore interact with the electromagnetic field much more weakly (at least with the modes of the electromagnetic field that have low enough frequencies).

OK, around 380,000 AD, the Universe became almost transparent for electromagnetic waves and light. The light – that was in equilibrium at that time – started to propagate freely. Its spectrum was the black-body curve and the only thing that has changed since that time was the simple cooling i.e. reduction of the frequency (by a factor) and the reduction of the intensity (by a similarly simple factor).
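That "simple factor" is just one plus the redshift: \(T(z)=T_0(1+z)\). A tiny illustration, assuming the conventional round value \(z\approx 1090\) for last scattering:

```python
# The CMB temperature scales as T(z) = T0 * (1 + z) as we look back in time.
T0 = 2.7255               # K, today
Z_LAST_SCATTERING = 1090  # assumed round value for the 380,000 AD surface

def cmb_temperature_at(z, t0=T0):
    """Photon temperature at redshift z, in kelvins."""
    return t0 * (1.0 + z)

# ~3000 K at recombination: just cool enough for neutral atoms to survive.
print(round(cmb_temperature_at(Z_LAST_SCATTERING)))
```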

The CMB is the most accurate natural thermal black body radiation (the best Planck curve) we know in Nature. However, when we look at the CMB radiation more carefully, we see that the temperature isn't quite constant. It varies by 0.001% or 0.01% in different directions:\[

T(\theta,\phi) = 2.725\,{\rm K} + \Delta T (\theta,\phi)

\] The function \(\Delta T\) arises from some direction-dependent "delay" of the arrival of business-as-usual after the inflationary era. If the inflaton stabilized a bit later, we get a slightly higher (or lower?) temperature in that direction – which was also associated with a little bit different energy density in that region (region in some direction away from us, at the right distance so that the light it sent at 380,000 AD just hit our telescopes today).

The function \(\Delta T\) depends on the spherical angles and may be expanded in the spherical harmonics. To study the magnitude of the temperature fluctuations, you want to measure things like \[

\langle \Delta T_{\ell m}\, \Delta T_{\ell' m'} \rangle

\] The different spherical harmonic coefficients \(\Delta T_{\ell m}\) are basically uncorrelated with one another, so you expect to get close to zero, up to noise, unless \(\ell=\ell'\) and \(m=-m'\). In that case, you get a nonzero result and it's a function of \(\ell\) that you know as the "CMB power spectrum".
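A toy numerical version of that diagonal structure, in case it helps: draw Gaussian coefficients with a known spectrum and recover it by averaging over \(m\). The spectrum \(C_\ell \propto 1/\ell(\ell+1)\) is an arbitrary illustrative choice, and this is pure numpy, not a real CMB pipeline:

```python
# Draw Gaussian a_{ell m} with a known power spectrum C_ell, then recover
# C_ell as the m-averaged square, hat{C}_ell = sum_m |a_{ell m}|^2 / (2l+1).
import numpy as np

rng = np.random.default_rng(0)
ells = np.arange(2, 64)
C_true = 1.0 / (ells * (ells + 1.0))   # arbitrary toy spectrum

def estimate_cl(ell, c):
    """One-realization estimate of C_ell from 2*ell + 1 modes."""
    # For a real field, a_{ell,-m} is fixed by a_{ell,m}; sample only the
    # independent degrees of freedom: one real m=0 mode, ell complex m>0 modes.
    a0 = rng.normal(scale=np.sqrt(c))
    re = rng.normal(scale=np.sqrt(c / 2.0), size=ell)
    im = rng.normal(scale=np.sqrt(c / 2.0), size=ell)
    return (a0**2 + 2.0 * np.sum(re**2 + im**2)) / (2 * ell + 1)

C_hat = np.array([estimate_cl(l, c) for l, c in zip(ells, C_true)])
# Each hat{C}_ell scatters around C_ell with "cosmic variance"
# ~ C_ell * sqrt(2 / (2 ell + 1)): one sky only offers 2 ell + 1 modes.
print(abs(np.mean(C_hat / C_true) - 1.0) < 0.25)
```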

I wrote \(\Delta T\) as a function of the angles or the quantum numbers \(\ell,m\) but in the early cosmology, it's more natural to derive this \(\Delta T\) from the inflaton field and appreciate that this field is a function of \(\vec k\), the momentum 3-vector. By looking at the \(\Delta T\), we may only determine the dependence on two variables in a slice, not three.

At any rate, the correlation functions\[

\langle \Delta T (\vec k_1) \Delta T (\vec k_2) \Delta T(\vec k_3) \rangle

\] averaged over all directions etc. seem to be zero according to our best observations so far. No non-Gaussianities have been observed. Again, why non-Gaussianities? Because the probability density for the \(\Delta T\) function to be something is given by the Ansatz\[

\rho[\Delta T(\theta,\phi)] = \exp \left( - \Delta T\cdot M \cdot \Delta T \right)

\] where \(M\) is a "matrix" that depends on two continuous indices – that take values on the two-sphere – and the dot product involves an integral over the two-sphere instead of a discrete summation over the indices. Fine. You see that this probability density functional generalizes the function \(\exp(-x^2)\), a favorite function of Carl Friedrich Gauß, which is why this Ansatz is referred to as the Gaussian one.

The probability distribution is mathematically analogous to the wave function or functional of the multi-dimensional or infinite-dimensional harmonic oscillator or the wave functional for the ground state of a free (non-interacting, quadratic) quantum field theory (which is an infinite-dimensional harmonic oscillator, anyway). Or the integrand of the path integral in a free quantum field theory.

And this mathematical analogy may be exploited to calculate lots of things. In fact, it's not just a mathematical analogy. Within the inflationary framework, the \(n\)-point functions calculated from the CMB temperature are \(n\)-point functions of the inflaton field in a quantum field theory.

The \(n\) in the \(n\)-point function counts how many points on the two-sphere, or how many three-vectors \(\vec k\), the correlation function depends on. The correlation functions in QFT may be computed using the Feynman diagrams. In free QFTs, you have no vertices and just connect \(n\) external propagators. It's clear that in a free QFT, the 3-point functions vanish. All the odd-point functions vanish, in fact. And the 4-point and other even higher-point functions may be computed by Wick's theorem – the summation over different pairings of the propagator.
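Wick's theorem is easy to check numerically on the simplest Gaussian "field" with a single mode: the odd correlators average to zero and \(\langle x^4\rangle = 3\langle x^2\rangle^2\), one term for each of the three pairings. A minimal sketch (the sample size is an arbitrary choice):

```python
# Wick's theorem for a zero-mean Gaussian variable x with variance sigma^2:
#   <x^3> = 0 (all odd moments vanish),
#   <x^4> = 3 sigma^4 (three pairings: 12|34, 13|24, 14|23).
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(size=2_000_000)

two_pt = np.mean(x**2)     # ~ sigma^2 = 1
three_pt = np.mean(x**3)   # ~ 0
four_pt = np.mean(x**4)    # ~ 3 sigma^4

print(abs(three_pt) < 0.01, abs(four_pt - 3.0 * two_pt**2) < 0.05)
```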

Back to 2015

No non-Gaussianities have been seen so far – all observations are compatible with the assumption that the probability density functional for \(\Delta T\) has the simple Gaussian form, a straightforward infinite-dimensional generalization of the normal distribution \(\exp(-x^2/2\sigma^2)\). However, cosmologists and cosmoparticle physicists have dreamed about the possible discovery of non-Gaussianities and what it could teach us.

It could be a signal of some inflaton (cubic or more complex) self-interactions, new particles, new effects of many kinds. But which of them? Almost all physicists so far have merely hoped to glimpse "one new physical effect around the corner" stored in the first non-Gaussianities that someone may discover.

Only Nima and Juan started to think big. Even though no one has seen any non-Gaussianity yet, they are already establishing a new computational industry to get tons of detailed information from lots of numbers describing the non-Gaussianity that will be observed sometime in the future. They don't want to discover just "one" new effect that modestly generalizes inflation by one step, like most other model builders.

They ambitiously intend to extract all the information about the particle spectrum and particle interactions (including all hypothetical new particle species) from the correlation functions of \(\Delta T\) and its detailed non-Gaussianities once they become available. Their theoretical calculations were the hardest step, of course. The other steps are easy. Once Yuri Milner finds the extraterrestrial aliens, he, Nima, and Juan will convince them to fund a project to measure the non-Gaussianities really accurately, assuming that the ETs are even richer than the Chinese.

OK, once it's done, you will have functions like\[

\langle \Delta T(\vec k_1) \,\Delta T (\vec k_2) \,\Delta T(\vec k_3) \rangle

\] By the translational symmetry (or momentum conservation), this three-point function is only nonzero for \[

\vec k_1+\vec k_2+ \vec k_3 = 0

that is, if and only if the three vectors form the sides of a triangle (oriented, in a closed loop). The three-point functions seem to be zero according to the observations so far. But once they are seen to be nonzero, their value may be theoretically calculated as the effect of extra particle species (or the same inflaton, if it is self-interacting).
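A trivial helper, if you want to play with the closure condition and the "thinness" of the triangles yourself (the vectors at the bottom are arbitrary illustrative choices):

```python
# Momentum conservation: <dT(k1) dT(k2) dT(k3)> can only be nonzero when
# k1 + k2 + k3 = 0, i.e. the three vectors close into a triangle.
import numpy as np

def closes_triangle(k1, k2, k3, tol=1e-9):
    """True when the three momentum 3-vectors sum to zero."""
    return bool(np.linalg.norm(np.asarray(k1) + np.asarray(k2) + np.asarray(k3)) < tol)

def thinness_ratio(k1, k2, k3):
    """k_min / k_max: 'thin' (squeezed) triangles have a ratio << 1."""
    mags = sorted(np.linalg.norm(k) for k in (k1, k2, k3))
    return mags[0] / mags[-1]

k_long = np.array([0.01, 0.0, 0.0])    # soft mode
k_short = np.array([-1.0, 0.3, 0.0])   # hard mode
k_third = -(k_long + k_short)          # forced by momentum conservation

print(closes_triangle(k_long, k_short, k_third))          # True
print(thinness_ratio(k_long, k_short, k_third) < 0.05)    # a squeezed triangle
```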

A new field of spin \(s\) and mass \(m\) will contribute a function of \(\vec k_1, \vec k_2,\vec k_3\) to the three-point function – a function of the size and shape of the triangle – whose dependence on the shape stores the information about \(s\) and \(m\). When you focus on triangles that are very "thin", they argue and show, the mass of the particle (naturally expressed in the units of the Hubble radius, if you wish) is stored in the exponent of a power law that says how much the correlation function drops (or increases?) when the triangle becomes even thinner.

Some dependence on the spin \(s\) is imprinted to the dependence on some angle defining the triangle.
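Schematically – and this is my paraphrase of the structure of their result for a heavy scalar with \(m > 3H/2\), with the normalization and phase omitted, not a formula copied from the paper – the squeezed (thin-triangle) limit \(k_3 \ll k_1 \approx k_2\) behaves like

```latex
\[
  \langle \Delta T(\vec k_1)\,\Delta T(\vec k_2)\,\Delta T(\vec k_3)\rangle
  \;\propto\;
  \left(\frac{k_3}{k_1}\right)^{3/2}
  \cos\!\left(\mu \ln\frac{k_3}{k_1} + \varphi\right),
  \qquad
  \mu = \sqrt{\frac{m^2}{H^2}-\frac{9}{4}}
\]
```

so the power law and the oscillation in the thinness ratio measure \(m/H\), while a spin-\(s\) exchange additionally multiplies the result by a Legendre polynomial \(P_s(\cos\theta)\) in the angle mentioned above.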

And all the new particles' contributions add up. In fact, they "interfere" with each other and the relative phase has observable implications, too.

It's a big new calculational framework, basically mimicking the map between "Lagrangians of a QFT" and "its \(n\)-point functions" in a different context. They look at three-point functions as well as four-point functions. The contributions to these correlation functions seem to resemble correlation functions we know from the world sheet of string theory.

And they also show how these expressions have to simplify when the system is conformally (or slightly broken conformally or de Sitter) symmetric. Theirs is a very sophisticated toolkit that may serve as a dictionary between the patterns in the CMB and the particle spectrum and interactions near the inflationary Hubble scale.


I was encouraged to write this blog post by this text in the Symmetry Magazine,
Looking for strings inside inflation
Troy Rummler wrote about a very interesting topic and he has included some useful and poetic remarks. For example, Edward Witten called Juan's and Nima's work "the most innovative one" he heard about at Strings 2015. Juan's slides are here and the 29-minute YouTube talk is here. And Witten has also said that science doesn't repeat itself but it "rhymes" because Nature's clever tricks are recycled at many levels.

Well, I still feel some dissatisfaction with that article.

First, it doesn't really make it clear that Arkani-Hamed, Maldacena, and Witten are not just three random physicists – or even the would-be physicists who are the heroes of most of the hype in the popular science news outlets. All of them are undoubtedly among the top ten physicists who live on Earth right now.

Second, I just hate this usual post-2006 framing of the story in terms of the slogan that "string theory would be nearly untestable which is why all the theoretical physicists have to work hard on doable tests of string theory".

What's wrong with that slogan in the present context?
  1. String theory is testable in principle, it has been known to be testable for decades, and that's what matters for its being a 100% sensible topic of deep scientific research.
  2. String theory seems hard to test with realistic experiments that could be performed within the next several years – and almost all sane people have thought so ever since they started to work on strings.
  3. The work by Arkani-Hamed and Maldacena hasn't changed that: it will surely take a lot of time to observe non-Gaussianities and observe them accurately enough for their dictionary and technology to become truly relevant. So even though theirs is a method to look into esoteric short-distance physics via ordinary telescopes, it's still a very futuristic project.
  4. The Nima-Juan work doesn't depend on detailed features of string theory much.
It is meant to recover the spectrum and interactions of an effective field theory at the Hubble scale, whether this effective field theory is an approximation to string theory or not. In fact, the term "string" appears in one paragraph of their paper only (in the introduction).

The paragraph talks about some characteristic particles predicted by string theory whose discovery (through the CMB) could "almost settle" string theory. For example, they believe that a weakly coupled (but not decoupled) spin-4 particle would make string theory unavoidable because non-string theories are incompatible with weakly coupled particles of spin \(s\gt 2\). This is a part of the lore, somewhat ambitious lore. I think it's morally correct but it only applies to "elementary" particles and the definition of "elementary" is guaranteed to become problematic as we approach the Planck scale. For example, the lightest black hole microstates – the heavier cousins of elementary particles but with Planckian masses – are guaranteed to be "in between" composite and elementary objects. Quantum gravity provides us with "bootstrap" constraints that basically say that the high-mass behavior of the spectrum must be a reshuffling of the low-mass spectrum (UV-IR correspondence, something that is seen both in perturbative string theory as well as the quantum physics of black holes).

The scale of inflation is almost certainly "at least several orders of magnitude" beneath the Planck scale so this problem may be absent in their picture. But maybe it's not absent. Theorists want to be sure that they have the right wisdom about all these things – but truth be told, we haven't seen a spin-4 particle in the CMB yet. ;-)

It's a very interesting piece of work that is almost guaranteed to remain in the domain of theorists for a very long time. And it's unfortunate that the media – including "media published by professional institutions such as the Fermilab and SLAC" – keep on repeating this ideology that the theorists are "obliged" to work on practical tests of theories and they are surely doing so. They are not "obliged" and they are mostly not doing these things.

The Planckian physics has always seemed to be far from practically doable experiments. The Juan-Nima paper is an example of the efforts that have the chance to reduce this distance. But I think that they would agree that this distance remains extremely large and the small chance that the distance will shrink down to zero isn't the only and necessary motivation of their research. Theorists just want to know – they are immensely curious about – the relationships between pairs of groups of ideas and data even if both sides of the link remain unobservable in practice!

The most likely shape of a newborn galaxy, extracted from the spectrum of our Calabi-Yau compactification through the Nima-Juan algorithm

I am a bit confused about the actual chances that the sufficient number of non-Gaussianities and their patterns may ever be extracted from the CMB data. The low enough \(\ell\) modes of the CMB simply seem to be Gaussian and we won't get any new numbers or new patterns stored in them, will we? The cosmic variance – the unavoidable noise resulting from the "finiteness of the two-sphere and or the visible Universe" i.e. from the finite number of the relevant spherical harmonics – seems to constrain the accuracy of the data we may extract "permanently". So maybe such patterns could be encoded in the very high values of \(\ell\) i.e. small angular distances on the sky?

For example, there could be patterns in the shape of the galaxies (inside the galaxies), and not just the broad intergalactic space. For example, if it turned out that the most likely shape of the newborn galaxy is given by the blue picture above (one gets the spiraling mess resembling the Milky Way once the shape evolves for a long enough time), it could prove that God and His preferred compactification of string/M-theory is neither Argentinian nor Persian. I am not quite sure whether such a discovery would please Witten, however. ;-)

by Luboš Motl at August 28, 2015 02:24 PM

Quantum Diaries

Double time

In particle physics, we’re often looking for very rare phenomena, which are highly unlikely to happen in any given particle interaction. Thus, at the LHC, we want to have the greatest possible proton collision rates; the more collisions, the greater the chance that something unusual will actually happen. What are the tools that we have to increase collision rates?

Remember that the proton beams are “bunched” — there isn’t a continuous current of protons in a beam, but a series of smaller bunches of protons, each only a few centimeters long, with gaps of many centimeters between each bunch.  The beams are then timed so that bunches from each beam pass through each other (“cross”) inside one of the big detectors.  A given bunch can have about 10^11 protons in it, and when two bunches cross, perhaps tens of the protons in each bunch — a tiny fraction! — will interact.  This bunching is actually quite important for the operation of the detectors — we can know when bunches are crossing, and thus when collisions happen, and then we know when the detectors should really be “on” to record the data.

If one wanted to keep the total collision rate fixed, you could imagine two ways to achieve it: have N bunches per beam, each with M protons, or 2N bunches per beam, each with M/sqrt(2) protons.  (The number of collisions per crossing grows as the square of the bunch population, so both scenarios give the same total rate; the second one actually carries slightly more total beam current.)  The more bunches in the beam, the more closely spaced they would have to be, but that can be done.  From the perspective of the detectors, the second scenario is much preferred.  That’s because you get fewer proton collisions per bunch crossing, and thus fewer particles streaming through the detectors.  The collisions are much easier to interpret if you have fewer collisions per crossing; among other things, you need less computer processing time to reconstruct each event, and you will have fewer mistakes in the event reconstruction because there aren’t so many particles all on top of each other.
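A back-of-envelope version of that comparison, with made-up numbers for N and M (the quadratic scaling per crossing is the only real input):

```python
# Collisions per crossing scale like the product of the two colliding bunch
# populations (~ M^2); the total rate scales like N * M^2.

def collision_rates(n_bunches, protons_per_bunch):
    per_crossing = protons_per_bunch**2       # up to a common constant factor
    return per_crossing, n_bunches * per_crossing

N, M = 1404, 1.15e11   # illustrative numbers, not official LHC parameters
per_50ns, total_50ns = collision_rates(N, M)
per_25ns, total_25ns = collision_rates(2 * N, M / 2**0.5)

print(round(total_25ns / total_50ns, 6))  # 1.0 -- same total collision rate
print(round(per_25ns / per_50ns, 6))      # 0.5 -- but half the pile-up per crossing
```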

In the previous LHC run (2010-12), the accelerator had “50 ns spacing” between proton bunches, i.e. bunch crossings took place every 50 ns.  But over the past few weeks, the LHC has been working on running with “25 ns spacing,” which would allow the beam to be segmented into twice as many bunches, with fewer protons per bunch.  It’s a new operational mode for the machine, and thus some amount of commissioning and tuning and so forth are required.  A particular concern is “electron cloud” effects due to stray particles in the beampipe striking the walls and ejecting more particles, which is a larger effect with smaller bunch spacing.  But from where I sit as one of the experimenters, it looks like good progress has been made so far, and as we go through the rest of this year and into next year, 25 ns spacing should be the default mode of operation.  Stay tuned for what physics we’re going to be learning from all of this!

by Ken Bloom at August 28, 2015 03:32 AM

August 27, 2015

Christian P. Robert - xi'an's og

beyond subjective and objective in Statistics

“At the level of discourse, we would like to move beyond a subjective vs. objective shouting match.” (p.30)

This paper by Andrew Gelman and Christian Hennig calls for the abandonment of the terms objective and subjective in (not solely Bayesian) statistics. And argue that there is more than mere prior information and data to the construction of a statistical analysis. The paper is articulated as the authors’ proposal, followed by four application examples, then a survey of the philosophy of science perspectives on objectivity and subjectivity in statistics and other sciences, next to a study of the subjective and objective aspects of the mainstream statistical streams, concluding with a discussion on the implementation of the proposed move.

“…scientists and the general public celebrate the brilliance and inspiration of greats such as Einstein, Darwin, and the like, recognizing the roles of their personalities and individual experiences in shaping their theories and discoveries” (p.2)

I do not see the relevance of this argument, in that the myriad of factors leading, say, Marie Curie or Rosalind Franklin to their discoveries are more than subjective, as eminently personal and the result of unique circumstance, but the corresponding theories remain within a common and therefore objective corpus of scientific theories. Hence I would not equate the derivation of statistical estimators or even less the computation of statistical estimates to the extension or negation of existing scientific theories by scientists.

“We acknowledge that the “real world” is only accessible to human beings through observation, and that scientific observation and measurement cannot be independent of human preconceptions and theories.” (p.4)

The above quote reminds me very much of Poincaré‘s

“It is often said that experiments should be made without preconceived ideas. That is impossible. Not only would it make every experiment fruitless, but even if we wished to do so, it could not be done. Every man has his own conception of the world, and this he cannot so easily lay aside.” Henri Poincaré, La Science et l’Hypothèse

The central proposal of the paper is to replace `objective’ and `subjective’ with less value-loaded and more descriptive terms. Given that very few categories of statisticians take pride in their subjectivity, apart from a majority of Bayesians, but rather use the term as derogatory for other categories, I fear the proposal stands little chance to see this situation resolved. Even though I agree we should move beyond this distinction that does not reflect the complexity and richness of statistical practice. As the discussion in Section 2 makes it clear, all procedures involve subjective choices and calibration (or tuning), either plainly acknowledged or hidden under the carpet. Which is why I would add (at least) two points to the virtues of subjectivity:

  1. Spelling out unverifiable assumptions about the data production;
  2. Awareness of calibration of tuning parameters.

while I do not see consensus as necessarily a virtue. The following examples in Section 3 are all worth considering as they bring more details, albeit in specific contexts, to the authors’ arguments. Most of them give the impression that the major issue stands with the statistical model itself, which may be both the most acute subjectivity entry in statistical analyses and the least discussed one. Including the current paper, where e.g. Section 3.4 wants us to believe that running a classical significance test is objective and apt to detect an unfit model. And the hasty dismissal of machine learning in Section 6 is disappointing, because one thing machine learning does well is to avoid leaning too much on the model, using predictive performances instead. Furthermore, apart from Section 5.3, I actually see little in the paper about the trial-and-error way of building a statistical model and/or analysis, while subjective inputs from the operator are found at all stages of this construction and should be spelled out rather than ignored (and rejected).

“Yes, Bayesian analysis can be expressed in terms of subjective beliefs, but it can also be applied to other settings that have nothing to do with beliefs.” (p.31)

The survey in Section 4 about what philosophy of sciences says about objectivity and subjectivity is quite thorough, as far as I can judge, but does not expand enough on the issue of “default” or all-inclusive statistical solutions, used through “point-and-shoot” software by innumerate practitioners in mostly inappropriate settings, with the impression of conducting “the” statistical analysis. This false feeling of “the” proper statistical analysis and its relevance for this debate also transpires through the treatment of statistical expertises by media and courts. I also think we could avoid mentioning the Heisenberg principle in this debate, as it does not really contribute anything useful. More globally, the exposition of a large range of notions of objectivity is, as is often the case in philosophy, not conclusive, and I feel nothing substantial comes out of it… And that it is somehow antagonistic with the notion of a discussion paper, since every possible path has already been explored. Even forking ones. As a non-expert in philosophy, I would not feel confident in embarking upon a discussion on what realism is and is not.

“the subjectivist Bayesian point of view (…) can be defended for honestly acknowledging that prior information often does not come in ways that allow a unique formalization” (p.25)

When going through the examination of the objectivity of the major streams of statistical analysis, I get the feeling of exploring small worlds (in Lindley‘s words) rather than the entire spectrum of statistical methodologies. For instance, frequentism seems to be reduced to asymptotics, while completely missing the entire (lost?) continent of non-parametrics. (Which should not be considered to be “more” objective, but has the advantage of loosening the model specification.) While the error-statistical (frequentist) proposal of Mayo (1996) seems to consume a significant portion [longer than the one associated with the objectivist Bayesianism section] of the discussion, given its quite limited diffusion within statistical circles. From a Bayesian perspective, the discussions of subjective, objective, and falsificationist Bayes do not really bring a fresh perspective to the debate between those three branches, apart from suggesting we should give up such value loaded categorisations. As an O-Bayes card-carrying member, I find the characterisation of the objectivist branch somehow restrictive, by focussing solely on Jaynes‘ maxent solution. Hence missing the corpus of work on creating priors with guaranteed frequentist or asymptotic properties. Like matching priors. I also find the defence of the falsificationist perspective, i.e. of Gelman and Shalizi (2013) both much less critical and quite extensive, in that, again, this is not what one could call a standard approach to statistics. Resulting in an implicit (?) message that this may be the best way to proceed.

In conclusion, on the positive side [for there is a positive side!], the paper exposes the need to spell out the various inputs (from the operator) leading to a statistical analysis, both for replicability or reproducibility, and for “objectivity” purposes, although solely conscious choices and biases can be uncovered this way. It also reinforces the call for model awareness, by which I mean a critical stance on all modelling inputs, including priors!, a disbelief that any model is true, applying to statistical procedures Popper’s critical rationalism. This has major consequences on Bayesian modelling in that, as advocated in Gelman and Shalizi (2013) , as well as Evans (2015), sampling and prior models should be given the opportunity to be updated when they are inappropriate for the data at hand. On the negative side, I fear the proposal is far too idealistic in that most users (and some makers) of statistics cannot spell out their assumptions and choices, being unaware of those. This is in a way [admittedly, with gross exaggeration!] the central difficulty with statistics that almost anyone anywhere can produce an estimate or a p-value without ever being proven wrong. It is therefore difficult to perceive how the epistemological argument therein [that objective versus subjective is a meaningless opposition] is going to profit statistical methodology, even assuming the list of Section 2.3 was to be made compulsory. The eight deadly sins listed in the final section would require expert reviewers to vanish from publication (and by expert, I mean expert in statistical methodology), while it is almost never the case that journals outside our field make a call to statistics experts when refereeing a paper. Apart from banning all statistics arguments from a journal, I am afraid there is no hope for a major improvement in that corner…

All in all, the authors deserve big thanks for making me reflect upon those issues and (especially) back their recommendation for reproducibility, meaning not only the production of all conscious choices made in the construction process, but also the posting of (true or pseudo-) data and of relevant code for all publications involving a statistical analysis.

Filed under: Books, Statistics, University life Tagged: academic journals, Basic and Applied Social Psychology, Dennis Lindley, Error-Statistical philosophy, falsification, frequentist inference, Henri Poincaré, Karl Popper, Marie Curie, objective Bayes, p-values, refereeing, reproducible research, subjective versus objective Bayes

by xi'an at August 27, 2015 10:15 PM

Emily Lakdawalla - The Planetary Society Blog

Dropping Orion in the Desert: NASA Completes Key Parachute Test
NASA’s Orion spacecraft completed a key parachute test Aug. 26 at the U.S. Army Yuma Proving Ground in Yuma, Arizona.

August 27, 2015 06:05 PM

Symmetrybreaking - Fermilab/SLAC

Looking for strings inside inflation

Theorists from the Institute for Advanced Study have proposed a way forward in the quest to test string theory.

Two theorists recently proposed a way to find evidence for an idea famous for being untestable: string theory. It involves looking for particles that were around 14 billion years ago, when a very tiny universe hit a growth spurt that used 15 billion times more energy than a collision in the Large Hadron Collider.

Scientists can’t crank the LHC up that high, not even close. But they could possibly observe evidence of these particles through cosmological studies, with the right technological advances.

Unknown particles

During inflation—the flash of hyperexpansion that happened 10^-33 seconds after the big bang—particles were colliding with astronomical power. We see remnants of that time in tiny fluctuations in the haze of leftover energy called the cosmic microwave background.

Scientists might be able to find remnants of any prehistoric particles that were around during that time as well.

“If new particles existed during inflation, they can imprint a signature on the primordial fluctuations, which can be seen through specific patterns,” says theorist Juan Maldacena of the Institute for Advanced Study in Princeton, New Jersey.

Maldacena and his IAS collaborator, theorist Nima Arkani-Hamed, have used quantum field theory calculations to figure out what these patterns might look like. The pair presented their findings at an annual string theory conference held this year in Bengaluru, India, in June.

The probable, impossible string

String theory is frequently summed up by its basic tenet: that the fundamental units of matter are not particles. They are one-dimensional, vibrating strings of energy.

The theory’s purpose is to bridge a mathematical conflict between quantum mechanics and Einstein’s theory of general relativity. Inside a black hole, for example, the two theories give irreconcilable predictions. Any attempt to adjust one theory to fit the other causes the whole delicate system to collapse. Instead of trying to do this, string theory creates a new mathematical framework in which both theories are natural results. Out of this framework emerges an astonishingly elegant way to unify the forces of nature, along with a correct qualitative description of all known elementary particles.

As a system of mathematics, string theory makes a tremendous number of predictions. Testable predictions? None so far.

Strings are thought to be the smallest objects in the universe, and computing their effects on the relatively enormous scales of particle physics experiments is no easy task. String theorists predict that new particles exist, but they cannot compute their masses.

To exacerbate the problem, string theory can describe a variety of universes that differ by numbers of forces, particles or dimensions. Predictions at accessible energies depend on these unknown or very difficult details. No experiment can definitively prove a theory that offers so many alternative versions of reality.

Putting string theory to the test

But scientists are working out ways that experiments could at least begin to test parts of string theory. One prediction that string theory makes is the existence of particles with a unique property: a spin of greater than two.

Spin is a property of fundamental particles. Particles that don’t spin decay in symmetric patterns. Particles that do spin decay in asymmetric patterns, and the greater the spin, the more complex those patterns get. Highly complex decay patterns from collisions between these particles would have left signature impressions on the universe as it expanded and cooled.

Scientists could find the patterns of particles with spin greater than 2 in subtle variations in the distribution of galaxies or in the cosmic microwave background, according to Maldacena and Arkani-Hamed. Observational cosmologists would have to measure the primordial fluctuations over a wide range of length scales to be able to see these small deviations.

The IAS theorists calculated what those measurements would theoretically be if these massive, high-spin particles existed. Such a particle would be much more massive than anything scientists could find at the LHC.

A challenging proposition

Cosmologists are already studying patterns in the cosmic microwave background. Experiments such as Planck, BICEP and POLARBEAR are searching for polarization in this radiation, which would be evidence that a nonrandom force acted on it. If they rewind the effects of time and mathematically undo all the other forces that have interacted with this energy, they hope that whatever pattern remains will match the predicted twists imbued by inflation.

The patterns proposed by Maldacena and Arkani-Hamed are much subtler and much more susceptible to interference. So any expectation of experimentally finding such signals is still a long way off.

But this research could point us toward someday finding such signatures and illuminating our understanding of particles that have perhaps left their mark on the entire universe.

The value of strings

Whether or not anyone can prove that the world is made of strings, people have proven that the mathematics of string theory can be applied to other fields.

In 2009, researchers discovered that string theory math could be applied to conventional problems in condensed matter physics. Since then researchers have been applying string theory to study superconductors.

Fellow IAS theorist Edward Witten, who received the Fields Medal in 1990 for his mathematical contributions to quantum field theory and supersymmetry, says Maldacena and Arkani-Hamed’s presentation was among the most innovative work he saw at the Strings ‘15 conference.

Witten and others believe that such successes in other fields indicate that string theory actually underlies all other theories at some deeper level.

"Physics—like history—does not precisely repeat itself,” Witten says. However, with similar structures appearing at different scales of lengths and energies, “it does rhyme.”



by Troy Rummler at August 27, 2015 05:52 PM

Lubos Motl - string vacua and pheno

LHCb: 2-sigma violation of lepton universality
Since the end of June, I mentioned the ("smaller") LHCb collaboration at the LHC twice. They organize their own Kaggle contest and they claim to have discovered a pentaquark.

In its new article Evidence suggests subatomic particles could defy the standard model, Phys.ORG has just made it clear that I largely missed a hep-ex paper from the end of June,
Measurement of the ratio of branching fractions \(\mathcal{B}(\overline{B}^0 \to D^{*+}\tau^{-}\overline{\nu}_\tau)/\mathcal{B}(\overline{B}^0 \to D^{*+}\mu^{-}\overline{\nu}_\mu)\)
by Brian Hamilton and about 700 co-authors. The paper will appear in Physical Review Letters in a week – which is why it made it to Phys.ORG now. An early June TRF blog post could have been about the same thing but the details weren't available.

What is going on? They measured the number of decays of the \(\bar B^0\) mesons produced within their detector in 2011 and 2012 that have another meson, \(D^{*+}\), in the final state, along with the negative-lepton and the corresponding antineutrino.

Well, the decay obviously needs the cubic vertex with the \(W^\pm\)-boson – i.e. the charged current – and this current should contain the term "creating the muon and its antineutrino" and the term "creating the tau and its antineutrino" with equal coefficients. There are no Higgs couplings involved in the process so the different generations of the leptons behave "the same" because they transform as the "same kind of doublets" that the \(W^\pm\)-bosons are mixing with each other.

The decay rates with all the \(\mu\) replaced by \(\tau\) should be "basically" the same. Well, because the masses and therefore the kinematics are different, the Standard Model predicts the ratio of the two decay rates to be\[

{\mathcal R}(D^*) = \frac{\mathcal{B}(\overline{B}^0 \to D^{*+}\tau^{-}\overline \nu_\tau)}{
\mathcal{B}(\overline{B}^0 \to D^{*+}\mu^{-}\overline \nu_\mu)} = 0.252\pm 0.003

\] The error of the theoretical prediction is just 1 percent or so. This is the usual accuracy that the Standard Model allows us, at least when the process doesn't depend on the messy features of the strong force too much. This decay ultimately depends on the weak interactions – those with the \(W^\pm\)-bosons as the intermediate particle – which is why the accuracy is so good.

Well, the LHCb folks measured that quantity and got the following value of the ratio\[

{\mathcal R}(D^*) = 0.336 \pm 0.027 \text{ (stat) } \pm 0.030 \text{ (syst) }

\] which is 33% higher and, using the "Pythagorean" total error \(0.040\) combining the statistical and systematic one, it is about 2.1 standard deviations higher than the (accurately) predicted value.
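As a sanity check, the combined error and the size of the deviation can be reproduced in a few lines, using only the numbers already quoted in this post:

```python
from math import sqrt

# LHCb measurement and Standard Model prediction of R(D*), as quoted above
r_measured = 0.336
stat, syst = 0.027, 0.030   # statistical and systematic errors
r_predicted = 0.252         # the ~0.003 theory error is negligible here

# "Pythagorean" (in-quadrature) combination of the two errors
total_err = sqrt(stat**2 + syst**2)
n_sigma = (r_measured - r_predicted) / total_err

print(round(total_err, 3))  # 0.04
print(round(n_sigma, 1))    # 2.1
```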

As always, 2.1 sigma is no discovery to be carved in stone (even though it's formally or naively some "96% certainty of a new effect") but it is an interesting deviation, especially because there are other reasons to think that the "lepton universality" could fail. What am I talking about?

In the "lepton universality", the coupling of the \(W^\pm\)-boson to the charged-lepton-plus-neutrino pair is proportional to a \(3\times 3\) unit matrix in the space of the three generations. The unit matrix and its multiples are nice and simple.

Well, there are two related ways in which a generic matrix can differ from a multiple of the unit matrix:
  1. its diagonal elements are not equal to each other
  2. the off-diagonal elements are nonzero
In a particular basis, these are two different "failures" of a matrix. We describe them (or the physical effects that they cause if the matrix is used for the charged currents) as "violations of lepton universality" and "flavor violations", respectively. But it's obvious that in a general basis, you can't distinguish them. A diagonal matrix with different diagonal entries looks like a non-diagonal matrix in other bases.
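A tiny numerical illustration of this basis dependence (the couplings and mixing angle here are toy numbers, not from any measurement): take unequal diagonal couplings for the three lepton flavors and rotate the basis; an off-diagonal, flavor-violating entry appears.

```python
import math

# Universality-violating but flavor-diagonal couplings for (e, mu, tau):
# unequal diagonal entries, zero off-diagonal mixing in this basis.
g = [1.0, 1.0, 1.2]

# Rotate the mu-tau block by an angle theta: g' = R diag(g) R^T.
# The (mu, tau) entry of the rotated matrix is sin(theta)cos(theta)(g_mu - g_tau).
theta = 0.3
s, c = math.sin(theta), math.cos(theta)
off_diag = s * c * (g[1] - g[2])

print(round(off_diag, 4))  # -0.0565, nonzero: flavor violation in the new basis
```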

So the violation of the "lepton universality" discussed in this LHCb paper and this blog post (different diagonal entries for the muon and tau) is "fundamentally" a symptom of the same effect as the "flavor violation" (non-zero off-diagonal entries). And the number of these flavor-violating anomalies has grown pretty large! Most interestingly, CMS saw a 2.4-sigma excess in decays of the Higgs to \(\mu\) and \(\tau\) which seem to represent about 1% (plus minus 0.4% if you wish) of the decays even though such flavor-violating decays are prohibited.

LHCb has announced several other minor flavor-violating results but because they depend on some mesons, they are less catchy for an elementary particle physicist.

The signs of flavor violation may be strengthening. If a huge, flavor-violating deviation from the Standard Model is seen and discoveries are made, we will be able to say that "we saw that paradigm shift coming". Then again, if Nature is less generous, we may right now be watching this would-be discovery fade away, too. ;-)

by Luboš Motl ( at August 27, 2015 04:51 PM

Quantum Diaries

The Tesla experiment
CMS scientist Bo Jayatilaka assumes the driver seat in a Tesla Model S P85D as part of a two-day road trip experiment. Photo: Sam Paakkonen

On May 31, about 50 miles from the Canadian border, an electric car struggled up steep hills, driving along at 40 miles per hour. The sun was coming up and rain was coming down. Things were looking bleak. The car, which usually plotted the route to the nearest charging station, refused to give directions.

“It didn’t even say turn around and go back,” said Bo Jayatilaka, who was driving the car. “It gave up and said, ‘You’re not going to make it.’ The plot disappeared.”

Rewind to a few weeks earlier: Tom Rammer, a Chicago attorney, had just won two days with a Tesla at a silent cell phone auction for the American Cancer Society. He recruited Mike Kirby, a Fermilab physicist, to figure out how to get the most out of those 48 hours.

Rammer and Kirby agreed that the answer was a road trip. Their initial plan was a one-way trip to New Orleans. Another involved driving to Phoenix and crossing the border to Mexico for a concert. Tesla politely vetoed these options. Ultimately, Rammer and Kirby decided on an 867-mile drive from Chicago to Boston. Their goal was to pick up Jayatilaka, a physicist working on the CMS experiment, and bring him back to Fermilab. To document their antics, the group hired a film crew of six to follow them on their wild voyage from the Windy City to Beantown.

Jayatilaka joked that he didn’t trust Rammer and Kirby to arrange the trip on their own, so they also drafted Jen Raaf, a Fermilab physicist on the MicroBooNE experiment, whose organizational skills would balance their otherwise chaotic approach.

“There was no preparing. Every time I brought it up Tom said, ‘Eh, it’ll get done,’” Raaf laughed. Jayatilaka added that shortly after Raaf came on board they started seeing spreadsheets sent around and itineraries being put together.

“I had also made contingency plans in case we couldn’t make it to Boston,” Raaf said, with a hint of foreshadowing.

The Tesla plots the return trip to Chicago, locating the nearest charging station. Photo: Sam Paakkonen

On May 29, Rammer, Kirby and Raaf picked up the Tesla and embarked on their journey. The car’s name was Barbara. She was a black Model S P85D, top of the line, and she could go from zero to 60 in 3.2 seconds.

“I think the physics of it is really interesting,” Jayatilaka said. “The reason it’s so fast is that the motor is directly attached to wheels. With cars we normally drive there is a very complicated mechanical apparatus that converts small explosions into something that turns far away from where the explosions are. And this thing just goes. You press the button and it goes.”

The trip started out on flat terrain, making for smooth, easy driving. But eventually the group hit mountains, which ate up Barbara’s battery capacity. In the spirit of science, these physicists pushed the boundaries of what they knew, testing Barbara’s limits as they braved undulating roads, encounters with speed-hungry Porsches and Canadian border patrol.

“If you have something and it’s automated, you need to know the limitations of that algorithm. The computer does a great job of calculating the range for a given charge, but we do much better knowing the terrain and what’s going to happen. We need to figure out what we are better at and what the algorithm is better at,” Kirby said. “The trip was about learning the car. The algorithm is going to get better because of all of the experiences of all of the drivers.”

The result of the experiment was that Barbara didn’t make it all the way to Boston. As they approached the east coast, it became clear to Kirby and Raaf that they wouldn’t have made it back in time to drop off the car. Although Rammer was determined to see the trip through to the end, he eventually gave in somewhere in New Jersey, and they decided to cut the trip short. Jayatilaka met the group in a parking lot in Springfield, Massachusetts, and they plotted the quickest route back to Chicago.

Flash forward to that bleak moment on May 31. After crossing the border, just as things were looking hopeless, Barbara’s systems suddenly came back to life. She directed the group to a charging station in chilly Kingston, Ontario. Around 6:30 in the morning, they rolled into the station. The battery level: zero percent. After a long charge and another full day of driving, they pulled into the Tesla dealership in Chicago around 8:55 p.m., minutes before their time with Barbara was up.

“The car was just alien technology to us when we started,” Jayatilaka said. “It was completely unfamiliar. We all came away from it thinking that we could have done this road trip so much better with those two days of experience. We felt like we actually understood.”

Ali Sundermier

by Fermilab at August 27, 2015 01:40 PM

Tommaso Dorigo - Scientificblogging

Thou Shalt Have One Higgs - $100 Bet Won!
One of the important things in life is to have a job you enjoy and which is a motivation for waking up in the morning. I can say I am lucky enough to be in that situation. Besides providing me with endless entertainment through the large dataset I enjoy analyzing, and the constant challenge to find new ways and ideas to extract more information from data, my job also gives me the opportunity to gamble - and win money, occasionally.

by Tommaso Dorigo at August 27, 2015 09:41 AM

arXiv blog

The 20 Most Infamous Cyberattacks of the 21st Century (Part II)

The number of cyberattacks is on the increase. Here is the second part of a list of the most egregious attacks of this millennium.

August 27, 2015 04:21 AM

astrobites - astro-ph reader's digest

From Large to Small: Astrophysical Signs of Dark Matter Particle Interactions

Title: Dark Matter Halos as Particle Colliders: A Unified Solution to Small-Scale Structure Puzzles from Dwarfs to Clusters
Authors: M. Kaplinghat, S. Tulin, H.-B. Yu
First Author’s Institution: Department of Physics and Astronomy, University of California, Irvine, CA



The very large helps us to learn about the very small, as anyone who’s stubbed a toe—rudely brought face to face with the everyday quantum reality of Pauli’s exclusion principle and the electrostatic repulsion of electrons—knows.  Astrophysics, the study of the largest things in existence, is no exception to this marvelous fact. One particularly striking example is dark matter. It’s been a few decades since we realized that it exists, but we remain woefully unenlightened as to what this mysterious substance might be made of. Theories on its nature are legion—it’s hot! it’s cold! it’s warm! it’s sticky! it’s fuzzy! it’s charged! it’s atomic! it’s MACHO! it’s WIMP-y! it’s a combo of the above!

How are we to navigate and whittle down this veritable circus of dark matter particle theories? It turns out that an assumption about the nature of the subatomic dark matter particle can lead to observable effects on astrophysical scales.  The game of tracing from microphysics to astrophysics has identified a clear set of dark matter properties: it’s cold (thus its common appellation, “cold dark matter,” or CDM, for short), collisionless, stable (i.e. it doesn’t spontaneously decay), and neutrally charged (unlike protons and electrons). CDM’s been wildly successful at explaining many astrophysical observations, except for one—it fails to reproduce the small scale structure of the universe (that is, at galaxy cluster scales and smaller). Dark matter halos at such scales, for instance, are observed to have constant-density cores, while CDM predicts peaky centers.

What aspect of the dark matter particle might we have overlooked that can explain away the small scale problems of CDM?  One possibility is that dark matter is “sticky.” Sticky dark matter particles can collide with other dark matter particles, or are “self-interacting” (thus the model’s formal name, self-interacting dark matter, or SIDM for short). Collisions between dark matter particles can redistribute angular momentum in the centers of dense dark matter halos, pushing particles with little angular momentum in the centers of peaky dark matter halos outwards—producing cores.  If you know how sticky the dark matter is (quantitatively described by the dark matter particle’s self-interaction cross section, which gives the probability that two dark matter particles will collide) you can predict the sizes of these cores.

The authors of today’s paper derived the core sizes of observed dark matter halos ranging in mass from 10^9 to 10^15 solar masses—which translates to dwarf galaxies up through clusters of galaxies—then derived the self-interaction cross sections that the size of each halo’s core implied. This isn’t particularly new work, but it’s the first time that this has been done for an ensemble of dark matter halos.  Since halos with different masses have different characteristic velocities (i.e. velocity dispersions), this lets us measure whether dark matter is more or less sticky at different velocities.  Their range of halo masses allowed them to probe a velocity range from 20 km/s (in dwarf galaxies) to 2000 km/s (in galaxy clusters).

And what did they find? The cross section appears to have a velocity dependence, but a weak one. For the halos of dwarf galaxies, a cross section of about 1.9 cm^2/g is preferred, whereas for the largest halos, those of galaxy clusters, they find that a cross section that’s an order of magnitude smaller—about 0.1 cm^2/g—is preferred. There’s some scatter in the results, but the scatter can be accounted for by differences in how concentrated the dark matter in each halo is (which depends on how it formed).
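To see just how weak this velocity dependence is, one can fit a single power law sigma ∝ v^n through the two preferred values quoted above. This is a back-of-the-envelope sketch, not the authors' actual fit:

```python
import math

# Preferred cross sections (cm^2/g) at characteristic velocities (km/s),
# as quoted above for dwarf-galaxy and galaxy-cluster halos
sigma_dwarf, v_dwarf = 1.9, 20.0
sigma_cluster, v_cluster = 0.1, 2000.0

# Exponent n of a power law sigma = A * v**n passing through both points
n = math.log(sigma_cluster / sigma_dwarf) / math.log(v_cluster / v_dwarf)
print(round(n, 2))  # -0.64: a factor-of-~19 drop over two decades in velocity
```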

But that’s just the tip of the iceberg.  The velocity dependence can be used to back out even more details about the dark matter particle itself. To demonstrate this, the authors assume a simple dark matter model, in which dark matter-dark matter interactions occur with the help of a second, nearly massless particle (the “mediator”)—the “dark photon” model. Under these assumptions, they predict that the dark matter particle has a mass of about 15 GeV, and the mediator has a mass of about 17 MeV.

These are exciting and illuminating results, but we are still a long way from our goal of identifying the dark matter particle.  The authors’ analysis did not include baryonic effects such as supernova feedback, which can also help produce cores (but may not be able to fully account for them), and better constraints on the self-interaction cross section are needed (based on merging galaxy clusters, for instance).  The astrophysical search for more details on the elusive dark matter particle continues!




Cover image:  The Bullet Cluster.  Overlaid on an HST image are a weak lensing mass map in blue and a map of the gas (as traced by its X-ray emission, measured by Chandra) in pink.  The clear separation between the mass and the gas was a smoking gun for the existence of dark matter. It’s also been intensely studied for signs of dark matter self-interactions.

Disclaimer:  I’ve collaborated with the first author of this paper, but chose to write on this paper because I thought it was cool, not as an advertisement!

by Stacy Kim at August 27, 2015 02:52 AM

August 26, 2015

Christian P. Robert - xi'an's og

abcrf 0.9-3

In conjunction with our reliable ABC model choice via random forest paper, about to be resubmitted to Bioinformatics, we have contributed an R package called abcrf that produces a most likely model and its posterior probability out of an ABC reference table. This goes along with the realisation that we could devise an approximation to the (ABC) posterior probability using a secondary random forest. “We” meaning Jean-Michel Marin and Pierre Pudlo, as I only acted as a beta tester!

The package abcrf consists of three functions:

  • abcrf, which constructs a random forest from a reference table and returns an object of class `abc-rf’;
  • plot.abcrf, which gives both the variable importance plot of a model choice abc-rf object and the projection of the reference table on the LDA axes;
  • predict.abcrf, which predicts the model for new data and evaluates the posterior probability of the MAP.

An illustration from the manual:

# construct the random forest from the first 10^3 rows of the reference table
mc.rf <- abcrf(snp[1:1e3, 1], snp[1:1e3, -1])
# select the most likely model for the observed data and estimate its posterior probability
predict(mc.rf, snp[1:1e3, -1], snp.obs)

Filed under: R, Statistics, University life Tagged: ABC, ABC model choice, abcrf, bioinformatics, CRAN, R, random forests, reference table, SNPs

by xi'an at August 26, 2015 10:15 PM

Lubos Motl - string vacua and pheno

Gaillard vs Ferrara 1981: are these discussions sane?
A sickeningly direct perspective into the feminist manipulations within the Academia

A month ago, the accomplished phenomenologist Mary Gaillard (Berkeley) released her book "A Singularly Unfeminine Profession: One Woman's Journey in Physics". As you can imagine, the title has a similar effect on me as a red towel has on a bull. I am even inclined to think that the title – and probably much of the content – is considered repulsive by most of the potential readers. Amazon only offers two short reviews – one five-star review (saying "great read" and "fantastic") and one one-star review which says that the book is boring and at one point, it talks about skiing trips and getting a new credit card. Nothing interesting to be found in the book, we hear.

Gaillard has collected more than 16,000 citations and has worked on things like superstring phenomenology – after she was deriving masses of heavy quarks from grand unified theories and similar things. There could be many interesting physics things to explain.

Instead, most of the book seems to be about whining, boasting, feminism, and similar crap. At least that's how I understand the review in Nature titled Physics: She did it all. No, I don't believe for a second that "she did it all".

Val Gibson, the reviewer, tells us quite some details about the purely personal stories that this book is full of. When she was starting the graduate school, her husband (or soon-to-be-husband) Jean-Marc Gaillard (so far a postdoc) got a job in Orsay. Such "two-body problems" may be difficult but it was natural for her to move to Orsay, too. She complains that for a year, she had to do some work at home. What's wrong with that? Most people – and especially women – do some work at home. And a graduate student who moved from a different continent just can't be offered a professor job just to save her from household chores.

Later, she complains that she had to deal with three children and also face "gender bias". What's wrong with children? And didn't she have to answer "Yes" to a question before they were born, anyway? We learn that she forgot to pick up her son from a music lesson in cold weather, and gave him insufficient money for a bus so he was fined. But she also thanked him (when he was 9) for some calculations in the acknowledgements of the "penguin diagram" paper (I am pretty sure that this acknowledgement was bogus).

The framing of the stories seems to make it obvious that for this lady, the career and the credit were always above the scientific truth.

The greatest injustice apparently was that she wasn't offered a job by the CERN theory group in 1981. Peter W*it at the notorious far-left anti-science website, "Not Even Wr*ng", even complains that Sergio Ferrara was probably hired instead of her. Even if this story were accurate, we must say: What a crime! You may check that Ferrara's citation count is roughly twice as high as Gaillard's – close to 40,000 according to Google Scholar and 32,000 according to INSPIRE. Especially because of Ferrara's pioneering contributions to the newly born supergravity, I think that Ferrara is ultimately a more important and more original physicist.

Someone may disagree. But are these people serious when they try to attack the 1981 CERN admission committees just because they didn't hire a female candidate from a list of similarly fit candidates? Aren't these critics ashamed for their staggeringly obvious "gender bias"?

And I am not even mentioning the sociological issues that may have played a role in those decisions. The CERN theory group was employing both Jean-Marc Gaillard, Mary Gaillard's first husband whom she divorced in 1981, as well as Bruno Zumino, her second husband whom she married soon afterwards (and who died in 2014).

Whatever the exact relationships and justifications of the divorce were, can you imagine the tension of a workplace that would employ both the wife as well as the two husbands? And I think that the intense mixture of the physics and personal topics makes it obvious that Mary Gaillard worked hard to entangle the hiring decisions with her decisions in the personal life in many convoluted ways.

It seems plausible that in 1981, she demanded that CERN fire her first husband so that she might stay there happily with the second one, or something of the sort. At any rate, something like that would probably have been needed if they wanted to hire her. Such considerations shouldn't be primary but they may still be sufficient. It was much healthier for Bruno Zumino and his new wife to leave.

Even if there were a hiring mistake of CERN in 1981, and I don't see any evidence that there was one, it was simply a decision that was made by some people who had the power to decide about these matters in 1981. You can't rewrite the history. You are not the director general of CERN from 1981.

As the previous paragraph suggests, the top villain of the book is Leon Van Hove (yes, the Van Hove singularity in crystals), a former director general of CERN, who subjected her to the "determined antifeminism". What a sin. In reality, every decent person is a "determined antifeminist". Like fascism, feminism is a totalitarian ideology that has to be firmly opposed, especially in the workplace where some people attempt to achieve certain things by combining this ideology with their being female.

The book apparently mixes the picture of her being permanently suppressed with that of her being a "grande dame". This mixture of pride and victimhood sounds kind of inconsistent to me.

We learn that she was a prolific member of assorted feminist task forces as early as in 1980 when she complained that 3% of CERN staff were female. We hear that Gaillard became a "feminist out of the necessity". Well, right, if you're female and so obsessed with your career that you are willing to do any dirty thing one may imagine, you will probably become a feminist. It just happens that the first female appointed to a CERN senior position afterwards (in 1994), Fabiola Gianotti, will become the director general from January 2016. Nothing against her – I think she's great – but if you claimed that this promotion has had nothing to do with reverse sexism, you will leave me deeply skeptical.

Pretty much every scientist – and perhaps every human being – has been facing similar hurdles and enjoying similar successes. The idea that these human stories prove that there was some systematic harassment against the women is just plain dishonest. The main difference is that most men wouldn't whine about such common things so much.

I think that if I wanted to viscerally hate her, I would buy and read the book. At the same moment, I probably feel nice about this book's being so extremely far from the bestseller status.

by Luboš Motl ( at August 26, 2015 08:26 PM

Peter Coles - In the Dark

A Very Clever Experimental Test of a Bell Inequality

Travelling and very busy for most of today, so not much time to post. I did, however, get time to peruse a very nice paper I saw on the arXiv, with the following abstract:

For more than 80 years, the counterintuitive predictions of quantum theory have stimulated debate about the nature of reality. In his seminal work, John Bell proved that no theory of nature that obeys locality and realism can reproduce all the predictions of quantum theory. Bell showed that in any local realist theory the correlations between distant measurements satisfy an inequality and, moreover, that this inequality can be violated according to quantum theory. This provided a recipe for experimental tests of the fundamental principles underlying the laws of nature. In the past decades, numerous ingenious Bell inequality tests have been reported. However, because of experimental limitations, all experiments to date required additional assumptions to obtain a contradiction with local realism, resulting in loopholes. Here we report on a Bell experiment that is free of any such additional assumption and thus directly tests the principles underlying Bell’s inequality. We employ an event-ready scheme that enables the generation of high-fidelity entanglement between distant electron spins. Efficient spin readout avoids the fair sampling assumption (detection loophole), while the use of fast random basis selection and readout combined with a spatial separation of 1.3 km ensure the required locality conditions. We perform 245 trials testing the CHSH-Bell inequality S≤2 and find S=2.42±0.20. A null hypothesis test yields a probability of p=0.039 that a local-realist model for space-like separated sites produces data with a violation at least as large as observed, even when allowing for memory in the devices. This result rules out large classes of local realist theories, and paves the way for implementing device-independent quantum-secure communication and randomness certification.

While there’s nothing particularly surprising about the result – the nonlocality of quantum physics is pretty well established – this is a particularly neat experiment so I encourage you to read the paper!
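For concreteness, the key numbers in the abstract can be put side by side: the local-realist bound S ≤ 2, the maximum that quantum mechanics allows (the Tsirelson bound, 2√2), and the measured value. A quick sketch; note that the paper's quoted p = 0.039 comes from a careful null hypothesis test, not from this naive sigma count:

```python
from math import sqrt

s_local_bound = 2.0           # CHSH bound for any local realist theory
s_tsirelson = 2 * sqrt(2)     # maximum value allowed by quantum mechanics
s_measured, err = 2.42, 0.20  # reported result of the 245 trials

print(round(s_tsirelson, 3))                         # 2.828
print(round((s_measured - s_local_bound) / err, 1))  # 2.1 (sigmas above the bound)
```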

Perhaps some day someone will carry out this, even neater, experiment!

PS Anyone know where I can apply for a randomness certificate?

by telescoper at August 26, 2015 04:45 PM

Emily Lakdawalla - The Planetary Society Blog

Webcomic: Poetry in space
Take a delightful, pixelated journey with French artist Boulet as he explains his love for the "infinite void" of the "mathematical skies."

August 26, 2015 03:11 PM

Symmetrybreaking - Fermilab/SLAC

Scientists accelerate antimatter

Accelerating positrons with plasma is a step toward smaller, cheaper particle colliders.

A study led by researchers from SLAC National Accelerator Laboratory and the University of California, Los Angeles, has demonstrated a new, efficient way to accelerate positrons, the antimatter opposites of electrons. The method may help boost the energy and shrink the size of future linear particle colliders—powerful accelerators that could be used to unravel the properties of nature’s fundamental building blocks.

The scientists had previously shown that boosting the energy of charged particles by having them “surf” a wave of ionized gas, or plasma, works well for electrons. While this method by itself could lead to smaller accelerators, electrons are only half the equation for future colliders. Now the researchers have hit another milestone by applying the technique to positrons at SLAC’s Facility for Advanced Accelerator Experimental Tests, a US Department of Energy Office of Science user facility.

“Together with our previous achievement, the new study is a very important step toward making smaller, less expensive next-generation electron-positron colliders,” says SLAC’s Mark Hogan, co-author of the study published today in Nature. “FACET is the only place in the world where we can accelerate positrons and electrons with this method.”

SLAC Director Chi-Chang Kao says, “Our researchers have played an instrumental role in advancing the field of plasma-based accelerators since the 1990s. The recent results are a major accomplishment for the lab, which continues to take accelerator science and technology to the next level.”

Shrinking particle colliders

Researchers study matter’s fundamental components and the forces between them by smashing highly energetic particle beams into one another. Collisions between electrons and positrons are especially appealing, because unlike the protons being collided at CERN’s Large Hadron Collider – where the Higgs boson was discovered in 2012 – these particles aren’t made of smaller constituent parts.

“These collisions are simpler and easier to study,” says SLAC’s Michael Peskin, a theoretical physicist not involved in the study. “Also, new, exotic particles would be produced at roughly the same rate as known particles; at the LHC they are a billion times more rare.”

However, current technology to build electron-positron colliders for next-generation experiments would require accelerators that are tens of kilometers long. Plasma wakefield acceleration is one way researchers hope to build shorter, more economical accelerators.

Previous work showed that the method works efficiently for electrons: When one of FACET’s tightly focused bundles of electrons enters an ionized gas, it creates a plasma “wake” that researchers use to accelerate a trailing second electron bunch.

Computer simulations of the interaction of electrons (left) and positrons (right) with a plasma.

Artwork by: W. An, UCLA

Creating a plasma wake for antimatter

For positrons—the other required particle ingredient for electron-positron colliders—plasma wakefield acceleration is much more challenging. In fact, many scientists believed that no matter where a trailing positron bunch was placed in a wake, it would lose its compact, focused shape or even slow down.

“Our key breakthrough was to find a new regime that lets us accelerate positrons in plasmas efficiently,” says study co-author Chandrashekhar Joshi from UCLA.

Instead of using two separate particle bunches—one to create a wake and the other to surf it—the team discovered that a single positron bunch can interact with the plasma in such a way that the front of it generates a wake that both accelerates and focuses its trailing end. This occurs after the positrons have traveled about four inches through the plasma.  

“In this stable state, about 1 billion positrons gained 5 billion electronvolts of energy over a short distance of only 1.3 meters,” says former SLAC researcher Sebastien Corde, the study’s first author, who is now at the Ecole Polytechnique in France. “They also did so very efficiently and uniformly, resulting in an accelerated bunch with a well-defined energy.”
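The numbers quoted above imply an accelerating gradient of several gigavolts per meter. As a back-of-envelope check (not a calculation from the paper; the ~20 MV/m figure used for comparison is a typical conventional radio-frequency linac gradient, assumed here):

```python
# Back-of-envelope check of the quoted plasma-wakefield numbers.
# From the article: 5 billion electronvolts gained over 1.3 meters.
# The conventional RF gradient used for comparison is an assumption.

ENERGY_GAIN_EV = 5e9          # energy gain quoted in the article, in eV
DISTANCE_M = 1.3              # plasma length quoted in the article, in m
RF_GRADIENT_V_PER_M = 20e6    # typical conventional RF linac gradient (assumed)

# eV gained per meter per unit charge is numerically the gradient in V/m
gradient = ENERGY_GAIN_EV / DISTANCE_M

print(f"Plasma gradient: {gradient / 1e9:.1f} GV/m")
print(f"Roughly {gradient / RF_GRADIENT_V_PER_M:.0f}x a conventional RF linac")
```

This is why a short plasma stage can, in principle, replace kilometers of conventional accelerating structure.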

Looking into the future

All of these properties are important qualities for particle beams in accelerators. In the next step, the team will look to further improve their experiment.

“We performed simulations to understand how the stable state was created,” says co-author Warren Mori of UCLA. “Based on this understanding, we can now use simulations to look for ways of exciting suitable wakes in an improved, more controlled way. This will lead to ideas for future experiments.”

This study underscores the critical importance of test facilities such as FACET, says Lia Merminga, associate laboratory director for accelerators at TRIUMF in Canada.

“Plasma wakefield acceleration of positrons has been a longstanding problem in this field,” she says. “Today's announcement is a breakthrough that offers a possible solution.”

Although plasma-based particle colliders will not be built in the near future, the method could be used to upgrade existing accelerators much sooner.

“It’s conceivable to boost the performance of linear accelerators by adding a very short plasma accelerator at the end,” Corde says. “This would multiply the accelerator’s energy without making the entire structure significantly longer.”

Additional contributors included researchers from the University of Oslo in Norway and Tsinghua University in China. The research was supported by the US Department of Energy, the National Science Foundation, the Research Council of Norway and the Thousand Young Talents Program of China.

This article is based on a SLAC press release.


Like what you see? Sign up for a free subscription to symmetry!

August 26, 2015 01:00 PM

ZapperZ - Physics and Physicists

She's Still Radioactive!
She, as in Marie Curie.

This article examines what has happened to the personal effects of Marie Curie, the "Mother of Modern Physics".

Still, after more than 100 years, many of Curie's personal effects, including her clothes, furniture, cookbooks, and laboratory notes, remain contaminated by radiation, the Christian Science Monitor reports.

Regarded as national and scientific treasures, Curie's laboratory notebooks are stored in lead-lined boxes at France's national library in Paris.

While the library allows visitors to view Curie's manuscripts, all guests are expected to sign a liability waiver and wear protective gear as the items are contaminated with radium 226, which has a half-life of about 1,600 years, according to Christian Science Monitor.

What they didn't report, and this is where the devil-is-in-the-details part is missing, is the level of radioactivity given off by these objects. You just don't want to sign something without knowing the level you will be exposed to. (For comparison, if you work in the US or at a US National Lab, an RWP (radiation work permit) must be posted at the door detailing the type of radiation and the level at a certain distance.)

I suspect that this level is just slightly above background, and that's why they are isolated, but not large enough for concern. Still, the nit-picker in me would like to know such details!


by ZapperZ at August 26, 2015 12:55 PM

astrobites - astro-ph reader's digest

SETI Near and Far – Searching for Alien Technology




Think like an Alien

Without a doubt, one of the most profound questions ever asked is whether there are other sentient, intelligent lifeforms in the Universe. Countless books, movies, TV shows, and radio broadcasts have fueled our imagination as to what intelligent alien life might look like, what technology they would harness, and what they would do when confronted by humanity. Pioneers such as Frank Drake and Carl Sagan transformed this quandary from the realm of the imagination to the realm of science with the foundation of the Search for Extraterrestrial Intelligence (SETI) Institute. The search for extraterrestrial intelligence goes far beyond listening for radio transmissions and sending out probes carrying golden disks encoded with humanity’s autobiography. Some of the other ways astronomers have been attempting to quantify the amount of intelligent life in the Universe can be found in all these astrobites, and today’s post summarizes two recent additions to astro-ph that study how we might look for alien technology using the tools in our astrophysical arsenal. The aim of both these studies is to search for extraterrestrial civilizations that may have developed technologies and structures that are still out of our reach, and these technologies may have observable effects that we can see from Earth. This post provides a very brief overview of these studies, so check out the actual articles for a more in-depth and interesting read!

Sailing through the Solar System


Figure 1. Artist's rendition of a light sail.

The future of space exploration via rocket propulsion faces a dilemma. To travel interplanetary distances in a reasonable amount of time we need to travel really fast, and to go really fast rockets need lots of fuel. However, lots of fuel means lots of weight, and lots of weight means it takes more fuel to accelerate. One popular idea for the future of space travel is the use of light sails (see figure 1), which would use radiation pressure to accelerate a spacecraft without the burden of exorbitant amounts of chemical fuel. Though the sail could reflect sunlight as a means of propulsion, beaming intense radiation from a planet to the light sail could provide more propulsion, especially at greater distances from the star (if the sail had perfect reflectivity and was located 1 AU away from a sun-like star, the solar radiation would only provide a force of about 10 micronewtons per square meter of the sail, which is about the force required to hold up a strand of hair against the acceleration of Earth’s gravity). Hopefully in the not-so-distant future, we will be able to use this technology for quick and efficient interplanetary travel. Intelligent life in our galaxy, however, may already be utilizing this means of transportation. But how would we be able to tell if someone out there is using something like this?
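The figure quoted in the parenthesis follows from a one-line estimate: for a perfectly reflecting sail, the radiation pressure is twice the incident flux divided by the speed of light. A minimal sketch (the solar-constant value is a standard figure, not taken from the paper):

```python
# Radiation pressure on a perfectly reflecting sail at 1 AU from a sun-like star.
# For full reflection, pressure P = 2 * S / c, where S is the incident flux.

SOLAR_FLUX_1AU = 1361.0    # W/m^2, the solar constant at 1 AU (standard value)
C = 299_792_458.0          # m/s, speed of light

# Force per square meter of sail, in newtons
pressure = 2 * SOLAR_FLUX_1AU / C

# ~9 micronewtons per m^2, consistent with the ~10 quoted in the text
print(f"Force per square meter: {pressure * 1e6:.1f} micronewtons")
```

The factor of 2 is the perfect-reflectivity assumption; a perfectly absorbing sail would feel half this force.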


Figure 2. Diagram showing the likely leakage for a light sail system developed for Earth-Mars transit. The dashed cyan arrow shows the path of the light sail, and the beam profile is shaded in green. The inset shows the log of the intensity within the beam profile in the Fraunhofer regime. Figure 1 in paper 1.

The authors of paper 1 analyze this means of transportation and the accompanying electromagnetic signature we may be able to observe by studying a mock launch of a spacecraft from Earth to Mars. Without delving into too much detail about the optics of the beam, during part of the acceleration period of the spacecraft some of the beamed radiation will be subject to “leakage,” missing the sail and propagating out into space (figure 2). Since the two planets the ship is travelling between would lie on nearly the same orbital plane (like Earth and Mars), the radiation beam and subsequent leakage would be directed along the orbital plane as well. However, like a laser beam, the leakage would be concentrated on a very small angular area, and to have any chance of detecting the leakage from this mode of transportation we would need to be looking at exoplanetary systems that are edge-on as viewed from the Earth…exactly the kind of systems that are uncovered by transit exoplanet surveys like Kepler! Also, assuming an alien civilization is as concerned as we are about minimizing cost and maximizing efficiency, the beaming arrays would likely utilize microwave radiation. This would make the beam more easily distinguishable from the light of the host star and allow it to be detectable from distances on the order of 100 parsecs by SETI radio searches using telescopes such as Parkes and Green Bank.

Nature’s Nuclear Reactors


Figure 3. Artist's rendition of a Dyson sphere.

Though most SETI efforts are confined to our own galaxy, there are potential methods by which we can uncover an alien supercivilization in a galaxy far, far away. As an intelligent civilization grows in population and technological capabilities, it is assumed that their energy needs will exponentially increase. A popular concept in science fiction to satisfy this demand for energy is a Dyson sphere (see figure 3). These megastructures essentially act as giant solar panels that completely encapsulate a star, capturing most or all of the star’s energy and using it for the energy needs of an advanced civilization. To get an idea of how much energy this could provide, if we were able to capture all of the energy leaving the Sun with a 100% efficient Dyson sphere, it would give us enough energy to power 2 trillion Earths given our current energy consumption. If this much energy isn’t enough, alien super-civilizations could theoretically repeat this process for other stars in their galaxy. Paper 2 considers this type of super-civilization (known as a Kardashev Type III Civilization) and how we may be able to detect their presence in distant galaxies.
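The "trillions of Earths" scale of this claim is easy to check to order of magnitude. In this sketch the solar luminosity is a standard value, while the global power-consumption figure is an assumption (the exact multiple depends strongly on which consumption estimate you use):

```python
# Order-of-magnitude check: total solar output vs. humanity's power use.
# L_SUN is a standard value; WORLD_POWER is an assumed rough figure for
# global primary power consumption, and the result scales inversely with it.

L_SUN = 3.8e26        # W, luminosity of the Sun
WORLD_POWER = 1.8e13  # W, rough present-day global power consumption (assumed)

earths = L_SUN / WORLD_POWER
print(f"A fully efficient Dyson sphere could power ~{earths:.0e} present-day Earths")
```

Whatever consumption figure is assumed, the ratio lands in the trillions, which is the point of the comparison in the text.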

The key to detecting this incredibly advanced type of astroengineering is by using the Tully-Fisher relationship – an empirical relationship relating the luminosity of a spiral galaxy to the width of its emission lines (a gauge of how fast the galaxy is rotating). If an alien super-civilization were to harness the power of a substantial fraction of the stars in their galaxy, the galaxy would appear dimmer to a distant observer since a large portion of its starlight is being absorbed by the Dyson spheres. These galaxies would then appear to be distinct outliers in the Tully-Fisher relationship, since the construction of Dyson spheres would have little effect on the galaxy’s gravitational potential and rotational velocity, but decrease its observable luminosity. The authors of this study looked at a large sample of spiral galaxies, and picked out the handful that were underluminous by 1.5 magnitudes (75% less luminous) compared to the Tully-Fisher relationship for further analysis (figure 4).
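The selection described here amounts to fitting a line in magnitude versus log line width and flagging galaxies that fall well below it. A schematic version with made-up inputs (the slope, zero point, and galaxy values are illustrative placeholders, not the paper's fit or data):

```python
# Schematic Tully-Fisher outlier selection: flag galaxies more than
# 1.5 mag fainter than the relation predicts. The relation coefficients
# and the sample below are illustrative, not the paper's values.

def tf_predicted_mag(log_width, slope=-8.0, zero_point=-4.0):
    """Toy Tully-Fisher relation: absolute magnitude vs. log10 line width."""
    return slope * log_width + zero_point

def underluminous_outliers(galaxies, threshold=1.5):
    """Names of galaxies >= `threshold` mag fainter (numerically larger
    magnitude) than the relation predicts."""
    flagged = []
    for name, log_width, mag in galaxies:
        if mag - tf_predicted_mag(log_width) >= threshold:
            flagged.append(name)
    return flagged

# (name, log10 line width, observed absolute magnitude) -- hypothetical
sample = [
    ("NGC-A", 2.5, -24.0),  # sits on the relation (predicted -24.0)
    ("NGC-B", 2.4, -21.5),  # predicted -23.2, so 1.7 mag underluminous
    ("NGC-C", 2.6, -24.5),  # slightly overluminous
]
print(underluminous_outliers(sample))  # ['NGC-B']
```

In the actual study the candidates flagged this way were then vetted individually, since distance errors and inclination effects can mimic underluminosity.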


Figure 4. A Tully-Fisher diagram containing the sample of objects chosen in the study. The solid line indicates the Tully-Fisher relationship, with the y-axis as the I-band magnitude and the x-axis as the log line width. Numbered dots mark the 11 outliers more than 1.5 mag less luminous from the Tully-Fisher relationship, with blue, green, and red indicating classes of differing observational certainty (see paper 2 for more details). Figure 1 in paper 2.

To further gauge whether these candidates have sufficient evidence supporting large-scale astroengineering, the authors looked at their infrared emission. Dyson spheres would likely be efficient at absorbing optical and ultraviolet radiation, but would still need to radiate away excess heat in the infrared. In theory, if one of the candidate galaxies had low optical/ultraviolet luminosity but an excess in the infrared, it could provide more credence to the galaxy-wide Dyson sphere hypothesis. However, in reality, this becomes a highly non-trivial problem that depends on the types of stars associated with Dyson spheres, the temperature at which the spheres operate, and the dust content of the galaxy (see paper 2 for more details). Needless to say, better evidence of large-scale astroengineering in a distant galaxy would require a spiral galaxy with very well-measured parameters to be a strong outlier in the Tully-Fisher relationship. Though none of the candidates in this study showed clear signs of alien engineering, the authors were able to set a tentative upper limit of ~0.3% of disk galaxies harboring Kardashev Type III Civilizations. Though an extraterrestrial species this advanced is difficult to fathom, the Universe would be a very lonely place if humans were the only form of intelligent life, and this kind of imaginative exploration may one day tell us that we have company in the cosmos.

by Michael Zevin at August 26, 2015 02:01 AM

August 25, 2015

Christian P. Robert - xi'an's og

forest fires

Wildfires rage through the US West, with currently 33 going in the Pacific Northwest, 29 in Northern California, and 18 in the northern Rockies, and more area burned so far this year than in any of the past ten years. Drought, hot weather, high lightning frequency, and a shortage of firefighters across the US all are contributing factors…

Washington State is particularly stricken, and when we drove to the North Cascades from Mt. Rainier, we came across at least two fires, one near Twisp and the other one around Chelan… The visibility was quite poor, due to the amount of smoke, and, while the road was open, we saw many burned areas with residual fumaroles and even a minor bush fire that was apparently left to die out by itself. The numerous orchards around had been spared, presumably thanks to their irrigation systems.

The owner of a small café and fruit stand on Highway 20 told us about her employee, who had taken the day off to protect her home near Chelan, which had already burned down last year, among 300 or so houses. Later on our drive north, the air cleared up, but we saw many traces of past fires, like the one below near Hart’s Pass, which occurred in 2003 and has not yet seen regeneration. Wildfires have always been a reality in this area, witness the first US smokejumpers being based (in 1939) at Winthrop, in the Methow valley, but this does not make them less of an objective danger. (Which made me somewhat worried, as we were staying in a remote wooded area with no Internet or phone coverage to hear about evacuation orders. And a single evacuation route through a forest…)

Even when crossing the fabulous North Cascades Highway to the West and Seattle-Tacoma airport, we saw further smoke clouds, like this one near Goodall, after Lake Ross, with closed side roads and campgrounds.

And, when flying back on Wednesday, along the Canadian border, more fire fronts and smoke clouds were visible from the plane.

Little did we know then that the town of Winthrop, near which we stayed, was being evacuated at the time, that the North Cascades Highway was about to be closed, and that three firefighters had died in nearby Twisp… Kudos to all firefighters involved in those wildfires! (And a close call for us, as we would still be “stuck” there!)

Filed under: Mountains, pictures, Travel Tagged: Chelane, Goodall, Hart's Pass, Lake Ross, Methow river, Mondrian forests, Mount Rainier, North Cascades National Park, Pacific North West, Rockies, smokejumper, Twisp, Washington State, wildfire, Winthrop

by xi'an at August 25, 2015 10:15 PM

ATLAS Experiment

Getting ready for the next discovery

I’m just on my way back home after a great week spent in Ljubljana where I joined (and enjoyed!) the XXVII edition of the Lepton-Photon conference.


Ljubljana city center (courtesy of Revital Kopeliansky).

During the Lepton-Photon conference many topics were discussed, including particle physics at colliders, neutrino physics, astroparticle physics as well as cosmology.

In spite of the wide spectrum of scientific activities shown in Lepton-Photon, the latest measurements by the experiments at the Large Hadron Collider (LHC) based on 13 TeV proton-proton collision data were notable highlights of the conference and stimulated lively discussions.

The investigation of the proton-proton interactions in this new, yet unexplored, energy regime is underway using new data samples provided by LHC. One of the first analyses performed by ATLAS is the measurement of the proton-proton inelastic cross section; this analysis has a remarkable relevance for the understanding of cosmic-ray interactions in the terrestrial atmosphere, thus offering a natural bridge between experiments in high-energy colliders and astroparticle physics.

Dragon sculpture on the Dragon Bridge in Ljubljana.


While we are already greatly excited about the new results based on the 13 TeV collisions provided by LHC, it is also clear that the best is yet to come! As discussed during the conference, the Higgs boson discovered in 2012 by the ATLAS and CMS collaborations still has many unknown properties; its couplings with quarks and leptons need to be directly measured. Remarkably, by the end of next year, the data provided by LHC will have enough Higgs boson events to perform the measurements of many Higgs-boson couplings with good experimental accuracy.

Precision measurements of the Higgs boson properties offer a way to look for new physics at LHC, complementary to direct searches for new particles in the data. Direct searches for new particles, or new physics, at LHC will play a major role in the coming months and years.

A few “hints” of possible new-physics signals were already observed in the data collected by ATLAS at lower energy in 2011 and 2012. Unfortunately such hints are still far from any confirmation and the analysis of the 13 TeV proton-proton collision data will clarify the current intriguing scenarios.

Although LHC is in its main running phase, with many years of foreseen operation ahead of us, the future of particle physics is already being actively discussed, starting from the future world-wide accelerator facilities.

During Lepton-Photon, many projects were presented, including proposals for new infrastructure at CERN, in Japan and in China. All these proposals show a strong potential for major scientific discoveries and will be further investigated, laying the basis for particle physics for the next fifty years to come.

Social dinner during the Lepton-Photon conference.


Without a doubt one of the most inspiring moments of this conference was the public lecture about cosmological inflation given by Alan Guth. It attracted more than one thousand people from Ljubljana and stimulated an interesting debate. In his lecture, Alan Guth stressed the relevant steps forward taken by the scientific community in the understanding of the formation and the evolution of the Universe.

At the same time, Alan Guth remarked on our lack of knowledge of many basic aspects of our Universe, including the dark matter and dark energy puzzles. Dark energy is typically associated with very high energy scales, about one quadrillion times higher than the energy of protons accelerated by LHC; therefore, it is expected that dark energy can’t be studied with accelerated particle beams. On the other hand, dark matter particles are associated with much lower energy scales, and thus they are within the reach of many experiments, including ATLAS and CMS!

Nicola joined the ATLAS experiment in 2009 as a Master’s student at INFN Lecce and Università del Salento in Italy, where he also contributed to the ATLAS physics program as a PhD student. He is currently a postdoctoral researcher at Aristotle University of Thessaloniki. His main research activity concerns ATLAS Standard Model physics, including hard strong-interactions and electroweak measurements. Beyond particle physics, he loves traveling, hiking, kayaking, martial arts, contemporary art, and rock-music festivals.

by orlando at August 25, 2015 08:02 PM

Emily Lakdawalla - The Planetary Society Blog

Three space fan visualizations of New Horizons' Pluto-Charon flyby
It has been a difficult wait for new New Horizons images, but the wait is almost over; Alan Stern announced at today's Outer Planets Advisory Group meeting that image downlink will resume September 5. In the meantime, a few space fans are making the most of the small amount of data that has been returned to date.

August 25, 2015 06:42 PM

Emily Lakdawalla - The Planetary Society Blog

Outer Planet News
NASA's Outer Planet Analysis Group is currently meeting to hear the agency's current plans and to provide the feedback of the scientific community on those plans.

August 25, 2015 01:20 PM

Symmetrybreaking - Fermilab/SLAC

All about supernovae

Exploding stars have an immense capacity to destroy—and create.

Somewhere in the cosmos, a star is reaching the end of its life.

Maybe it’s a massive star, collapsing under its own gravity. Or maybe it’s a dense cinder of a star, greedily stealing matter from a companion star until it can’t handle its own mass.

Whatever the reason, this star doesn’t fade quietly into the dark fabric of space and time. It goes kicking and screaming, exploding its stellar guts across the universe, leaving us with unparalleled brightness and a tsunami of particles and elements. It becomes a supernova. Here are ten facts about supernovae that will blow your mind.

1. The oldest recorded supernova dates back almost 2000 years

In 185 AD, Chinese astronomers noticed a bright light in the sky. Documenting their observations in the Book of Later Han, these ancient astronomers noted that it sparkled like a star, appeared to be half the size of a bamboo mat and did not travel through the sky like a comet. Over the next eight months this celestial visitor slowly faded from sight. They called it a “guest star.”

Two millennia later, in the 1960s, scientists found hints of this mysterious visitor in the remnants of a supernova approximately 8000 light-years away. The supernova, SN 185, is the oldest known supernova recorded by humankind.

2. Many of the elements we’re made of come from supernovae

Everything from the oxygen you’re breathing to the calcium in your bones, the iron in your blood and the silicon in your computer was brewed up in the heart of a star.

As a supernova explodes, it unleashes a hurricane of nuclear reactions. These nuclear reactions produce many of the building blocks of the world around us. The lion’s share of elements between oxygen and iron comes from core-collapse supernovae, those massive stars that collapse under their own gravity. They share the responsibility of producing the universe’s iron with thermonuclear supernovae, white dwarves that steal mass from their binary companions. Scientists also believe supernovae are a key site for the production of most of the elements heavier than iron.

3. Supernovae are neutrino factories

In a 10-second period, a core-collapse supernova will release a burst of more than 10^58 neutrinos, ghostly particles that can travel undisturbed through almost everything in the universe.

Outside of the core of a supernova, it would take a light-year of lead to stop a neutrino. But when a star explodes, the center can become so dense that even neutrinos take a little while to escape. When they do escape, neutrinos carry away 99 percent of the energy of the supernova.

Scientists watch for that burst of neutrinos using an early warning system called SNEWS. SNEWS is a network of neutrino detectors across the world. Each detector is programmed to send a datagram to a central computer whenever it sees a burst of neutrinos. If more than two experiments observe a burst within 10 seconds, the computer issues an automatic alert to the astronomical community to look out for an exploding star.
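The coincidence logic described above can be sketched in a few lines. This is only an illustration of the idea (the detector names, window, and threshold are placeholders; it is not the actual SNEWS implementation):

```python
# Toy SNEWS-style coincidence trigger: raise an alert when bursts from
# two or more *different* detectors arrive within a 10-second window.
# Schematic illustration only, not the real SNEWS code.

def coincidence_alert(reports, window_s=10.0, min_detectors=2):
    """reports: list of (detector_name, burst_time_s) datagrams.
    True if bursts from >= min_detectors distinct detectors fall
    within one sliding window of length window_s."""
    reports = sorted(reports, key=lambda r: r[1])
    for i, (_, t0) in enumerate(reports):
        detectors = {name for name, t in reports[i:] if t - t0 <= window_s}
        if len(detectors) >= min_detectors:
            return True
    return False

print(coincidence_alert([("Super-K", 100.0), ("IceCube", 104.2)]))  # True
print(coincidence_alert([("Super-K", 100.0), ("Super-K", 104.2)]))  # False: same detector
print(coincidence_alert([("Super-K", 100.0), ("IceCube", 150.0)]))  # False: outside window
```

Requiring distinct detectors is what suppresses false alarms from noise in any single experiment.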

But you don’t have to be an expert astronomer to receive an alert. Anyone can sign up to be among the first to know that a star's core has collapsed.

4. Supernovae are powerful particle accelerators

Supernovae are natural space laboratories; they can accelerate particles to at least 1000 times the energy of particles in the Large Hadron Collider, the most powerful collider on Earth.

The interaction between the blast of a supernova and the surrounding interstellar gas creates a magnetized region, called a shock. As particles move into the shock, they bounce around the magnetic field and get accelerated, much like a basketball being dribbled closer and closer to the ground. When they are released into space, some of these high-energy particles, called cosmic rays, eventually slam into our atmosphere, colliding with atoms and creating showers of secondary particles that rain down on our heads.

5. Supernovae produce radioactivity

In addition to forging elements and neutrinos, the nuclear reactions inside of supernovae also cook up radioactive isotopes. Some of this radioactivity emits light signals, such as gamma rays, that we can see in space.

This radioactivity is part of what makes supernovae so bright. It also provides us with a way to determine if any supernovae have blown up near Earth. If a supernova occurred close enough to our planet, we’d be sprayed with some of these unstable nuclei. So when scientists come across layers of sediment with spikes of radioactive isotopes, they know to investigate whether what they’ve found was spit out by an exploding star.

In 1998, physicists analyzed crusts from the bottom of the ocean and found layers with a surge of ⁶⁰Fe, a rare radioactive isotope of iron that can be created in copious amounts inside supernovae. Using the rate at which ⁶⁰Fe decays over time, they were able to calculate how long ago it landed on Earth. They determined that it was most likely dumped on our planet by a nearby supernova about 2.8 million years ago.
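The dating step works like any radiometric clock: from the surviving fraction of the isotope and its half-life, invert the exponential decay law. A minimal sketch (the ⁶⁰Fe half-life of roughly 2.6 million years is a standard value; the measured fraction below is a made-up input, not the study's data):

```python
import math

# Radiometric dating from the surviving fraction of a radioactive isotope:
# N/N0 = (1/2) ** (t / t_half)   =>   t = t_half * log2(N0/N)

FE60_HALF_LIFE_MYR = 2.6  # half-life of iron-60 in millions of years (approximate)

def decay_age_myr(surviving_fraction, half_life_myr=FE60_HALF_LIFE_MYR):
    """Time elapsed, in Myr, given the fraction of the isotope remaining."""
    return half_life_myr * math.log2(1.0 / surviving_fraction)

print(f"{decay_age_myr(0.5):.1f} Myr")   # exactly one half-life: 2.6 Myr
print(f"{decay_age_myr(0.47):.1f} Myr")  # hypothetical measured fraction
```

The real analysis also has to account for how much ⁶⁰Fe was deposited in the first place, which is why such results carry sizable uncertainties.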

6. A nearby supernova could cause a mass extinction

If a supernova occurred close enough, it could be pretty bad news for our planet. Although we’re still not sure about all the ways being in the midst of an exploding star would affect us, we do know that supernovae emit truckloads of high-energy photons such as X-rays and gamma rays. The incoming radiation would strip our atmosphere of its ozone. All of the critters in our food chain from the bottom up would fry in the sun’s ultraviolet rays until there was nothing left on our planet but dirt and bones.

Statistically speaking, a supernova in our own galaxy has been a long time coming.

Supernovae occur in our galaxy at a rate of about one or two per century. Yet we haven’t seen a supernova in the Milky Way in around 400 years. The most recent nearby supernova was observed in 1987, and it wasn’t even in our galaxy. It was in a nearby satellite galaxy called the Large Magellanic Cloud.

But death by supernova probably isn’t something you have to worry about in your lifetime, or your children’s or grandchildren’s or great-great-great-grandchildren’s lifetime. IK Pegasi, the closest candidate we have for a supernova, is 150 light-years away—too far to do any real damage to Earth.

Even that 2.8-million-year-old supernova that ejected its radioactive insides into our oceans was at least 100 light-years from Earth, which was not close enough to cause a mass-extinction. The physicists deemed it a “near miss.”

7. Supernovae light can echo through time

Just as your voice echoes when its sound waves bounce off a surface and come back again, a supernova echoes in space when its light waves bounce off cosmic dust clouds and redirect themselves toward Earth.

Because the echoed light takes a scenic route to our planet, this phenomenon opens a portal to the past, allowing scientists to look at and decode supernovae that occurred hundreds of years ago. A recent example of this is SN1572, or Tycho’s supernova, a supernova that occurred in 1572. This supernova shined brighter than Venus, was visible in daylight and took two years to dim from the sky.

In 2008, astronomers found light waves originating from the cosmic demolition site of the original star. They determined that they were seeing light echoes from Tycho’s supernova. Although the light was 20 billion times fainter than what astronomer Tycho Brahe observed in 1572, scientists were able to analyze its spectrum and classify the supernova as a thermonuclear supernova.

More than four centuries after its explosion, light from this historical supernova is still arriving at Earth.

8. Supernovae were used to discover dark energy

Because thermonuclear supernovae are so bright, and because their light brightens and dims in a predictable way, they can be used as lighthouses for cosmology.

In 1998, scientists thought that cosmic expansion, initiated by the big bang, was likely slowing down over time. But supernova studies suggested that the expansion of the universe was actually speeding up.

Scientists can measure the true brightness of supernovae by looking at the timescale over which they brighten and fade. By comparing how bright these supernovae appear with how bright they actually are, scientists are able to determine how far away they are.
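This "compare apparent with true brightness" step is the classic distance modulus. As a sketch (the example magnitudes are illustrative, not survey data; the peak absolute magnitude of a thermonuclear supernova is roughly -19.3):

```python
# Standard-candle distance from the distance modulus:
# m - M = 5 * log10(d / 10 pc)   =>   d = 10 ** ((m - M + 5) / 5) parsecs

def candle_distance_pc(apparent_mag, absolute_mag):
    """Distance in parsecs from apparent (m) and absolute (M) magnitude."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# Illustrative: a thermonuclear (type Ia) supernova peaks near M = -19.3;
# observing it at apparent magnitude 15.7 then implies
d_pc = candle_distance_pc(15.7, -19.3)
print(f"{d_pc / 1e6:.0f} Mpc")  # ~100 Mpc
```

In practice the "true" brightness is first standardized using the brighten-and-fade timescale mentioned above, which is what makes these supernovae usable candles.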

Scientists can also measure the increase in the wavelength of a supernova’s light as it moves farther and farther away from us. This is called the redshift.

Comparing the redshift with the distances of supernovae allowed scientists to infer how the rate of expansion has changed over the history of the universe. Scientists believe that the culprit for this cosmic acceleration is something called dark energy.
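Redshift itself is just a fractional stretch in wavelength. A minimal illustration (the observed wavelength below is a made-up number; 656.3 nm is the rest wavelength of the hydrogen H-alpha line):

```python
# Redshift: fractional increase of observed over emitted wavelength.
# z = (lambda_obs - lambda_emit) / lambda_emit

def redshift(lambda_obs_nm, lambda_emit_nm):
    return (lambda_obs_nm - lambda_emit_nm) / lambda_emit_nm

# Hypothetical example: a line emitted at 656.3 nm observed at 721.9 nm
z = redshift(721.9, 656.3)
print(f"z = {z:.2f}")  # z = 0.10
```

Pairing each supernova's redshift with its distance-modulus distance is what traces out the expansion history.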

9. Supernovae occur at a rate of approximately 10 per second

By the time you reach the end of this sentence, it is likely a star will have exploded somewhere in the universe.

As scientists evolve better techniques to explore space, the number of supernovae they discover increases. Currently they find over a thousand supernovae per year.

But when you look deep into the night sky at bright lights shining from billions of light-years away, you’re actually looking into the past. The supernovae that scientists are detecting stretch back to the very beginning of the universe. By adding up all of the supernovae they’ve observed, scientists can figure out the rate at which supernovae occur across the entire universe.

Scientists estimate about 10 supernovae occur per second, exploding in space like popcorn in the microwave.

10. We’re about to get much better at detecting far-away supernovae

Even though we’ve been aware of these exploding stars for millennia, there’s still so much we don’t know about them. There are two known types of supernovae, but there are many different varieties that scientists are still learning about.

Supernovae could result from the merger of two white dwarfs. Alternatively, the rotation of a star could create a black hole that accretes material and launches a jet through the star. Or the density of a star’s core could be so high that it starts creating electron-positron pairs, causing a chain reaction in the star.

Right now, scientists are mapping the night sky with the Dark Energy Survey, or DES. Scientists can discover new supernova explosions by looking for changes in the images they take over time.

Another survey currently going on is the All-Sky Automated Survey for Supernovae, or the ASAS-SN, which recently observed the most luminous supernova ever discovered.

In 2019, the Large Synoptic Survey Telescope, or LSST, will revolutionize our understanding of supernovae. LSST is designed to collect more light and peer deeper into space than ever before. It will move rapidly across the sky and take more images in larger chunks than previous surveys. This will increase the number of supernovae we see by hundreds of thousands per year.

Studying these astral bombs will expand our knowledge of space and bring us even closer to understanding not just our origin, but the cosmic reach of the universe.


Like what you see? Sign up for a free subscription to symmetry!


by Ali Sundermier at August 25, 2015 01:00 PM

Peter Coles - In the Dark

When scientists help to sell pseudoscience: The many worlds of woo


Since Professor Moriarty has mentioned me by name (not once, but twice) in his latest post, the least I can do is reblog it. In fact I agree wholeheartedly with his demolition of the pathological industry that has grown up around “Quantum Woo”…

Originally posted on Symptoms Of The Universe:

…or, as Peter Coles suggested, The Empirical Strikes Back

Until a couple of weeks ago, I was blissfully unaware that there was a secret out there that had the potential to change my life forever. I could do anything, be anything, get anything I so desired… if I only knew The Secret. Despite my hitherto abject ignorance, it’s not a particularly well-kept secret: millions know about it — and its universal law of attraction guiding ‘principle’ — largely due to Oprah Winfrey’s glowing and gushing endorsement:

Nor is The Secret anything new. The film which first gave it away was released nearly a decade ago. Like the best memes, however, its rate of infection continues to grow. Googling “The Law of Attraction” gives millions upon millions of hits, and counting.

I found out about The Secret via Tim Brownson and Olivier Larvor, both mentioned in my previous post…


by telescoper at August 25, 2015 12:38 PM

arXiv blog

The 20 Most Infamous Cyberattacks of the 21st Century (Part I)

Cyberattacks are on the increase, and one cybersecurity researcher is on a mission to document them all.

August 25, 2015 04:15 AM

Emily Lakdawalla - The Planetary Society Blog

Galileo's best pictures of Jupiter's ringmoons
People often ask me to produce one of my scale-comparison montages featuring the small moons of the outer solar system. I'd love to do that, but Galileo's best images of Jupiter's ringmoons lack detail compared to Cassini's images from Saturn.

August 25, 2015 12:07 AM

August 24, 2015

Peter Coles - In the Dark

Early Autumn?

This seems a bit strange. I was on campus yesterday (23rd August) and noticed that the leaves are already falling from the trees:

Early Autumn

Has Autumn come early to Sussex this year? Or is this normal? Anyone noticed anything like this elsewhere?

by telescoper at August 24, 2015 02:36 PM

Tommaso Dorigo - Scientificblogging

New Frontiers In Physics: The 2015 Conference In Kolimbari
Nowadays Physics is a very big chunk of science, and although in our University courses we try to give our students a basic knowledge of all of it, it has become increasingly clear that it is very hard to keep up to date with the developments in such diverse sub-fields as quantum optics, material science, particle physics, astrophysics, quantum field theory, statistical physics, thermodynamics, etcetera.

Simply put, there is not enough time within the average lifetime of a human being to read and learn about everything that is being studied in the dozens of different disciplines that form what one may generically call "Physics".

read more

by Tommaso Dorigo at August 24, 2015 01:21 PM

Clifford V. Johnson - Asymptotia


Beetlemania…

Fig beetles.


(Slightly blurred due to it being windy and a telephoto shot with a light handheld point-and-shoot...)

-cvj Click to continue reading this post

The post Beetlemania… appeared first on Asymptotia.

by Clifford at August 24, 2015 10:20 AM

astrobites - astro-ph reader's digest

An Explosive Signature of Galaxy Collisions

Gamma ray bursts (GRBs) are among the most dramatically explosive events in the universe. They’re often dubbed the largest explosions since the Big Bang (it’s pretty hard to quantify how big the Big Bang was, but suffice it to say it was quite large). There are two classes of GRBs: long-duration and short-duration. Long-duration GRBs (which interest us today) are caused when extremely massive stars go bust.

Fig 1. – Long-duration GRBs are thought to form during the deaths of the most massive stars. As the stars run out of fuel (left to right) they start fusing heavier elements together until reaching iron (Fe). Iron doesn’t fuse, and the star can collapse into a black hole. As the material is sucked into the black hole, a powerful jet can burst out into the universe (bottom left), which we would observe as a GRB.

The most massive stars burn through their fuel much faster, and die out much more quickly than smaller stars. Therefore, long-duration GRBs should only be seen in galaxies with a lot of recent star formation. All the massive stars will have already died in a galaxy which isn’t forming new stars. Lots of detailed observations have been required to confirm this connection between GRBs and their host galaxies. It’s, in fact, one of the main pieces of evidence for the massive-star explanation.

The authors of today’s paper studied the host galaxy of a long-duration GRB with an additional goal in mind. Rather than just show that this galaxy is forming lots of stars, they wanted to look at its gas to explain why it’s forming so many stars. So, they went looking for neutral hydrogen gas in the galaxy. Neutral gas is a galaxy’s fuel for forming new stars. Understanding the properties of the gas should tell us about the way in which the galaxy is forming stars.

Hot, ionized hydrogen is easy to observe, because it emits a lot of light in the UV and optical ranges. This ionized hydrogen is found right around young, star-forming regions, and so has been seen in GRB hosts before. But the cold, neutral hydrogen – which makes up most of a galaxy’s gas – is much harder to observe directly. It doesn’t emit much light on its own, but one of the main places it does emit is in the radio band: the 21-cm line. For more information on the physics involved, see this astrobite page, but suffice it to say that pretty much all neutral hydrogen emits weakly at 21 cm.

This signal is weak enough that it hasn’t been detected in the more distant GRB hosts. Today’s authors observed the host galaxy of the closest-yet-observed GRB (980425), which is only 100 million light-years away: about 50 times farther away than the Andromeda galaxy. This is practically just next-door, compared to most GRBs. This close proximity allowed them to make the first ever detection of 21-cm hydrogen emission from a GRB host galaxy.


Fig. 2 – The radio map (contours) of the neutral hydrogen gas from 21-cm radio observations. The densest portions of the disk align with the location of the GRB explosion (red arrow) and a currently-ongoing burst of star formation (blue arrow). Fig 2 from Arabsalmani et al. 2015.

Using powerful radio observations – primarily from the Giant Metrewave Radio Telescope – the authors made maps of hydrogen 21-cm emission across the galaxy. They found a large disk of neutral gas, which was thickest in the region around where the GRB went off. Denser gas leads to more ongoing star formation, which as we know can mean that very massive stars may still be around to become GRBs.

The most important finding, however, was that the gas disk had been disturbed: more than 21% of the gas wasn’t aligned with the disk. This disturbance most likely came from a merger with a smaller galaxy that stirred up the disk as it passed by. The authors argue that this merger could have helped get the star formation going. By shock-compressing the gas, the disturbance would have kick-started the galaxy into forming stars and, eventually, resulted in the GRB.

This paper is quite impressive, as it shows that astronomers are probing farther into the link between GRBs and their host galaxies. Astronomers have known for a while that GRBs are sign-posts to galaxies which are forming lots of stars. But today’s paper used radio observations of the gas to connect that star formation to a recent merger. Most GRB hosts are much farther away, and similar observations will be difficult. But with more sensitive observatories – like ALMA or the VLA – it may be possible to see whether the gas of more GRB hosts show evidence of mergers. Perhaps GRBs are telling us even more about their galaxies than we had thought before!

by Ben Cook at August 24, 2015 04:44 AM

August 23, 2015

Clifford V. Johnson - Asymptotia

Red and Round…


Some more good results from the garden, after I thought that the whole crop was again going to be prematurely doomed, like last year. I tried to photograph the other thing about this year's gardening narrative that I intend to tell you about, but with poor results; I'll say more shortly. In the meantime, for the record, here are some Carmello tomatoes and some of a type of Russian Black [...] Click to continue reading this post

The post Red and Round… appeared first on Asymptotia.

by Clifford at August 23, 2015 12:10 AM

August 21, 2015

ZapperZ - Physics and Physicists

Quantum Teleportation Versus Star Trek's "Transporter".
Chad Orzel has an article on Forbes explaining a bit more about what quantum teleportation is, and how it differs from the transporters in Star Trek. You might think this is rather well known, since it has been covered many times, even on this blog. But ignorance of what quantum teleportation is still pops up frequently, and I see people on public forums who think that we can transport objects from one location to another because "quantum teleportation" has been verified.

So, if you are still cloudy on this topic, you might want to read that article.


by ZapperZ at August 21, 2015 12:48 PM

arXiv blog

How Astronomers Could Observe Light Sails Around Other Stars

Light sails are a promising way of exploring star systems. If other civilizations use them, these sails should be visible from Earth, say astrophysicists.

August 21, 2015 04:38 AM

August 20, 2015

astrobites - astro-ph reader's digest

Magnetars: The Perpetrators of (Nearly) Everything

Fig 1:  Artist's conception of a magnetar with strong magnetic field lines. [From Wikipedia Commons]

Astronomers who study cosmic explosions have a running joke: anything too wild to explain with standard models is probably a magnetar. These scapegoats are neutron stars with extremely powerful magnetic fields, like the one shown to the right.

Super-luminous supernovae? Probably a magnetar collapsing. Short, weak gamma ray bursts? Why not magnetar flares. Ultra-long gamma ray bursts? Gotta be magnetars. Magnetars are a popular model due to their natural versatility. In today’s paper, the authors tie together several magnetar theories into a cohesive theoretical explanation of two types of transients, or short cosmic events: super-luminous supernovae (SLSNe) and ultra-long gamma ray bursts (ULGRBs).

The Super-Ultra Transients

Super-luminous supernovae, as their name suggests, are extreme stellar deaths which are about 100 times brighter than normal core-collapse supernovae. The brightest SLSN, ASASSN-15lh (pronounced “Assassin 15lh”), is especially troubling for scientists because it lies well above the previously predicted energy limits of a magnetar model*. The other new-kids-on-the-transient-block are ultra-long gamma ray bursts, bursts of gamma-ray energy which last a few thousand seconds. The other popular variety of gamma ray bursts associated with SNe are plain-old “long gamma ray bursts”, which last less than 100 seconds and are often accompanied by a core-collapse supernova. Both long gamma ray bursts and ULGRBs are currently predicted in magnetar models. The question is: can we tie these two extreme events, SLSNe and ULGRBs, together in a cohesive theoretical framework?

The authors say yes! The basic theoretical idea proposed is that a very massive star will begin to collapse like a standard core-collapse supernova. The implosion briefly leaves behind a twirling pulsar whose angular momentum is saving it from collapsing into a black hole. Material is flung from its spinning surface, especially along its magnetic poles. From these poles, the collimated material is seen as high-energy jets, like you can see in this video of Vela. Eventually, the magnetar slows down and finally collapses into a black hole.

Fig 2: The connection between ULGRBs and SLSNe. Along the x-axis is the initial spin period of the magnetar. On the y-axis is the magnetic field of the magnetar. The red-shaded region shows where SLSNe are possible, and the blue-shaded region shows where GRBs are possible. The red and green points are observed SLSNe.

Connecting the Dots

We can explain the consequences of the model using the image shown above. In the upper left quadrant, the magnetars spin very quickly (i.e. short spin periods) and have large magnetic fields. In this scenario, the escaping magnetar jets are extremely powerful and collimated, and the magnetar will spin down and collapse into a black hole after a few minutes. This scenario describes the typical long gamma ray bursts that we often see.

Now if we move down and right on the figure, our initial magnetic field weakens and the period of the magnetar grows. In this case, the expected jet from the magnetar will weaken, but it will last longer, as the magnetar takes a longer time to slow down its life-preserving spin. If the jet is able to blast its way out of the magnetar and is directed towards us, we will see it as a ULGRB, with a lifetime of about half an hour!

One of the most exciting features of the plot is the solid black lines that show where the supernova luminosity is maximized. At these points, the luminosity of the supernova is enhanced by the magnetar, leading to super-luminous supernovae. These lines are in great agreement with three notable, luminous SNe. It’s especially exciting that the black contours overlap with the region where ultra-long GRBs are produced. In other words, the authors predict that it is possible for a super-luminous supernova to be associated with an ultra-long gamma ray burst, tying together these extreme phenomena.

What’s Next?

One of the best tests of this theory will come from observations of ASASSN-15lh over the next several months. Ionizing photons from a magnetar model are predicted to escape from the supernova’s ejected material in the form of X-rays. Observations of these X-ray photons could be a smoking gun for a magnetar model of super-luminous supernovae, so stay tuned!

*Note: At the time of this bite, ASASSN-15lh is not a confirmed supernova. It may be another exciting type of transient known as a tidal disruption event, which you can read all about here.

by Ashley Villar at August 20, 2015 02:11 PM

Symmetrybreaking - Fermilab/SLAC

Q&A: Marcelle Soares-Santos

Scientist Marcelle Soares-Santos talks about Brazil, neutron stars and a love of discovery.

Marcelle Soares-Santos has been exploring the cosmos since she was an undergraduate at the Federal University of Espirito Santo in southeast Brazil. She received her PhD from the University of São Paulo and is currently an astrophysicist on the Dark Energy Survey based at Fermi National Accelerator Laboratory outside Chicago.

Soares-Santos has worked at Fermilab for only five years, but she has already made a significant impact: In 2014, she was bestowed the Alvin Tollestrup Award for postdoctoral research. Now she is embarking on a new study to measure gravitational waves from neutron star collisions.


S: You recently attended the LISHEP conference, a high-energy physics conference held annually in Brazil. This year it was held in the region of Manaus, near your childhood home. What was it like to grow up there?

MS: Manaus is very different from the region that I think most foreigners know, Rio or São Paulo, but it’s very beautiful, very interesting. When I was four, my father worked for a mining company, and they found a huge reserve of iron ore in the middle of the Amazon forest. All over Brazil, people got offers from that company to get some extra benefits, which was very good for us because one of the benefits was a chance to go to a good school there.


S: When did you get interested in physics?

MS: That was very early on, when I was a little kid. I didn’t know that it was physics I wanted to do, but I knew I wanted to do science. I tend to say that I lacked any other talents. I could not play any sport, I wasn’t good in the arts. But math and science, that was something I was good at.

These days I look back and feel that, had I known what I know today, I might not have had this confidence, because I understand now how lots of people are not encouraged to view physics as a topic they can handle. But back then I had a little bit of blind faith in the school system.


S: You work on the Dark Energy Survey. When did the interest in astrophysics kick in?

MS: I did an undergraduate research project. In Brazil, there is a program of research initiation where undergraduates can work for an entire year on a particular topic. My supervisor’s research was related to dark energy and gravitational waves. It’s interesting, because today I work on those two topics from a completely different perspective.


S: You’re also starting on a new project to study gravitational waves. What’s that about?

MS: For the first time we are building detectors that will be able to detect gravitational waves, not from cosmological sources, but from colliding neutron stars. These events are very rare, but we know they occur, and we can calculate how much gravitational wave emission there will be. The detectors are now reaching the sensitivity needed to see that. There’s LIGO in the United States and Virgo in Europe.

Relying solely on gravitational waves, it’s possible only to roughly localize in the sky where the star collision happens. But we also have the Dark Energy Camera, so we can use it to find the optical counterpart—lots and lots of photons—and pinpoint the event picked up by the gravitational wave detector.

If we see the collision, we will be the first ones to see it based on a gravitational wave signal. That will be really cool.


S: How did the project get started? What is it called?

MS: I saw an announcement that LIGO was going to start operating this year, and I thought, “DECam would be great for this.” I talked to Jim Annis [at Fermilab] and said, “Look, look at this. It would be cool.” And he said, “Yeah, it would.”

It’s called the DES-GW project. It will start up in September. Groups from Fermilab, the University of Chicago, University of Pennsylvania and Harvard are participating.


S: What’s your favorite thing about what you do?

MS: Building these crazy ideas to become a reality. That’s the fun part of it. Of course, it’s not always possible, and we have more ideas than we can actually realize, but if you get to do one, it’s really cool. Part of the reason I moved from theory [as a graduate student] to experiment is that I wanted to do something where you actually get to close the loop of answering a question.


S: Has anything about being a scientist surprised you?

MS: In the beginning I thought I’d never be the person doing hands-on work on a detector. I thought of myself more as someone who would be sitting in front of a computer. And it’s true that I spend most of my time sitting in front of the computer, but I also get a chance to go to Chile [where the Dark Energy Camera is located] and take data, be at the lab and get my hands dirty. Back then I thought that was more the role of an engineer than a scientist. I learned it doesn’t matter the label. It is a part of the job, and it’s a fun part.


S: In June 2014 Fermilab posted a Facebook post about you winning the Alvin Tollestrup Award. It received by far more likes than any Fermilab post up to that point, and most were pouring in from Brazil. What was behind its popularity?

MS: That was surprising for me. Typically whenever there is something on Facebook related to what I do, my parents will be excited about it and repost, so I get a few likes and reposts from relatives and friends. This one, I don’t know what happened. I think in part there was a little bit of pride, people seeing a Brazilian being successful abroad.

I got lots of friend requests from people I’ve never met before. I got questions from high schoolers about physics and how to pursue a physics education. It’s a big responsibility to say something. What do you say to people? I tried to answer reasonably and tell them my experience. It was my 15 minutes of fame in social media.


Like what you see? Sign up for a free subscription to symmetry!


by Leah Hesla at August 20, 2015 01:00 PM

Axel Maas - Looking Inside the Standard Model

Looking for one thing in another
One of the most powerful methods to learn about particle physics is to smash two particles against each other at high speed. This is what is currently done at the LHC, where two protons are used. Protons have the advantage that they are simple to handle on an engineering level, but since they are made up out of quarks and gluons, these collisions are rather messy.

An alternative is colliders using electrons and positrons, and many of these have been used successfully in the past. The reason is that electrons and positrons appear, at first sight, to be elementary, so the collisions are much cleaner, though they are technically harder to handle. Nonetheless, there are right now two large projects planned to use them, one in Japan and one in China. The decision is still out whether either, or both, will be built, but they would both open up a new view on particle physics. These would start, hopefully, in the (late) 2020s or early 2030s.

However, they may be a little messier than currently expected. I have written previously several times about our research on the structure of the Higgs particle, especially that the Higgs may be much less simple than just a single particle. We are currently working on possible consequences of this insight.

What has this to do with the collisions? Well, already 35 years ago people figured out that if the statements about the Higgs are true, then this has further implications. Especially, the electrons as we know them cannot really be elementary particles. Rather, they have to be bound states, a combination of what we usually call an electron and a Higgs.

This is confusing at first sight: an electron should consist of an electron and something else? The reason is that a clear distinction is often not made. One should be more precise, and first think of an elementary 'proper' electron. Together with the Higgs, this proper electron creates a bound state. This bound state is what we perceive and usually call an electron. But it is different from the proper electron, and we should therefore rather call it an 'effective' electron. This naming chaos is not so different from what you would get if you called a hydrogen atom a proton, since the electron is such a small addition to the proton in the atom. That you do not do so has a historical origin, as it has, in the reverse way, for the (effective) electron. Yes, it is confusing.

Even after mastering this confusion, it seems a rather outrageous statement. The Higgs is so much heavier than the electron, how can that be? Well, field theory indeed allows a bound state of two particles to be much, much lighter than the two individual particles. This effect is called a mass defect. It has been measured for atoms and nuclei, but there it is a very small effect. So, it is in principle possible. Still, the enormous size of the effect makes this a very surprising statement.
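For scale, the nuclear mass defect mentioned here can be checked with standard particle masses (values in MeV/c^2); this is a textbook illustration, not part of the author's calculation:

```python
# Deuteron = bound state of a proton and a neutron; masses in MeV/c^2.
m_proton = 938.272
m_neutron = 939.565
m_deuteron = 1875.613

# The bound state is lighter than the sum of its parts: the mass defect.
defect = m_proton + m_neutron - m_deuteron     # about 2.22 MeV
fraction = defect / (m_proton + m_neutron)     # about 0.1 percent
```

For nuclei the defect is about a tenth of a percent; the bound-state picture of the electron would require a defect of nearly 100 percent, which is why the statement is so surprising.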

What we want to do now is to find some way to confirm this picture using experiments. And confirm this extreme mass defect.

Unfortunately, we cannot disassemble the bound state. But the presence of the Higgs can be seen in a different way. When we collide such bound states hard enough, we usually have the situation that we actually collide just one of the constituents from each bound state, rather than the whole thing. In a simple picture, in most cases one of the parts will be in the front of the collision, and will take the larger part of the hit.

This means that sometimes we will collide the conventional part, the proper electron. Then everything looks as usually expected. But sometimes we will do something involving the Higgs from either or both bound states. We can already estimate that anything involving the Higgs will be very rare. In the simple picture above, the Higgs, being much heavier than the proper electron, mostly drags behind the proper electron. But 'rarer' is not quantitative enough in physics. Therefore we now have to do calculations. This is a project I intend to do with a new master student.

We want to do this because we want to predict whether the aforementioned next experiments will be sensitive enough to see that sometimes we actually collide the Higgses. That would confirm the idea of bound states. Then we would indeed find something other than what people were originally looking for. Or expecting.

by Axel Maas at August 20, 2015 07:42 AM

August 19, 2015

ZapperZ - Physics and Physicists

The Apparent Pentaquark Discovery - More Explanation
Recall the report on the apparent observation of a pentaquark made by LHCb a few weeks back. Fermilab's Don Lincoln has a video that explains a bit of what a quark is, what a pentaquark is, and how physicists will proceed in verifying this.


by ZapperZ at August 19, 2015 03:29 PM

ZapperZ - Physics and Physicists

The Physics Of Air Conditioners
Ah, the convenience of having air conditioning. How many of us have thanked the technology that gives us so much comfort on a hot, muggy day?

This CNET article covers the basic physics of air conditioners. Any undergraduate student who has taken an intro physics course should know the basic physics of this device from studying thermodynamics and the Carnot cycle. It is essentially a heat pump, in which heat is transferred from a cooler reservoir to a warmer reservoir.
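As a quick illustration (not from the CNET article), the ideal Carnot limit on cooling performance depends only on the two reservoir temperatures; the room and outdoor temperatures below are illustrative:

```python
# Carnot coefficient of performance for cooling:
# COP = T_cold / (T_hot - T_cold), with temperatures in kelvin.
def carnot_cop_cooling(t_cold_k, t_hot_k):
    return t_cold_k / (t_hot_k - t_cold_k)

# A 22 C room on a 35 C day: the ideal limit is around 23.
# Real air conditioners achieve only a fraction of this.
cop = carnot_cop_cooling(295.0, 308.0)
```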

But, if you have forgotten about this, or if you are not aware of the physics behind that thing that gives you such comfort, then you might want to read it.


by ZapperZ at August 19, 2015 03:02 PM

August 18, 2015

Clifford V. Johnson - Asymptotia



Mid-Conversation…

Been a while since I shared a snippet from the graphic book in progress. And this time the dialogue is not redacted! A few remarks: [...] Click to continue reading this post

The post Mid-Conversation… appeared first on Asymptotia.

by Clifford at August 18, 2015 08:44 PM

arXiv blog

Physicists Unveil First Quantum Interconnect

An international team of physicists has found a way to connect quantum devices in a way that transports entanglement between them.

August 18, 2015 06:07 PM

Symmetrybreaking - Fermilab/SLAC

The age of the universe

How can we figure out when the universe began?

Looking out from our planet at the vast array of stars, humans have always asked questions central to our origin: How did all of this come to be? Has it always existed? If not, how and when did it begin?

How can we determine the history of something so complex when we were not around to witness its birth?

Scientists have used several methods: checking the age of the oldest objects in the universe, determining the expansion rate of the universe to trace backward in time, and using measurements of the cosmic microwave background to figure out the initial conditions of the universe and its evolution.

Hubble and an expanding universe

In the early 1900s, there was no such concept of the age of the universe, says Stanford University associate professor Chao-Lin Kuo of SLAC National Accelerator Laboratory. “Philosophers and physicists thought the universe had no beginning and no end.”

Then in the 1920s, mathematician Alexander Friedmann predicted an expanding universe. Edwin Hubble confirmed this when he discovered that many galaxies were moving away from our own at high speeds. Hubble measured several of these galaxies and in 1929 published a paper stating the universe is getting bigger.

Scientists then realized that they could wind this expansion back in time to a point when it all began. “So it was not until Friedmann and Hubble that the concept of a birth of the universe started,” Kuo says.

Tracing the expansion of the universe back in time is called finding its “dynamical age,” says Nobel Laureate Adam Riess, professor of astronomy and physics at Johns Hopkins University.

“We know the universe is expanding, and we think we understand the expansion history,” he says. “So like a movie, you can run it backwards until everything is on top of everything in the big bang.”

The expansion rate of the universe is known as the Hubble constant.
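Running the movie backwards at a constant rate gives a first estimate of the age, the Hubble time, which is roughly 1/H0. A sketch using the commonly quoted H0 of about 70 km/s/Mpc:

```python
# Hubble time: 1/H0, converted from (km/s/Mpc)^-1 to years.
KM_PER_MPC = 3.086e19
SECONDS_PER_YEAR = 3.156e7

def hubble_time_gyr(h0_km_s_mpc):
    seconds = KM_PER_MPC / h0_km_s_mpc        # 1/H0 in seconds
    return seconds / SECONDS_PER_YEAR / 1e9   # in billions of years

age = hubble_time_gyr(70.0)   # about 14 billion years
```

The true age differs from 1/H0 because the expansion rate has changed over time, which is exactly the puzzle the rest of the article describes.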

The Hubble puzzle

The Hubble constant has not been easy to measure, and the number has changed several times since the 1930s, Kuo says.

One way to check the Hubble constant is to compare its prediction for the age of the universe with the age of the oldest objects we can see. At the very least, the universe should be older than the objects it contains.

Scientists can estimate the age of very old stars that have burned out—called white dwarfs—by determining how long they have been cooling. Scientists can also estimate the age of globular clusters, large clusters of old stars that formed at roughly the same time.

They have estimated the oldest objects to be between 12 billion and 13 billion years old.

In the 1990s, scientists were puzzled when they found that their estimate of the age of the universe—based on their measurement of the Hubble constant—was several billion years younger than the age of these oldest stars.

However, in 1998, Riess and colleagues Saul Perlmutter of Lawrence Berkeley National Laboratory and Brian Schmidt of the Australian National University found the root of the problem: The universe wasn’t expanding at a steady rate. It was accelerating.

They figured this out by observing a type of supernova, the explosion of a star at the end of its life. Type 1a supernovae explode with uniform brightness, and light travels at a constant speed. By observing several different Type 1a supernovae, the scientists were able to calculate their distance from the Earth and how long the light took to get here.

“Supernovae are used to determine how fast the universe is expanding around us,” Riess says. “And by looking at very distant supernovae that exploded in the past and whose light has taken a long time to reach us, we can also see how the expansion rate has recently been changing.”

Using this method, scientists have estimated the age of the universe to be around 13.3 billion years.

Recipe for the universe

Another way to estimate the age of the universe is by using the cosmic microwave background, radiation left over from just after the big bang that extends in every direction.

“The CMB tells you the initial conditions and the recipe of the early universe—what kinds of stuff it had in it,” Riess says. “And if we understand that well enough, in principle, we can predict how fast the universe made that stuff with those initial conditions and how the universe would expand at different points in the future.”

Using NASA’s Wilkinson Microwave Anisotropy Probe, scientists created a detailed map of the minute temperature fluctuations in the CMB. They then compared the fluctuation pattern with the CMB patterns predicted by different theoretical models of the universe. In 2003 they found a match.

“Using these comparisons, we have been able to figure out the shape of the universe, the density of the universe and its components,” Kuo says. WMAP found that ordinary matter makes up about 4 percent of the universe; dark matter is about 23 percent; and the remaining 73 percent is dark energy. Using the WMAP data, scientists estimated the age of the universe to be 13.772 billion years, plus or minus 59 million years.

In 2013, the European Space Agency’s Planck space telescope created an even more detailed map of the CMB temperature fluctuations and estimated the universe to be 13.82 billion years old, plus or minus 50 million years—slightly older than WMAP’s estimate. Planck also made more detailed measurements of the components of the universe and found slightly less dark energy (around 68 percent) and slightly more dark matter (around 27 percent).

New puzzles

Even with these extremely precise measurements, scientists still have puzzles to solve. The measured current expansion rate of the universe tends to be about 5 percent higher than what is predicted from the CMB, and scientists are not sure why, Riess says.

“It could be a sign that we do not totally understand the physics of the universe, or it could be an error in either of the two measurements,” Riess says.

“It is a sign of tremendous progress in cosmology that we get upset and worried about a 5 percent difference, whereas 15 or 20 years ago, measurements of the expansion rate could differ by a factor of two.”

There is also much left to understand about dark matter and dark energy, which appear to make up about 95 percent of the universe. “Our best chance to understand the nature of these unknown dark components is by making these kinds of precise measurements and looking for small disagreements or a loose thread that we can pull on to see if the sweater unravels.”



by Amelia Williamson Smith at August 18, 2015 02:39 PM

Quantum Diaries

Nuclear physics accessible to all

Grégoire Besse, a CNRS PhD student in theoretical nuclear physics, tells us about his interest in science outreach.

It was when I started my thesis that I realized the inescapable link between research and outreach. I therefore gradually chose to take up this practice in order to explain my research and make it more "accessible" to the layperson. My thesis is in theoretical nuclear physics and is entitled "Theoretical description of nuclear dynamics in heavy-ion collisions and its astrophysical implications". It is carried out at the Subatech laboratory in Nantes. I work on the dynamical description of a nuclear system, that is, nuclei in collision or in a lattice. To this end, my research group has developed a simulation code named Dynamical Wavelets in Nuclei (DYWAN). The code is already operational but is still being optimized.

Example of a low-energy collision between two nuclei. The nuclei deform under the effect of the nuclear force and stick together, eventually fusing.

Nuclear physics studies nuclei and the behaviour of the nuclear force. The nuclear force, or residual strong interaction, is the effect of the strong interaction (quarks and gluons) at the nuclear scale; it is the nucleon-nucleon interaction. Far less publicized than high-energy physics (that of the LHC and the Higgs boson), nuclear physics nevertheless remains an essential link in our understanding of matter. Moreover, its applications are immediate, for example radioactivity, fission, fusion and the production of radioisotopes.

My passion in the service of my work

Overview of the 3D environment in OpenGL. It can be explored like a video game with keyboard and mouse. The nuclei (blue-red and cyan-pink), already overlapping, are represented by mathematical objects: coherent states (the balls with point clouds).

The goal of my thesis is to provide a powerful simulation code capable of reproducing experimentally observed data and behaviour, and then of predicting reactions. We focus on heavy-ion collisions, which make it possible to produce very exotic nuclear systems such as very neutron-rich matter. Other research groups at the laboratory focus instead on radioactivity, lifetimes and the behaviour of isolated nuclei. This reminds me of Albert Einstein's metaphor: to understand how a watch works without opening it, you have two options: observation (listening, looking, taking data and forming hypotheses) or experimentation (you throw the watch against a wall, look at the pieces that come out and try to put everything back together). We use the second method.

Alongside my thesis, I am trying to develop software combining research and new technologies (so far, a 3D environment that can be explored with keyboard and mouse). I am very interested in virtual and augmented reality: I believe these tools will enable new approaches in research, a new point of view for a new theory. And it has already proved its worth: we tracked down errors in DYWAN thanks to my software!

The blue bird, a friend of research

I joined Twitter only recently, but I quickly understood that this social network is a wonderful tool for research. Research is an active, constantly evolving world, so it seems only natural to keep up with its advances, since that is normally part of our job. Moreover, Twitter offers a quick overview (under 140 characters) of the important news.

I discovered the @EnDirectDuLabo account by chance: every week, a scientist takes over the account to share their daily life with its followers. With a potential audience of more than 2,000 people, the experience can be intimidating. But in the end, when my turn came, everything went well and I had exchanges with a varied audience: researchers, PhD students, journalists, community managers, amateurs and other curious minds.

In the end, this experience helped me get a better grip on my thesis topic. Moreover, these "relationships" are very enriching day to day: a photo, a sentence, an article, a blog... long live curiosity and sharing 2.0!

by CNRS-IN2P3 at August 18, 2015 02:39 PM

CERN Bulletin

CERN Bulletin Issue No. 34-36/2015
Link to e-Bulletin Issue No. 34-36/2015
Link to all articles in this issue

August 18, 2015 01:09 PM

Jester - Resonaances

Weekend Plot: Inflation'15
The Planck collaboration is releasing new publications based on their full dataset, including CMB temperature and large-scale polarization data. The updated values of the crucial cosmological parameters were already made public in December last year; however, one important new element is the combination of these results with the joint Planck/Bicep constraints on the CMB B-mode polarization. The consequences for models of inflation are summarized in this plot:

It shows the constraints on the spectral index ns and the tensor-to-scalar ratio r of the CMB fluctuations, compared to predictions of various single-field models of inflation. The limits on ns changed slightly compared to the previous release, but the more important progress is along the y-axis. After including the joint Planck/Bicep analysis (referred to in the plot as BKP), the combined limit on the tensor-to-scalar ratio becomes r < 0.08. Equally important, the new limit is much more robust; for example, allowing for a scale dependence of the spectral index relaxes the bound only slightly, to r < 0.10.

The new results have a large impact on certain classes of models. The model with the quadratic inflaton potential, arguably the simplest model of inflation, is now strongly disfavored. Natural inflation, where the inflaton is a pseudo-Goldstone boson with a cosine potential, is in trouble. More generally, the data now favor a concave shape of the inflaton potential during the observable period of inflation; that is to say, it looks more like a hilltop than a half-pipe. A strong player emerging from this competition is R^2 inflation which, ironically, is the first model of inflation ever written. That model is equivalent to an exponential shape of the inflaton potential, V=c[1-exp(-a φ/MPl)]^2, with a=sqrt(2/3) in the exponent. A wider range of the exponent a can also fit the data, as long as a is not too small. If your favorite theory predicts an exponential potential of this form, it may be a good time to work on it. However, one should not forget that other shapes of the potential are still allowed, for example a similar exponential potential without the square V ~ 1-exp(-a φ/MPl), a linear potential V ~ φ, or more generally any power-law potential V ~ φ^n with the power n ≲ 1. At this point, the data do not significantly favor one over the others. The next waves of CMB polarization experiments should clarify the picture. In particular, R^2 inflation predicts 0.003 < r < 0.005, which should be testable in a not-so-distant future.

Planck's inflation paper is here.

by Jester at August 18, 2015 12:20 PM

Jester - Resonaances

How long until it's interesting?
Last night, for the first time, the LHC collided particles at the center-of-mass energy of 13 TeV. Routine collisions should follow early in June. The plan is to collect 5-10 inverse femtobarns (fb-1) of data before winter comes, adding to the 25 fb-1 from Run-1. It's high time to dust off your Madgraph and tool up for what may be the most exciting time in particle physics in this century. But when exactly should we start getting excited? When should we start friending LHC experimentalists on Facebook? When is the time to look over their shoulders for a glimpse of gluinos popping out of the detectors? One simple way to estimate the answer is to calculate the luminosity at which the number of particles produced at 13 TeV will exceed the number produced during the whole of Run-1. This depends on the ratio of the production cross sections at 13 and 8 TeV, which is of course strongly dependent on the particle's mass and production mechanism. Moreover, the LHC discovery potential will also depend on how the background processes change, and on a host of other experimental issues. Nevertheless, let us forget for a moment about the fine print, and calculate the ratio of 13 and 8 TeV cross sections for a few particles popular among the general public. This will give us a rough estimate of the threshold luminosity at which things should get interesting.

  • Higgs boson: Ratio≈2.3; Luminosity≈10 fb-1.
    Higgs physics will not be terribly exciting this year, with only a modest improvement in the coupling measurements expected. 
  • tth: Ratio≈4; Luminosity≈6 fb-1.
    Nevertheless, for certain processes involving the Higgs boson the improvement may be a bit faster. In particular, the theoretically very important process of Higgs production in association with top quarks (tth) was on the verge of being detected in Run-1. If we're lucky, this year's data may tip the scale and provide evidence for a non-zero top Yukawa coupling. 
  • 300 GeV Higgs partner: Ratio≈2.7; Luminosity≈9 fb-1.
    Not much hope for new scalars in the Higgs family this year.  
  • 800 GeV stops: Ratio≈10; Luminosity≈2 fb-1.
    800 GeV is close to the current lower limit on the mass of a scalar top partner decaying to a top quark and a massless neutralino. In this case, one should remember that backgrounds also increase at 13 TeV, so the progress will be a bit slower than what the above number suggests. Nevertheless,  this year we will certainly explore new parameter space and make the naturalness problem even more severe. Similar conclusions hold for a fermionic top partner. 
  • 3 TeV Z' boson: Ratio≈18; Luminosity≈1.2 fb-1.
    Getting interesting! Limits on Z' bosons decaying to leptons will be improved very soon; moreover, in this case background is not an issue.  
  • 1.4 TeV gluino: Ratio≈30; Luminosity≈0.7 fb-1.
    If all goes well, better limits on gluinos can be delivered by the end of the summer! 

In summary, the progress will be very fast for new heavy particles. In particular, for gluon-initiated production of TeV-scale particles  already the first inverse femtobarn may bring us into a new territory. For lighter particles the progress will be slower, especially when backgrounds are difficult.  On the other hand, precision physics, such as the Higgs couplings measurements, is unlikely to be in the spotlight this year.
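The estimate behind the list above amounts to one line of arithmetic: divide Run-1's 25 fb-1 by the 13-to-8 TeV cross-section ratio. A minimal sketch (small differences from the quoted numbers come from rounding):

```python
RUN1_LUMI_FB = 25.0  # inverse femtobarns collected in Run-1

def threshold_luminosity(xsec_ratio_13_to_8):
    """13 TeV luminosity yielding as many signal events as all of Run-1."""
    return RUN1_LUMI_FB / xsec_ratio_13_to_8

# Cross-section ratios quoted in the post:
for name, ratio in [("Higgs boson", 2.3), ("3 TeV Z'", 18), ("1.4 TeV gluino", 30)]:
    print(f"{name}: {threshold_luminosity(ratio):.1f} fb^-1")
```

As noted in the text, this ignores how backgrounds change between 8 and 13 TeV, so it is an upper bound on how optimistic one should be.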

by Jester at August 18, 2015 12:20 PM

Jester - Resonaances

Weekend Plot: Higgs mass and SUSY
This weekend's plot shows the region in the stop mass and mixing space of the MSSM that reproduces the measured Higgs boson mass of 125 GeV:

Unlike in the Standard Model, in the minimal supersymmetric extension of the Standard Model (MSSM) the Higgs boson mass is not a free parameter; it can be calculated given all masses and couplings of the supersymmetric particles. At the lowest order, it is equal to the Z boson mass, 91 GeV (for large enough tanβ). To reconcile the predicted and the observed Higgs mass, one needs to invoke large loop corrections due to supersymmetry breaking. These are dominated by the contribution of the top quark and its 2 scalar partners (stops), which couple most strongly of all particles to the Higgs. As can be seen in the plot above, the stop mass preferred by the Higgs mass measurement is around 10 TeV. With a little bit of conspiracy, if the mixing between the two stops is just right, this can be lowered to about 2 TeV. In any case, this means that, as long as the MSSM is the correct theory, there is little chance to discover the stops at the LHC.
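For reference, the dependence described above is captured by the standard tree-level bound plus the leading one-loop stop correction; this is the textbook approximation, not the full SUSYHD calculation:

```latex
m_h^2 \;\simeq\; m_Z^2 \cos^2 2\beta
  \;+\; \frac{3\, m_t^4}{4\pi^2 v^2}
  \left[\ln\frac{M_S^2}{m_t^2}
   + \frac{X_t^2}{M_S^2}\left(1 - \frac{X_t^2}{12\, M_S^2}\right)\right]
```

Here $v \simeq 246$ GeV, $M_S$ is the average stop mass, and $X_t$ is the stop mixing parameter; the bracket is maximized at $X_t = \sqrt{6}\, M_S$, the "maximal mixing" point visible in the plot.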

This conclusion may be surprising because previous calculations were painting a more optimistic picture. The results above are derived with the new SUSYHD code, which utilizes effective field theory techniques to compute the Higgs mass in the presence of  heavy supersymmetric particles. Other frequently used codes, such as FeynHiggs or Suspect, obtain a significantly larger Higgs mass for the same supersymmetric spectrum, especially near the maximal mixing point. The difference can be clearly seen in the plot to the right (called the boobs plot by some experts). Although there is a  debate about the size of the error as estimated by SUSYHD, other effective theory calculations report the same central values.

by Jester at August 18, 2015 12:18 PM

Jester - Resonaances

Weekend plot: dark photon update
Here is a late weekend plot with new limits on the dark photon parameter space:

The dark photon is a hypothetical massive spin-1 boson mixing with the ordinary photon. The minimal model is fully characterized by just 2 parameters: the mass mA' and the mixing angle ε. This scenario is probed by several different experiments using completely different techniques.  It is interesting to observe how quickly the experimental constraints have been improving in the recent years. The latest update appeared a month ago thanks to the NA48 collaboration. NA48/2 was an experiment a decade ago at CERN devoted to studying CP violation in kaons. Kaons can decay to neutral pions, and the latter can be recycled into a nice probe of dark photons.  Most often,  π0 decays to two photons. If the dark photon is lighter than 135 MeV, one of the photons can mix into an on-shell dark photon, which in turn can decay into an electron and a positron. Therefore,  NA48 analyzed the π0 → γ e+ e-  decays in their dataset. Such pion decays occur also in the Standard Model, with an off-shell photon instead of a dark photon in the intermediate state.  However, the presence of the dark photon would produce a peak in the invariant mass spectrum of the e+ e- pair on top of the smooth Standard Model background. Failure to see a significant peak allows one to set limits on the dark photon parameter space, see the dripping blood region in the plot.

So, another cute experiment bites into the dark photon parameter space. After this update, one can robustly conclude that the mixing angle in the minimal model has to be less than 0.001 as long as the dark photon is lighter than 10 GeV. This is by itself not very revealing, because there is no theoretically preferred value of ε or mA'. However, one interesting consequence of the NA48 result is that it closes the window where the minimal model can explain the 3σ excess in the muon anomalous magnetic moment.

by Jester at August 18, 2015 12:18 PM

Jester - Resonaances

On the LHC diboson excess
The ATLAS diboson resonance search showing a 3.4 sigma excess near 2 TeV has stirred some interest. This is understandable: 3 sigma does not grow on trees, and moreover CMS also reported anomalies in related analyses. Therefore it is worth looking at these searches in a bit more detail in order to gauge how excited we should be.

The ATLAS one is actually a dijet search: it focuses on events with two very energetic jets of hadrons. More often than not, W and Z bosons decay to quarks. When a TeV-scale resonance decays to electroweak bosons, the latter, by energy conservation, have to move with large velocities. As a consequence, the 2 quarks from W or Z boson decays will be very collimated and will be seen as a single jet in the detector. Therefore, ATLAS looks for dijet events where 1) the mass of each jet is close to that of the W (80±13 GeV) or the Z (91±13 GeV), and 2) the invariant mass of the dijet pair is above 1 TeV. Furthermore, they look into the substructure of the jets, so as to identify the ones that look consistent with W or Z decays. After all this work, most of the events still originate from ordinary QCD production of quarks and gluons, which gives a smooth background falling with the dijet invariant mass. If LHC collisions lead to the production of a new particle that decays to WW, WZ, or ZZ final states, it should show up as a bump on top of the QCD background. What ATLAS observes is this:

There is a bump near 2 TeV, which could indicate the existence of a particle decaying to WW and/or WZ and/or ZZ. One important thing to be aware of is that this search cannot distinguish well between these 3 diboson states. The difference between the W and Z masses is only 10 GeV, and the jet mass windows used in the search for W and Z partly overlap. In fact, 20% of the events fall into all 3 diboson categories. For all we know, the excess could be in just one final state, say WZ, and simply feed into the other two due to the overlapping selection criteria.
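The size of that overlap follows directly from the window edges quoted in the text; a toy check, using the ±13 GeV windows given above:

```python
# Jet-mass tagging windows from the search (GeV):
w_lo, w_hi = 80 - 13, 80 + 13   # W window: (67, 93)
z_lo, z_hi = 91 - 13, 91 + 13   # Z window: (78, 104)

# Any jet with mass in the intersection passes both the W and the Z tag:
overlap = (max(w_lo, z_lo), min(w_hi, z_hi))
print(overlap)  # (78, 93): a 15 GeV band where the categories mix
```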

Given the number of searches that ATLAS and CMS have made, 3 sigma fluctuations of the background should happen a few times in the LHC Run-1 just by sheer chance. The interest in the ATLAS excess is however amplified by the fact that diboson searches in CMS also show anomalies (albeit smaller) just below 2 TeV. This can be clearly seen in this plot with limits on the Randall-Sundrum graviton excitation, which is one particular model leading to diboson resonances. As W and Z bosons sometimes decay to, respectively, one and two charged leptons, diboson resonances can be searched for not only via dijets but also in final states with one or two leptons. One can see that, in CMS, the ZZ dilepton search (blue line), the WW/ZZ dijet search (green line), and the WW/WZ one-lepton search (red line) all report a small (between 1 and 2 sigma) excess around 1.8 TeV. To make things even more interesting, the CMS search for WH resonances returns 3 events clustering at 1.8 TeV, where the Standard Model background is very small (see Tommaso's post). Could the ATLAS and CMS events be due to the same exotic physics?

Unfortunately, building a model explaining all the diboson data is not easy. Suffice it to say that the ATLAS excess has been out for a week and there isn't yet any serious ambulance-chasing paper on arXiv. One challenge is the event rate. To fit the excess, the resonance should be produced with a cross section of order 10 femtobarns. This requires the new particle to couple quite strongly to light quarks (or gluons), at least as strongly as the W and Z bosons. At the same time, it should remain a narrow resonance decaying dominantly to dibosons. Furthermore, in concrete models, a sizable coupling to electroweak gauge bosons will get you in trouble with electroweak precision tests.

However, there is a yet bigger problem, which can also be seen in the plot above. Although the excesses in CMS occur roughly at the same mass, they are not compatible when it comes to the cross section. The limits from the single-lepton search are not consistent with the new-particle interpretation of the excesses in the dijet and dilepton searches, at least in the context of the Randall-Sundrum graviton model. Moreover, the limits from the CMS one-lepton search are grossly inconsistent with the diboson interpretation of the ATLAS excess! In order to believe that the ATLAS 3 sigma excess is real, one has to move to much more baroque models. One possibility is that the dijets observed by ATLAS do not originate from electroweak bosons, but rather from an exotic particle with a similar mass. Another possibility is that the resonance decays only to a pair of Z bosons and not to W bosons, in which case the CMS limits are weaker; but I'm not sure whether consistent models with this property exist.

My conclusion... For sure, this is something to watch in early Run-2. If it is real, it should show up clearly in both experiments already this year. However, given the inconsistencies between different search channels and the theoretical challenges, there's little reason to get excited yet.

Thanks to Chris for digging out the CMS plot.

by Jester at August 18, 2015 12:17 PM

Tommaso Dorigo - Scientificblogging

First CMS Physics Result At 13 TeV: Top Quarks
Twenty years have passed since the first observation of the top quark, the last of the collection of six quarks that constitute the matter of which atomic nuclei are made. And in these twenty years particle physics has made some quite serious leaps forward: the discovery that neutrinos oscillate and have mass (albeit a tiny one), and the discovery of the Higgs boson are the two most important ones to cite. Yet the top quark remains a very interesting object to study at particle colliders.

read more

by Tommaso Dorigo at August 18, 2015 08:24 AM

The n-Category Cafe

A Wrinkle in the Mathematical Universe

Of all the permutation groups, only $S_6$ has an outer automorphism. This puts a kind of ‘wrinkle’ in the fabric of mathematics, which would be nice to explore using category theory.

For starters, let $\mathrm{Bij}_n$ be the groupoid of $n$-element sets and bijections between these. Only for $n = 6$ is there an equivalence from this groupoid to itself that isn’t naturally isomorphic to the identity!

This is just another way to say that only $S_6$ has an outer automorphism.

And here’s another way to play with this idea:

Given any category $X$, let $\mathrm{Aut}(X)$ be the category where objects are equivalences $f : X \to X$ and morphisms are natural isomorphisms between these. This is like a group, since composition gives a functor

$$ \circ : \mathrm{Aut}(X) \times \mathrm{Aut}(X) \to \mathrm{Aut}(X) $$

which acts like the multiplication in a group. It’s like the symmetry group of $X$. But it’s not a group: it’s a ‘2-group’, or categorical group. It’s called the automorphism 2-group of $X$.

By calling it a 2-group, I mean that $\mathrm{Aut}(X)$ is a monoidal category where all objects have weak inverses with respect to the tensor product, and all morphisms are invertible. Any pointed space has a fundamental 2-group, and this sets up a correspondence between 2-groups and connected pointed homotopy 2-types. So, topologists can have some fun with 2-groups!

Now consider $\mathrm{Bij}_n$, the groupoid of $n$-element sets and bijections between them. Up to equivalence, we can describe $\mathrm{Aut}(\mathrm{Bij}_n)$ as follows. The objects are just automorphisms of $S_n$, while a morphism from an automorphism $f : S_n \to S_n$ to an automorphism $f' : S_n \to S_n$ is an element $g \in S_n$ that conjugates one automorphism to give the other:

$$ f'(h) = g f(h) g^{-1} \qquad \forall h \in S_n $$

So, if all automorphisms of $S_n$ are inner, all objects of $\mathrm{Aut}(\mathrm{Bij}_n)$ are isomorphic to the unit object, and thus to each other.

Puzzle 1. For $n \ne 6$, all automorphisms of $S_n$ are inner. What are the connected pointed homotopy 2-types corresponding to $\mathrm{Aut}(\mathrm{Bij}_n)$ in these cases?

Puzzle 2. The permutation group $S_6$ has an outer automorphism of order 2, and indeed $\mathrm{Out}(S_6) = \mathbb{Z}_2$. What is the connected pointed homotopy 2-type corresponding to $\mathrm{Aut}(\mathrm{Bij}_6)$?

Puzzle 3. Let $\mathrm{Bij}$ be the groupoid where objects are finite sets and morphisms are bijections. $\mathrm{Bij}$ is the coproduct of all the groupoids $\mathrm{Bij}_n$ where $n \ge 0$:

$$ \mathrm{Bij} = \sum_{n = 0}^\infty \mathrm{Bij}_n $$

Give a concrete description of the 2-group $\mathrm{Aut}(\mathrm{Bij})$, up to equivalence. What is the corresponding pointed connected homotopy 2-type?

You can get a bit of intuition for the outer automorphism of $S_6$ using something called the Tutte–Coxeter graph.

Let $S = \{1,2,3,4,5,6\}$. Of course the symmetric group $S_6$ acts on $S$, but James Sylvester found a different action of $S_6$ on a 6-element set, which in turn gives an outer automorphism of $S_6$.

To do this, he made the following definitions:

• A duad is a 2-element subset of $S$. Note that there are $\binom{6}{2} = 15$ duads.

• A syntheme is a set of 3 duads forming a partition of $S$. There are also 15 synthemes.

• A synthematic total is a set of 5 synthemes partitioning the set of 15 duads. There are 6 synthematic totals.

Any permutation of $S$ gives a permutation of the set $T$ of synthematic totals, so we obtain an action of $S_6$ on $T$. Choosing any bijection between $S$ and $T$, this in turn gives an action of $S_6$ on $S$, and thus a homomorphism from $S_6$ to itself. Sylvester showed that this is an outer automorphism!
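Sylvester's counts (15 duads, 15 synthemes, 6 synthematic totals) are small enough to verify by brute-force enumeration; here is an illustrative sketch, not part of the original post:

```python
from itertools import combinations

S = range(1, 7)

# Duads: 2-element subsets of S.
duads = list(combinations(S, 2))

# Synthemes: sets of 3 duads partitioning S (perfect matchings of K6).
synthemes = [t for t in combinations(duads, 3)
             if len({x for duad in t for x in duad}) == 6]

# Synthematic totals: sets of 5 synthemes partitioning the 15 duads.
totals = [f for f in combinations(synthemes, 5)
          if len({d for syn in f for d in syn}) == 15]

print(len(duads), len(synthemes), len(totals))  # 15 15 6
```

The 5-out-of-15 check is only 3003 candidates, so the whole enumeration runs in well under a second.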

There’s a way to draw this situation. It’s a bit tricky, but Greg Egan has kindly done it:

Here we see 15 small red blobs: these are the duads. We also see 15 larger blue blobs: these are the synthemes. We draw an edge from a duad to a syntheme whenever that duad lies in that syntheme. The result is a graph called the Tutte–Coxeter graph, with 30 vertices and 45 edges.

The 6 concentric rings around the picture are the 6 synthematic totals. A band of color appears in one of these rings near some syntheme if that syntheme is part of that synthematic total.

If we draw the Tutte–Coxeter graph without all the decorations, it looks like this:

The red vertices come from duads, the blue ones from synthemes. The outer automorphism of $S_6$ gives a symmetry of the Tutte–Coxeter graph that switches the red and blue vertices!

The inner automorphisms, which correspond to elements of $S_6$, also give symmetries: for each element of $S_6$, the Tutte–Coxeter graph has a symmetry that permutes the numbers in the picture. These symmetries map red vertices to red ones and blue vertices to blue ones.

The group $\mathrm{Aut}(S_6)$ has

$2 \times 6! = 1440$

elements, coming from the $6!$ inner automorphisms of $S_6$ and the outer automorphism of order 2. In fact, $\mathrm{Aut}(S_6)$ is the whole symmetry group of the Tutte–Coxeter graph.

For more on the Tutte–Coxeter graph, see my post on the AMS-hosted blog Visual Insight:

by john at August 18, 2015 08:24 AM

August 17, 2015

Clifford V. Johnson - Asymptotia

Ship Building

Last week it was a pleasure to have another meeting with writers, producers, VFX people, etc., from the excellent show Agent Carter! (Photo - click for larger view - used with permission.) I've been exchanging ideas about some science concepts and designs that they're using as springboards for their story-telling and world-building for the show. I can tell you absolutely nothing except that I'm confident that season 2 is going to be really great!

(Season 1, if you've not already seen it, is also great. Go get it on your favourite on-demand platform - I rapidly watched all eight episodes back in June to get up to speed so that I could be as useful as possible, and it was a pleasure. It is smart, funny, fresh and ground-breaking, and has a perfect [...] Click to continue reading this post

The post Ship Building appeared first on Asymptotia.

by Clifford at August 17, 2015 05:22 PM

ZapperZ - Physics and Physicists

Stinky Superconductor Breaks Record
No, I wasn't being derogatory by calling it "stinky".

It turns out that hydrogen sulfide, the same compound that smells like rotten eggs, becomes a superconductor when solidified under pressure. And not only that, but it has recently been shown to become a superconductor at a record-high transition temperature of 203.5 K.

Still, there are two points here that may make this not as "exciting" as one would hope. Earlier theoretical studies predicted this to occur, and the material is expected to be a conventional superconductor mediated by phonons.

But the other issue, the practical aspect, may be even less enticing: this material becomes a superconductor only under very high pressures.

The result may revive visions of superconductors that work at room temperature and magnetically levitated trains. But there's a catch: Hydrogen sulfide works its magic only when squeezed to more than 100 million times atmospheric pressure, roughly one-third as high as the pressure in Earth’s core. This condition makes it impractical for most applications. “Where does it go from here?” asks Igor Mazin, a theorist at the U.S. Naval Research Laboratory in Washington, D.C. “Probably nowhere.” Even so, the discovery is already altering the course of research in superconductivity.

So, while I think this is an exciting discovery, I'm not sure how much it will add to the physics and to applications... yet.


by ZapperZ at August 17, 2015 04:25 PM

Symmetrybreaking - Fermilab/SLAC

Dark Energy Survey finds more celestial neighbors

The observation of new dwarf galaxy candidates could mean our sky is more crowded than we thought.

Scientists on the Dark Energy Survey, using one of the world’s most powerful digital cameras, have discovered eight more faint celestial objects hovering near our Milky Way galaxy. Signs indicate that they—like the objects found by the same team earlier this year—are likely dwarf satellite galaxies, the smallest and closest known form of galaxies.

Satellite galaxies are small celestial objects that orbit larger galaxies, such as our own Milky Way. Some dwarf galaxies contain fewer than 1000 stars, in contrast to the Milky Way, an average-sized galaxy containing billions of stars. Scientists predict that larger galaxies are built from smaller galaxies, which are thought to be especially rich in dark matter, the substance that makes up about 25 percent of the total matter and energy in the universe. Dwarf satellite galaxies, therefore, are considered key to understanding dark matter and the process by which larger galaxies form.

The main goal of the Dark Energy Survey, as its name suggests, is to better understand the nature of dark energy, the mysterious stuff that makes up about 70 percent of the matter and energy in the universe. Scientists believe that dark energy is the key to understanding why the expansion of the universe is speeding up. To carry out its dark energy mission, DES is taking snapshots of hundreds of millions of distant galaxies. However, some of the DES images also contain stars in dwarf galaxies much closer to the Milky Way. The same data can therefore be used to probe both dark energy, which scientists think is driving galaxies apart, and dark matter, which is thought to hold galaxies together.

Scientists can only see the nearest dwarf galaxies, since they are so faint, and had previously found only a handful. If these new discoveries are representative of the entire sky, there could be many more galaxies hiding in our cosmic neighborhood.

“Just this year, more than 20 of these dwarf satellite galaxy candidates have been spotted, with 17 of those found in Dark Energy Survey data,” says Alex Drlica-Wagner of Fermi National Accelerator Laboratory, one of the leaders of the DES analysis. “We’ve nearly doubled the number of these objects we know about in just one year, which is remarkable.”

In March, researchers with the Dark Energy Survey and an independent team from the University of Cambridge jointly announced the discovery of nine of these objects in snapshots taken by the Dark Energy Camera, the extraordinary instrument at the heart of the DES, an experiment funded by the DOE, the National Science Foundation and other funding agencies. Two of those have been confirmed as dwarf satellite galaxies so far.

Prior to 2015, scientists had located only about two dozen such galaxies around the Milky Way.

“DES is finding galaxies so faint that they would have been very difficult to recognize in previous surveys,” says Keith Bechtol of the University of Wisconsin-Madison. “The discovery of so many new galaxy candidates in one-eighth of the sky could mean there are more to find around the Milky Way.”

The closest of these newly discovered objects is about 80,000 light years away, and the furthest roughly 700,000 light years away. These objects are, on average, around a billion times dimmer than the Milky Way and a million times less massive. The faintest of the new dwarf galaxy candidates has about 500 stars.

Most of the newly discovered objects are in the southern half of the DES survey area, in close proximity to the Large Magellanic Cloud and the Small Magellanic Cloud. These are the two largest satellite galaxies associated with the Milky Way, about 158,000 light years and 208,000 light years away, respectively. It is possible that many of these new objects could be satellite galaxies of these larger satellite galaxies, which would be a discovery by itself.

“That result would be fascinating,” says Risa Wechsler of SLAC National Accelerator Laboratory. “Satellites of satellites are predicted by our models of dark matter. Either we are seeing these types of systems for the first time, or there is something we don’t understand about how these satellite galaxies are distributed in the sky.”

Since dwarf galaxies are thought to be made mostly of dark matter, with very few stars, they are excellent targets to explore the properties of dark matter. Further analysis will confirm whether these new objects are indeed dwarf satellite galaxies, and whether signs of dark matter can be detected from them.

The 17 dwarf satellite galaxy candidates were discovered in the first two years of data collected by the Dark Energy Survey, a five-year effort to photograph a portion of the southern sky in unprecedented detail. Scientists have now had a first look at most of the survey area, but data from the next three years of the survey will likely allow them to find objects that are even fainter, more diffuse or farther away. The third survey season has just begun.

“This exciting discovery is the product of a strong collaborative effort from the entire DES team,” says Basilio Santiago, a DES Milky Way Science Working Group coordinator and a member of the DES-Brazil Consortium. “We’ve only just begun our probe of the cosmos, and we’re looking forward to more exciting discoveries in the coming years.” 

This article is based on a Fermilab press release.


Like what you see? Sign up for a free subscription to symmetry!

August 17, 2015 02:29 PM

August 15, 2015

ATLAS Experiment

BOOST Outreach and Particle Fever

Conferences like BOOST, which my colleagues Cristoph and Tatjana have written about already, are designed to bring physicists to think about the latest results in the field. When you put 100 experts from around the world together into a room for a week, you get a fantastic picture of the state of the art in searches for new physics and measurements of the Standard Model. But it turns out there’s another great use for conferences: they’re an opportunity to talk to the general public about the work we do. The BOOST committee organized a discussion and screening of the movie “Particle Fever” on Monday, and I think it was an enormously successful event!


For those who haven’t seen it, Particle Fever is excellent. It is the story of the discovery of the Higgs Boson, and its consequences on the myriad of theories that we are searching for at the LHC. It presents the whole experience of construction, commissioning, turning on, and operating the experiments, from the perspective of experimentalists and theorists, young and old. People love it – it has a 95% rating on Rotten Tomatoes – and nearly all my colleagues loved it as well. It’s rare to find a documentary that both experts and the public enjoy, so this is indeed a real achievement!

Getting back to BOOST, not only did we have a screening of the movie, but also a panel discussion where people could ask questions about the movie and about physics in general. One question that an audience member asked was really quite excellent. He asked why physicists think that movies like Particle Fever, and events like this public showing, were important. Why did we go to the trouble of booking a room, organizing people, and spending hours of our day on a movie we’ve already seen many times before? And let’s not forget that David Kaplan, a physicist and a producer of the movie, spent several years of his life on this project full time. He essentially gave up research for a few years in order to make a movie about doing research – not an easy task for a professor!

So why do we do it? Why is Particle Fever important?

The answer, to me, is that we have a responsibility to share what we know about the Universe. We study the fundamental nature of the Universe not so that we as individuals can understand more, but so that humanity as a whole understands more. On an experiment as big as ATLAS, you quickly become extremely aware of how much you depend on the work and experience of others, and on the decades and centuries of scientists who came before us. Doing science means contributing to this shared knowledge. And while some details may only be important to a few individuals (not many people are going to care about the derivation of the jet energy scale via numerical inversion), the big picture is something that everyone can appreciate.

And Particle Fever helps with that. The movie is funny, smart, and understandable – all things that we strive to be as science communicators, but which we sometimes fail at. Every particle physicist owes David Kaplan and the director Mark Levinson a tremendous debt, because they have done such a tremendous job of communicating the excitement of fundamental knowledge and discovery. CERN has always sought to unite Europe, and the world, through a quest for understanding, and Particle Fever helps the rest of the world join us on that quest.

Conferences like BOOST are a great time to focus on the details of our work, but they’re also an opportunity to consider how our physics relates to the rest of the world, and how best we can communicate our understanding. Particle Fever has made me realize just how much work it takes to do a really wonderful job, and I’m extremely happy that such a great guide is available to the public. With any luck, we’ll have more movies coming out soon about the discovery of supersymmetry and extra dimensions!

Max is a postdoctoral fellow at the Enrico Fermi Institute at the University of Chicago, and has worked on ATLAS since 2009. His work focuses on searches for supersymmetry, an exciting potential extension of the Standard Model which predicts many new particles the Large Hadron Collider might be able to produce. When not colliding particles, he can be found cycling, hiking, going to concerts, or playing board games.

by swiatlow at August 15, 2015 10:11 PM

August 14, 2015

ATLAS Experiment

A boost for the next discovery

I arrived in Chicago for my first conference after the first long LHC shutdown, where new results from the two big experiments ATLAS and CMS were to be shown. Before the beginning of the conference on Monday, I had one day to fight against jetlag and see the city – certainly not enough time to see everything!


Chicago at night.

The focus of the BOOST conference is the physics of objects produced at very high momenta. New, very heavy particles can be produced at the LHC. Such particles decay very quickly, and we only observe their daughter particles, or their granddaughter particles, in the detector. These decay products are special – they get a boost from the mother particle and have very high momenta.

The reconstruction of such highly boosted particles is a big challenge. Around 100 experts from different institutes around the world meet for one week to discuss discovery techniques and strategies for these new heavy particles at the LHC. Compared to some other, much larger conferences, it has a very friendly, informal atmosphere!


Boost 2015 in Chicago: Many fruitful discussions among theorists and experimentalists from ATLAS and CMS.

We had many interesting discussions during the presentations and coffee breaks. For me, it was a great opportunity to meet colleagues from around the world that I’d known for a while but had never met in person. Theorists and experimentalists from ATLAS and CMS exchanged new ideas on the reconstruction and identification of high momenta particles, and discussed not only the discovery potential of both experiments for the next run period but also the prospective designs of new very high energy machines that could be built in 20-30 years.

The University of Chicago organised a public event during the conference: a screening of the Particle Fever movie and a discussion panel. For me, it was like going back in time to before the Higgs boson discovery – feeling once again the excitement about the first beam at the LHC. It was a great time and I am sure we will have more great moments ahead. It's time to boost for a new discovery!

Tatjana Lenz is a research assistant at the University of Bonn, Germany. She joined ATLAS in 2005, doing her Master's at the University of Wuppertal. Her research includes searches for new physics which can show up in decays of top quark pairs, as well as studies of Higgs boson properties such as spin and parity. Currently she focuses on the measurement of the Higgs boson couplings to bottom quarks and on new physics searches involving this topology. In addition to her fascination for physics, she loves diving, climbing, hiking, skiing and working in her garden!

by Tatjana Lenz at August 14, 2015 02:26 PM

CERN Bulletin

Wednesday 19 August 2015 at 20:00, CERN Council Chamber
A Clockwork Orange, directed by Stanley Kubrick (UK/USA, 1971), 136 min.
Alex DeLarge, a teenage hooligan in a near-future Britain, gets jailed by the police. There he volunteers as a guinea pig for a new aversion therapy proposed by the government to make room in prisons for political prisoners. "Cured" of his hooliganism and released, he is rejected by his friends and relatives. Eventually nearly dying, he becomes a major embarrassment for the government, who arrange to cure him of his cure.
Original version English; French subtitles.

Wednesday 26 August 2015 at 20:00, CERN Council Chamber
The Shining, directed by Stanley Kubrick (USA/UK, 1980), 146 min.
A family heads to an isolated hotel for the winter, where an evil and spiritual presence influences the father into violence, while his psychic son sees horrific forebodings from the past and of the future.
Original version English; French subtitles.

Wednesday 2 September 2015 at 20:00, CERN Council Chamber
L'Auberge Espagnole, directed by Cédric Klapisch (France/Spain, 2002), 122 min.
As part of a job that he is promised, Xavier, an economics student in his twenties, signs on to a European exchange programme in order to gain working knowledge of the Spanish language. Promising that they will remain close, he says farewell to his loving girlfriend, then heads to Barcelona. Following his arrival, Xavier is soon thrust into a cultural melting pot when he moves into an apartment full of international students. An Italian, an English girl, a boy from Denmark, a young girl from Belgium, a German and a girl from Tarragona all join him in a series of adventures that serve as an initiation to life.
Original version French / Spanish / English / Catalan / Danish / German / Italian; English subtitles.

by CERN CINE CLUB at August 14, 2015 01:40 PM

CERN Bulletin

CLASSES RESUME – come one, come all! Yoga, Sophrology, Tai Chi. The list of classes for the semester running from 1 September 2015 to 31 January 2016 is available on our website. Location: classes take place in the club room, on the mezzanine of Restaurant No. 2, Building 504 (in room No. 3 for Sophrology). Price: the fee for the semester (about 18 lessons) is 220 CHF, plus 10 CHF annual Club membership. Couples: 200 CHF per person. Two classes per week: 400 CHF. Registration: sign up directly with the teacher at the first session. Before registering for the semester, you may try one session free of charge.

by Club de Yoga at August 14, 2015 01:34 PM

August 13, 2015

Symmetrybreaking - Fermilab/SLAC

MicroBooNE sees first cosmic muons

The experiment will begin collecting data from a neutrino beam in October.

A school bus-sized detector packed with 170 tons of liquid argon has seen its first particle footprints.

On August 6, MicroBooNE, a liquid-argon time projection chamber, or LArTPC, recorded images of the tracks of cosmic muons, particles that shower down on Earth when cosmic rays collide with nuclei in our atmosphere.

"This is the first detector of this size and scale we've ever launched in the US for use in a neutrino beam, so it's a very important milestone for the future of neutrino physics," says Sam Zeller, co-spokesperson for the MicroBooNE collaboration.

Picking up cosmic muons is just one brief stop during MicroBooNE's expedition into particle physics. The centerpiece of the three detectors planned for Fermilab's Short-Baseline Neutrino program, or SBN, MicroBooNE will pursue the much more elusive neutrino, taking data about this weakly interacting particle for about three years.

When beam starts up in October, it will travel 470 meters and then traverse the liquid argon in MicroBooNE, where neutrino interactions will result in tracks that the detector can convert into precise three-dimensional images. Scientists will use these images to investigate anomalies seen in an earlier experiment called MiniBooNE, with the aim to determine whether the excess of low-energy events that MiniBooNE saw was due to a new source of background photons or if there could be additional types of neutrinos beyond the three established flavors.

One of the first images of cosmic rays recorded by MicroBooNE.

Courtesy of: MicroBooNE collaboration

One of MicroBooNE's goals is to measure how often a neutrino that interacts with an argon atom will produce certain types of particles. A second goal is to conduct R&D for future large-scale LArTPCs.

MicroBooNE will carry signals up to 2.5 meters across the detector, the longest drift ever for a LArTPC in a neutrino beam. This requires a very high voltage and very pure liquid argon. It is also the first time a detector will operate with its electronics submerged in liquid argon on such a large scale. All of these characteristics will be important for future experiments such as the Deep Underground Neutrino Experiment, or DUNE, which plans to use similar technology to probe neutrinos.

"The entire particle physics community worldwide has identified neutrino physics as one of the key lines of research that could help us understand better how to go beyond what we know now," says Matt Toups, run coordinator and co-commissioner for MicroBooNE with Fermilab scientist Bruce Baller. "Those questions that are driving the field, we hope to answer with a very large LArTPC detector."

Another benefit of the experiment, Zeller said, is training the next generation of LArTPC experts for future programs and experiments. MicroBooNE is a collaborative effort of 25 institutions, with 55 students and postdocs working tirelessly to perfect the technology. Collaborators are keeping their eyes on the road toward the future of neutrino physics and liquid-argon technology.

"It's been a long haul," says Bonnie Fleming, MicroBooNE co-spokesperson. "Eight and a half years ago liquid argon was a total underdog. I used to joke that no one would sit next to me at the lunch table. And it's a world of difference now. The field has chosen liquid argon as its future technology, and all eyes are on us to see if our detector will work."

A version of this article was published in Fermilab Today.


Like what you see? Sign up for a free subscription to symmetry!

by Ali Sundermier at August 13, 2015 03:18 PM

August 12, 2015

Symmetrybreaking - Fermilab/SLAC

Q&A: Underground machinist

What’s it like being the machinist at the deepest machine shop in the world?

The Majorana Demonstrator searches for a rare decay process a mile below the surface at Sanford Lab in Lead, South Dakota. To craft the experiment's precise copper parts away from cosmic rays, the lab added a unique feature: the deepest machine shop in the world, complete with a lathe, a CNC mill, a wire EDM (electrical discharge machine), a 70-ton press and a laser engraver to track the parts.

It is here that Randy Hughes comes to work every day and has for the last three years. He dons two pairs of booties, full white coveralls, glasses, two pairs of gloves and a facemask before he starts machining the majority of the pieces in the experiment, from thick shield plates to microscopic pins.

Hughes, a motorcycle enthusiast and baseball fan from Detroit, brings 40 years of experience as a machinist and toolmaker to the job. He just happened to be working at Adams ISC in Rapid City when the experiment came around looking for a temporary cleanroom. The rest is history.


S: Had you worked on anything of this scale before?

RH: I don’t think anybody has done anything on this scale. I’ve had experiences that were more detailed and demanding as far as the product, but nothing in such an environment as this one.


S: How do you feel about working a mile underground?

RH: At first I was wondering if I was capable. They were preparing me to come down, and I’m wondering if I can handle it, which turned out to be a silly question. It’s like working anywhere else, but without a view. I can’t step outside and look out the window. And I can’t go out for lunch.


S: What’s a typical day?

RH: Presently, I’m finished with the string parts and working mostly on the larger parts, such as the shield. Two years ago, I was working mainly on string parts, which were all the smaller parts. As far as the inventory of what they need, I am winding down to the end of it. The shield parts could wait until the end. They didn’t want the surface to be machined and then sit around.


S: Is there anything that has surprised you about working down here?

RH: The lack of vitamin D. I feel like a mole. On days where I travel to get here, I never see the sun until the weekend. Hopefully it’s sunny out. I travel almost an hour to get here one way. I come through the Black Hills here every day. I like to tell people I have the most beautiful one-hour commute in the country. It’s along a creek and through canyons, and I see elk and eagles and deer.


S: What have been your favorite and least favorite things to work on?

RH: The least favorite thing was the hollow hex rods. Trying to thread them because they are so long, and then cutting the threads with no support and no coolant, has made for a real balancing act as far as not breaking the parts off in the machine. The hex rod is the main building block of the string. There are three of them where each detector is, and it stacks the build together.

Favorite parts, believe it or not, were little things, like the Vespel pins. It’s my favorite because of the sense of pride and the look I get from people when they see how small it is. The parts are no bigger than the ball of a ball-point pen. Vespel pins plug into voltage connectors to hold the wire in place, so it doesn’t get pulled and tugged on other than at that particular point.


S: What is unique about working on this job?

RH: Copper is not something that, as a machinist, you have to spend a lot of time making parts out of. The entire project, from my point of view, has been copper. And copper would be your last choice to make anything out of, for more than one reason. One, you wouldn’t think it would be durable enough, and two, it’s really soft, pliable and gummy, and it creates manufacturing problems.


S: What adaptations have you made?

RH: I’ve had to come up with procedures to accommodate the lack of equipment, tooling, coolant and processability. The way I set the tooling against the part, what kind of feeds and speeds I use to cut and the angle of the tools—it’s a lot of trial and error. I’ve developed a few little tricks that have paid off for me.


S: Do you ever practice on normal copper?

RH: There is some commercial copper down here for a few things, such as the prototype. What I was working on at the beginning was that copper. It machines almost exactly the same, but for the most part, I might be overconfident in my ability and I go at it. As a machinist, I have a machinist attitude: I can do anything. I can make anything.


S: What is the back-and-forth communication between the machine shop and the cleanroom?

RH: I try to build in my head as I’m doing something. If I see something in numbers that doesn’t make sense to me mechanically, all I have to do is pick up a phone or knock on the door, and I can ask them about it.


S: How do you feel about working in a cleanroom every day?

RH: At first, it was unique. And it was kinda novel. But after three years, it’s monotonous and trying. The first thing I do when I get out of this room is take this facemask off. The hood was given up for visibility’s sake. You steam up a lot, and I have my fingers and hands around the machines. I need any kind of help with being able to see where my hands are. Forty years, I still have all my digits.


S: Did you know anything about neutrinos or physics or Sanford Lab before this?

RH: Nooo. This was like a trip into outer space for me. It’s taken me all of these three years of small talk and side talk and listening to the scientists and physicists and students, and a few questions, and having them translate it into layman’s terms. I’ve learned quite a bit about it. And that’s another reward from this job. When I start talking about this project, everybody stops and listens, because it’s unique, and it’s different, and it’s interesting.


S: What are your plans after this?

RH: I’m still employed with Adams, and they’re anticipating my return. They offered me my old position back as shop foreman and are starting to pick up business and would really like to have me back.


S: And you won’t have to wear this getup everyday.

RH: That would be a pleasurable change. But I’ll also be getting dirty every day. The problem with wearing the gloves every day is I’ve lost all the calluses on my hands. And the few times I’ve gone back there and worked, I always seem to get a metal shaving sliver or little cut because my hands are soft. I have to get toughened up again.


Like what you see? Sign up for a free subscription to symmetry!


by Lauren Biron at August 12, 2015 02:49 PM

Tommaso Dorigo - Scientificblogging

The Plot Of The Week - Higgs Decays To Converted Photons
One of the nice things about the 2012 discovery of the Higgs boson is that the particle has been found at a very special spot - that is, with a very special mass. At 125 GeV, the Higgs boson has a significant probability to decay into a multitude of different final states, making the hunt for Higgs events entertaining and diverse.

read more

by Tommaso Dorigo at August 12, 2015 02:41 PM

Quantum Diaries

MicroBooNE sees first cosmic muons

This article appeared in Fermilab Today on Aug. 12, 2015.

This image shows the first cosmic ray event recorded in the MicroBooNE TPC on Aug. 6. Image: MicroBooNE

A school bus-sized detector packed with 170 tons of liquid argon has seen its first particle footprints.

On Aug. 6, MicroBooNE, a liquid-argon time projection chamber, or LArTPC, recorded images of the tracks of cosmic muons, particles that shower down on Earth when cosmic rays collide with nuclei in our atmosphere.

“This is the first detector of this size and scale we’ve ever launched in the U.S. for use in a neutrino beam, so it’s a very important milestone for the future of neutrino physics,” said Sam Zeller, co-spokesperson for the MicroBooNE collaboration.

Picking up cosmic muons is just one brief stop during MicroBooNE’s expedition into particle physics. The centerpiece of the three detectors planned for Fermilab’s Short-Baseline Neutrino program, or SBN, MicroBooNE will pursue the much more elusive neutrino, taking data about this weakly interacting particle for about three years. When beam starts up in October, it will travel 470 meters and then traverse the liquid argon in MicroBooNE, where neutrino interactions will result in tracks that the detector can convert into precise three-dimensional images. Scientists will use these images to investigate anomalies seen in an earlier experiment called MiniBooNE, with the aim to determine whether the excess of low-energy events that MiniBooNE saw was due to a new source of background photons or if there could be additional types of neutrinos beyond the three established flavors.

One of MicroBooNE’s goals is to measure how often a neutrino that interacts with an argon atom will produce certain types of particles. A second goal is to conduct R&D for future large-scale LArTPCs. MicroBooNE will carry signals up to two and a half meters across the detector, the longest drift ever for a LArTPC in a neutrino beam. This requires a very high voltage and very pure liquid argon. It is also the first time a detector will operate with its electronics submerged in liquid argon on such a large scale. All of these characteristics will be important for future experiments such as the Deep Underground Neutrino Experiment, or DUNE, which plans to use similar technology to probe neutrinos.
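For a sense of why a two-and-a-half-meter drift is demanding, here is a back-of-the-envelope sketch. The drift velocity used is a generic textbook figure for liquid argon (roughly 1.6 mm/μs at a 500 V/cm field), not MicroBooNE's actual operating point; only the drift length comes from the article.

```python
# Back-of-the-envelope drift time in a LArTPC. The drift velocity is an
# assumed, field-dependent textbook value, not MicroBooNE's actual setting.

DRIFT_LENGTH_MM = 2500.0          # ~2.5 m maximum drift, as quoted above
DRIFT_VELOCITY_MM_PER_US = 1.6    # assumed: ~1.6 mm/us at ~500 V/cm

drift_time_us = DRIFT_LENGTH_MM / DRIFT_VELOCITY_MM_PER_US
print(f"Maximum drift time: ~{drift_time_us:.0f} us (~{drift_time_us / 1000:.1f} ms)")
```

On a millisecond timescale the ionization electrons must survive without being captured by impurities, which is why the argon purity requirement is so strict.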

“The entire particle physics community worldwide has identified neutrino physics as one of the key lines of research that could help us understand better how to go beyond what we know now,” said Matt Toups, run coordinator and co-commissioner for MicroBooNE with Fermilab Scientist Bruce Baller. “Those questions that are driving the field, we hope to answer with a very large LArTPC detector.”

Another benefit of the experiment, Zeller said, is training the next generation of LArTPC experts for future programs and experiments. MicroBooNE is a collaborative effort of 25 institutions, with 55 students and postdocs working tirelessly to perfect the technology. Collaborators are keeping their eyes on the road toward the future of neutrino physics and liquid-argon technology.

“It’s been a long haul,” said Bonnie Fleming, MicroBooNE co-spokesperson. “Eight and a half years ago liquid argon was a total underdog. I used to joke that no one would sit next to me at the lunch table. And it’s a world of difference now. The field has chosen liquid argon as its future technology, and all eyes are on us to see if our detector will work.”

Ali Sundermier

by Fermilab at August 12, 2015 02:20 PM

ATLAS Experiment

Boost and never look back
Seen in Chicago the day before the start of the BOOST conference. (Photo: Christoph Anders)

When I arrived in Chicago this last Sunday for the BOOST conference I had a pretty good idea what new results we were going to show from ATLAS. I also had some rough ideas of what our friends from the other experiments and theory groups would be up to. What I didn’t expect was to see an ad that would fit the conference so nicely!

Now, why do we care about “boost” and what is it?

Let’s say we want to find a pair of top quarks in our detector. You need enough energy in the collision to produce the top quark pair, and if it is just enough they will be “at rest”, i.e. just sit there. Now let’s picture the decay of the two top quarks as two water balloons exploding. The splashes you see are what we would see in the ATLAS detector. But imagine you want to understand which splashes came from which of the two water balloons… This is hard, as there are many different combinations, the water mixes, etc.

That is where “boost” will help!

If we have more energy, we can give both top quarks/balloons a kick – we call them “boosted”. They will fly away from each other and when they decay/explode, the splashes will go into two different directions! Now we can tell them apart! Of course it is more complicated in real life. We have other processes that look similar to the “splashes” the two boosted top quarks make, but we can distinguish them by analyzing the inner structure of the splashes.
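The "splashes merging" picture can be made slightly more quantitative with a common rule of thumb (not taken from the post itself): the decay products of a particle of mass m and transverse momentum pT are collimated into a cone of angular size ΔR of roughly 2m/pT.

```python
# Rule-of-thumb collimation of a boosted particle's decay products:
# delta-R ~ 2*m/pT (mass and transverse momentum in the same units).
# The momenta below are generic illustrations, not ATLAS analysis values.

M_TOP_GEV = 173.0  # approximate top quark mass

def opening_angle(pt_gev, mass_gev=M_TOP_GEV):
    """Approximate angular size of the decay 'splash' in the detector."""
    return 2.0 * mass_gev / pt_gev

for pt in (200.0, 400.0, 800.0):
    print(f"pT = {pt:5.0f} GeV -> delta-R ~ {opening_angle(pt):.2f}")
```

Once ΔR drops below the radius of a typical large jet (about 1.0), the whole top quark decay fits inside a single "fat" jet, and telling tops apart from ordinary quark and gluon jets becomes a question of jet substructure, which is exactly the business of BOOST.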

This is what the BOOST conference is about, in essence: understanding how we can use the boost to our advantage when searching for new phenomena.

It actually does work and it is a lot of fun!
Boost and never look back!

Christoph Anders Christoph is a postdoc at the University of Heidelberg’s physics institute in Germany. He has been working in ATLAS since 2007. His research is currently focussed on the identification of very energetic heavy particles, e.g. top quarks. When he is not looking for physics beyond the Standard Model, he likes reading, listening to music, photography, cooking, traveling with his wife, playing board games with friends and sports.

by Christoph Anders at August 12, 2015 01:58 PM

Sean Carroll - Preposterous Universe

The Bayesian Second Law of Thermodynamics

Entropy increases. Closed systems become increasingly disordered over time. So says the Second Law of Thermodynamics, one of my favorite notions in all of physics.

At least, entropy usually increases. If we define entropy by first defining “macrostates” — collections of individual states of the system that are macroscopically indistinguishable from each other — and then taking the logarithm of the number of microstates per macrostate, as portrayed in this blog’s header image, then we don’t expect entropy to always increase. According to Boltzmann, the increase of entropy is just really, really probable, since higher-entropy macrostates are much, much bigger than lower-entropy ones. But if we wait long enough — really long, much longer than the age of the universe — a macroscopic system will spontaneously fluctuate into a lower-entropy state. Cream and coffee will unmix, eggs will unbreak, maybe whole universes will come into being. But because the timescales are so long, this is just a matter of intellectual curiosity, not experimental science.
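Boltzmann's counting can be made concrete with a toy example (my illustration, not from the post): take N coins, let the macrostate be the number of heads, and let the microstates be the individual head/tail sequences. The half-heads macrostate utterly dwarfs the all-heads one.

```python
from math import comb, log

# Toy Boltzmann entropy: N coins, macrostate = number of heads,
# S = log(number of microstates in that macrostate).

N = 100

def boltzmann_entropy(n_heads):
    return log(comb(N, n_heads))

print(f"S(50 heads) = {boltzmann_entropy(50):.1f}")  # enormous macrostate
print(f"S(0 heads)  = {boltzmann_entropy(0):.1f}")   # a single microstate, S = 0
```

Random shuffling of the coins overwhelmingly lands near the huge 50-heads macrostate, and a spontaneous return to all heads is possible but absurdly improbable. That is Boltzmann's version of the Second Law in miniature.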

That’s what I was taught, anyway. But since I left grad school, physicists (and chemists, and biologists) have become increasingly interested in ultra-tiny systems, with only a few moving parts. Nanomachines, or the molecular components inside living cells. In systems like that, the occasional downward fluctuation in entropy is not only possible, it’s going to happen relatively frequently — with crucial consequences for how the real world works.

Accordingly, the last fifteen years or so has seen something of a revolution in non-equilibrium statistical mechanics — the study of statistical systems far from their happy resting states. Two of the most important results are the Crooks Fluctuation Theorem (by Gavin Crooks), which relates the probability of a process forward in time to the probability of its time-reverse, and the Jarzynski Equality (by Christopher Jarzynski), which relates the change in free energy between two states to the average amount of work done on a journey between them. (Professional statistical mechanics are so used to dealing with inequalities that when they finally do have an honest equation, they call it an “equality.”) There is a sense in which these relations underlie the good old Second Law; the Jarzynski equality can be derived from the Crooks Fluctuation Theorem, and the Second Law can be derived from the Jarzynski Equality. (Though the three relations were discovered in reverse chronological order from how they are used to derive each other.)

Still, there is a mystery lurking in how we think about entropy and the Second Law — a puzzle that, like many such puzzles, I never really thought about until we came up with a solution. Boltzmann’s definition of entropy (logarithm of number of microstates in a macrostate) is very conceptually clear, and good enough to be engraved on his tombstone. But it’s not the only definition of entropy, and it’s not even the one that people use most often.

Rather than referring to macrostates, we can think of entropy as characterizing something more subjective: our knowledge of the state of the system. That is, we might not know the exact position x and momentum p of every atom that makes up a fluid, but we might have some probability distribution ρ(x,p) that tells us the likelihood the system is in any particular state (to the best of our knowledge). Then the entropy associated with that distribution is given by a different, though equally famous, formula:

S = −∫ ρ log ρ.

That is, we take the probability distribution ρ, multiply it by its own logarithm, and integrate the result over all the possible states of the system, to get (minus) the entropy. A formula like this was introduced by Boltzmann himself, but these days is often associated with Josiah Willard Gibbs, unless you are into information theory, where it’s credited to Claude Shannon. Don’t worry if the symbols are totally opaque; the point is that low entropy means we know a lot about the specific state a system is in, and high entropy means we don’t know much at all.
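In discrete form the Gibbs/Shannon formula becomes S = −Σ p log p, and a few lines of code (with invented distributions) show the point about knowledge directly:

```python
from math import log

# Discrete version of the Gibbs/Shannon entropy, S = -sum(p * log p).
# The two example distributions are invented for illustration.

def shannon_entropy(p):
    return -sum(pi * log(pi) for pi in p if pi > 0)

peaked  = [0.97, 0.01, 0.01, 0.01]   # we are almost sure of the state
uniform = [0.25, 0.25, 0.25, 0.25]   # we know nothing at all

print(f"S(peaked)  = {shannon_entropy(peaked):.3f}")
print(f"S(uniform) = {shannon_entropy(uniform):.3f}")
```

The sharply peaked distribution has low entropy (we already know where the system is); the uniform one has the maximum possible entropy, log 4, reflecting total ignorance.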

In appropriate circumstances, the Boltzmann and Gibbs formulations of entropy and the Second Law are closely related to each other. But there’s a crucial difference: in a perfectly isolated system, the Boltzmann entropy tends to increase, but the Gibbs entropy stays exactly constant. In an open system — allowed to interact with the environment — the Gibbs entropy will go up, but it will only go up. It will never fluctuate down. (Entropy can decrease through heat loss, if you put your system in a refrigerator or something, but you know what I mean.) The Gibbs entropy is about our knowledge of the system, and as the system is randomly buffeted by its environment we know less and less about its specific state. So what, from the Gibbs point of view, can we possibly mean by “entropy rarely, but occasionally, will fluctuate downward”?

I won’t hold you in suspense. Since the Gibbs/Shannon entropy is a feature of our knowledge of the system, the way it can fluctuate downward is for us to look at the system and notice that it is in a relatively unlikely state — thereby gaining knowledge.

But this operation of “looking at the system” doesn’t have a ready implementation in how we usually formulate statistical mechanics. Until now! My collaborators Tony Bartolotta, Stefan Leichenauer, Jason Pollack, and I have written a paper formulating statistical mechanics with explicit knowledge updating via measurement outcomes. (Some extra figures, animations, and codes are available at this web page.)

The Bayesian Second Law of Thermodynamics
Anthony Bartolotta, Sean M. Carroll, Stefan Leichenauer, and Jason Pollack

We derive a generalization of the Second Law of Thermodynamics that uses Bayesian updates to explicitly incorporate the effects of a measurement of a system at some point in its evolution. By allowing an experimenter’s knowledge to be updated by the measurement process, this formulation resolves a tension between the fact that the entropy of a statistical system can sometimes fluctuate downward and the information-theoretic idea that knowledge of a stochastically-evolving system degrades over time. The Bayesian Second Law can be written as ΔH(ρ_m, ρ) + ⟨Q⟩_{F|m} ≥ 0, where ΔH(ρ_m, ρ) is the change in the cross entropy between the original phase-space probability distribution ρ and the measurement-updated distribution ρ_m, and ⟨Q⟩_{F|m} is the expectation value of a generalized heat flow out of the system. We also derive refined versions of the Second Law that bound the entropy increase from below by a non-negative number, as well as Bayesian versions of the Jarzynski equality. We demonstrate the formalism using simple analytical and numerical examples.

The crucial word “Bayesian” here refers to Bayes’s Theorem, a central result in probability theory. Bayes’s theorem tells us how to update the probability we assign to any given idea, after we’ve received relevant new information. In the case of statistical mechanics, we start with some probability distribution for the system, then let it evolve (by being influenced by the outside world, or simply by interacting with a heat bath). Then we make some measurement — but a realistic measurement, which tells us something about the system but not everything. So we can use Bayes’s Theorem to update our knowledge and get a new probability distribution.
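A minimal sketch of the update step, with made-up numbers: the system occupies one of four coarse-grained cells, and a noisy detector reports "cell 2". Bayes's Theorem says the posterior is proportional to the prior times the likelihood.

```python
# Minimal Bayesian update on a coarse-grained distribution. All numbers
# are invented for illustration; nothing here is from the paper itself.

prior = [0.25, 0.25, 0.25, 0.25]

# likelihood[i] = P(detector reports "cell 2" | system actually in cell i)
likelihood = [0.05, 0.10, 0.70, 0.15]

unnormalized = [p * l for p, l in zip(prior, likelihood)]
posterior = [u / sum(unnormalized) for u in unnormalized]

print([round(p, 3) for p in posterior])
```

Because the measurement is realistic rather than perfect, the posterior still spreads some probability over the other cells; it sharpens our knowledge without collapsing it to certainty.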

So far, all perfectly standard. We go a bit farther by also updating the initial distribution that we started with — our knowledge of the measurement outcome influences what we think we know about the system at the beginning of the experiment. Then we derive the Bayesian Second Law of Thermodynamics, which relates the original (un-updated) distribution at initial and final times to the updated distribution at initial and final times.

That relationship makes use of the cross entropy between two distributions, which you actually don’t see that often in information theory. Think of how much you would expect to learn by being told the specific state of a system, when all you originally knew was some probability distribution. If that distribution were sharply peaked around some value, you don’t expect to learn very much — you basically already know what state the system is in. But if it’s spread out, you expect to learn a bit more. Indeed, we can think of the Gibbs/Shannon entropy S(ρ) as “the average amount we expect to learn by being told the exact state of the system, given that it is described by a probability distribution ρ.”

By contrast, the cross-entropy H(ρ, ω) is a function of two distributions: the “assumed” distribution ω, and the “true” distribution ρ. Now we’re imagining that there are two sources of uncertainty: one because the actual distribution has a nonzero entropy, and another because we’re not even using the right distribution! The cross entropy between those two distributions is “the average amount we expect to learn by being told the exact state of the system, given that we think it is described by a probability distribution ω but it is actually described by a probability distribution ρ.” And the Bayesian Second Law (BSL) tells us that this lack of knowledge — the amount we would learn on average by being told the exact state of the system, given that we were using the un-updated distribution — is always larger at the end of the experiment than at the beginning (up to corrections because the system may be emitting heat). So the BSL gives us a nice information-theoretic way of incorporating the act of “looking at the system” into the formalism of statistical mechanics.
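The same discrete setting makes the cross entropy concrete (illustrative numbers again): H(ρ, ω) = −Σ ρ log ω is the average surprise when the system is really described by ρ but we assume ω, and it is bounded below by the ordinary entropy S(ρ).

```python
from math import log

# Cross entropy H(rho, omega) = -sum(rho * log omega), compared with the
# ordinary entropy S(rho). Distributions are invented for illustration.

def entropy(rho):
    return -sum(p * log(p) for p in rho if p > 0)

def cross_entropy(rho, omega):
    return -sum(p * log(q) for p, q in zip(rho, omega) if p > 0)

rho = [0.7, 0.2, 0.1]      # the "true" distribution
omega = [1/3, 1/3, 1/3]    # the un-updated assumed distribution

print(f"S(rho)        = {entropy(rho):.3f}")
print(f"H(rho, omega) = {cross_entropy(rho, omega):.3f}")
```

The gap between the two numbers is exactly the extra ignorance that comes from using the wrong distribution, the quantity the BSL keeps track of.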

I’m very happy with how this paper turned out, and as usual my hard-working collaborators deserve most of the credit. Of course, none of us actually does statistical mechanics for a living — we’re all particle/field theorists who have wandered off the reservation. What inspired our wandering was actually this article by Natalie Wolchover in Quanta magazine, about work by Jeremy England at MIT. I had read the Quanta article, and Stefan had seen a discussion of it on Reddit, so we got to talking about it at lunch. We thought there was more we could do along these lines, and here we are.

It will be interesting to see what we can do with the BSL, now that we have it. As mentioned, occasional fluctuations downward in entropy happen all the time in small systems, and are especially important in biophysics, perhaps even for the origin of life. While we have phrased the BSL in terms of a measurement carried out by an observer, it’s certainly not necessary to have an actual person there doing the observing. All of our equations hold perfectly well if we simply ask “what happens, given that the system ends up in a certain kind of probability distribution.” That final conditioning might be “a bacteria has replicated,” or “an RNA molecule has assembled itself.” It’s an exciting connection between fundamental principles of physics and the messy reality of our fluctuating world.

by Sean Carroll at August 12, 2015 12:09 AM

August 11, 2015

Quantum Diaries

Prototype of Mu2e solenoid passes tests with flying colors

This article appeared in Fermilab Today on Aug. 11, 2015.

This prototype represents one of 27 modules that will make up a critical section of the Mu2e experiment, the transport solenoid. Photo: Reidar Hahn


If you’ve ever looked at a graphic of Fermilab’s future Mu2e experiment, you’ve likely noticed its distinctive, center s-shaped section. Tall and wide enough for a person to fit inside it, this large, curving series of magnets, called the transport solenoid, is perhaps the experiment’s most technologically demanding piece to build.

Last month a group in the Fermilab Technical Division aced three tests — for alignment, current and temperature — of a prototype transport solenoid module built by magnet experts at Fermilab’s Technical Division and INFN-Genoa in Italy.

The triple milestone means that Fermilab can now order the full set for production — 27 modules.

“The results were excellent,” said Magnet Systems Department scientist Mau Lopes, who is leading the effort.

There’s not much wiggle room when it comes to the transport solenoid, a crucial component for the ultrasensitive Mu2e experiment. Mu2e will look for a predicted but never observed phenomenon, the conversion of a muon into its much lighter, more familiar cousin, the electron, without the usual accompanying neutrinos. To do this, it will send muons into a detector where scientists will look for particular signatures of the rare process.

The transport solenoid generates a magnetic field that deftly separates muons based on their momentum and charge and directs slow muons to the center of the Mu2e detector. The maneuver requires some fairly precisely designed details, not the least of which is a good fit.

When put together, the 27 wedge-shaped modules will form a tube with the snake-like profile. Muons will travel down this vacuum tube. To guide them along the right path to the detector, the solenoid units must align with each other to within 0.2 degrees. The Magnet Systems team exceeded expectations: The prototype was aligned with 100 times greater precision.

The team achieved not just the right shape, but the right current. The electrical current running through the solenoid coil creates the magnetic field. The Mu2e team exceeded the nominal current of 1,730 amps, reaching 2,200 amps. As a bonus, while that amount of current has the potential to create a slight deformation in the module’s shape, the Mu2e team measured no change in the structure.

Nor was there much change in the module’s temperature, which must be very low. The team delivered 2.5 watts of power to the coil — well above what the coils will see when running. The module proved robust: The temperature changed by a mere whisker — 150 millikelvin, or 0.27 degrees Fahrenheit. The coils will be at 5 Kelvin when operating. The prototype sustained the nominal current at up to 8 Kelvin.
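The kelvin-to-Fahrenheit figure quoted above is just the scaling rule for temperature differences (a sketch, nothing experiment-specific):

```python
# Converting a temperature *difference* from kelvin to degrees Fahrenheit:
# a change of 1 K equals a change of 1.8 degrees F. The 32-degree offset
# applies only to absolute temperatures, not to differences.

delta_k = 0.150  # 150 millikelvin
delta_f = delta_k * 9.0 / 5.0
print(f"{delta_k * 1000:.0f} mK = {delta_f:.2f} degrees F")
```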

Fermilab has selected a vendor to produce the modules. Lopes expects that it will be two and a half years until all modules are complete.

“We thank all the smart people at INFN Genoa, the Fermilab Test and Instrumentation Department, the Magnet Systems Department and the Accelerator Division Cryogenics Department for this achievement,” Lopes said. “These seven months of hard work have paid off tremendously. Our project continues at full steam ahead.”

Leah Hesla

by Fermilab at August 11, 2015 08:27 PM

Symmetrybreaking - Fermilab/SLAC

Testing the nature of neutrinos

The Majorana Demonstrator experiment is looking for a sign that neutrinos are their own antiparticles.

A mile below the Earth’s surface, a large copper cylinder sits behind a thick shield of lead bricks stacked into what could be confused for a wood-burning pizza oven. Inside, 29 hockey pucks of germanium sit in strings and send information out through a gleaming copper arm.

This is the Majorana Demonstrator, an experiment housed in a former gold mine at Sanford Lab in Lead, South Dakota. It’s searching for a rare process that would help scientists explain why matter exists in our universe instead of nothing at all.

“To me, it’s our version of going to the moon right now,” says Julieta Gruszko, a PhD student from the University of Washington working on the experiment. “We’re not going to the moon. Instead we’re doing these experiments that tell us why the universe is the way it is. It really is the way in which we do discovery these days.”

It’s an exciting time for Majorana. The first of two copper cylinders, a cryostat cooled to minus 320 degrees Fahrenheit, was installed in June and is already gathering basic data. Researchers plan to install the second module by the end of the year, after the specially made copper is machined into parts and assembled. This second module will join the first behind the shield, sliding in on an air-bearing table (read: hoverboard) like a key fits into a lock.

Once completed, the experiment is expected to take data for three to four years. But you can also think of the entire Majorana Demonstrator as a prototype for something bigger. The experiment’s purpose in life is to prove that researchers can sufficiently reduce backgrounds, the noise that mimics the signals they seek, to find what they’re looking for.

What they’re looking for is neutrinoless double beta decay, a type of reaction that would tell us if neutrinos, fundamental particles that rarely interact with anything, could be responsible for our matter-dominated cosmos.

It’s unlikely that the Majorana Demonstrator would see this process directly, says Steve Elliott, the project’s spokesperson and a researcher at Los Alamos National Laboratory. “But of course, you’d always love to see it,” he says.

More likely, Majorana will act as a stepping-stone on the way to a one-ton version of the experiment, which would have sufficient mass and resolution to see the process. Since neutrinoless double beta decay is expected to happen in a single germanium atom less than once every 10^25 years (far, far longer than the lifespan of the universe), more mass is vital. It means more chances to glimpse the phenomenon—if it exists.
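A back-of-the-envelope sketch shows why mass is the lever to pull. The half-life and detector masses below are illustrative placeholders (enrichment and detection efficiency are ignored), not experiment values:

```python
from math import log

# For a half-life far longer than the observation time, the expected number
# of decays per year is roughly N_atoms * ln(2) / T_half. The ~10^25-year
# half-life and the masses used here are illustrative placeholders only.

AVOGADRO = 6.022e23
MOLAR_MASS_GE76_G = 76.0   # g/mol for germanium-76
T_HALF_YEARS = 1e25

def events_per_year(mass_grams):
    n_atoms = mass_grams / MOLAR_MASS_GE76_G * AVOGADRO
    return n_atoms * log(2) / T_HALF_YEARS

for kg in (30, 1000):  # roughly Demonstrator-scale vs ton-scale
    print(f"{kg:5d} kg of Ge-76 -> ~{events_per_year(kg * 1000.0):.0f} decays/year at most")
```

Scaling from tens of kilograms to a ton multiplies the handful of potential signal events by the same factor, which is the whole case for the ton-scale successor.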

For neutrinoless double beta decay to occur, the electrically neutral neutrino must be its own antiparticle, earning it a distinction as something called a Majorana particle. As a germanium atom decayed, two neutrinos would annihilate, and the emitted electrons would carry away all of the energy, which scientists would see as a spike in the data. This type of experiment, taking place with different types of atoms around the world, is the only kind currently able to determine whether neutrinos are Majorana particles. If they are, they could have caused an asymmetry early in the universe’s history that led to the world we see today, allowing matter to win out over antimatter.

Photo by: Matt Kapust, Sanford Lab

Bringing the background to the foreground

To reduce background noise, you have to reduce radiation. Majorana does this in several ways. All of the copper is grown underground in a process called electroforming, which makes about a hair’s width of copper per day. The mile of rock shielding protects the experiment and copper from cosmic rays, making it the purest copper in existence. But while all the rock of the Black Hills cocoons the experiment quite well, it can’t block out everything—and the rock itself contains uranium and thorium that also produce radiation that the lead shield must counter.

The shield, two meters long on each side, is surrounded by more insulation—a nitrogen gas layer to flush out radon, polyethylene to shield against neutrons, a layer of plastic to pick up stray muons. Inside, a copper lining provides extra protection from gamma rays.

Anything that goes within this shield goes through a special cleaning process, and much is specially made. For example, off-the-shelf electrical connectors use springs made from material too radioactive for the project. That means the team had to design springless connectors.

“All these tiny details you would imagine you can brush off become really key, and the whole success of the experiment can hinge on something as stupid as making a plug that works,” Gruszko says. “For us, even fractions of grams of tiny springs inside tiny connectors can be enough to blow our whole background budget.”

The design and construction of such a low-background experiment has been more than 10 years in the making, and groups are already thinking about the larger versions yet to come. Massive physics projects have long timescales and are only getting longer as they grow. Gruszko predicts that within the next decade, there will be a collaboration working on a ton-scale version of the experiment.

Fortunately, Majorana scientists are not working in a vacuum. Other experiments around the world are seeking more information about neutrinos through this search for neutrinoless double beta decay, whether they are using germanium (as in GERDA) or tellurium (CUORE) or xenon (KamLAND-Zen, EXO or XMASS). They all face the challenge of reducing background noise in search for their ultimate signal.

“It’s not just about seeing something,” Gruszko says. “It’s about convincing the community that what you’re seeing is actually the signal that you’re claiming.”


Like what you see? Sign up for a free subscription to symmetry!

by Lauren Biron at August 11, 2015 01:00 PM




Last updated:
August 30, 2015 03:21 AM
All times are UTC.
