Particle Physics Planet


September 20, 2014

Jester - Resonaances

Dark matter or pulsars? AMS hints it's neither.
Yesterday AMS-02 updated their measurement of cosmic-ray positron and electron fluxes. The newly published data extend to positron energies of 500 GeV, compared to 350 GeV in the previous release. The central value of the positron fraction in the highest energy bin is one third of the error bar lower than the central value of the next-to-highest bin. This allows the collaboration to conclude that the positron fraction has a maximum and starts to decrease at high energies :] The sloppy presentation and unnecessary hype obscure the fact that AMS actually found something non-trivial. Namely, it is interesting that the positron fraction, after a sharp rise between 10 and 200 GeV, seems to plateau at higher energies at a value around 15%. This sort of behavior, although not expected by popular models of cosmic ray propagation, was actually predicted a few years ago, well before AMS was launched.

Before I get to the point, let's have a brief summary. In 2008 the PAMELA experiment observed a steep rise of the cosmic ray positron fraction between 10 and 100 GeV. Positrons are routinely produced by scattering of high energy cosmic rays (secondary production), but the rise was not predicted by models of cosmic ray propagation. This prompted speculations about another (primary) source of positrons, ranging from pulsars, supernovae, and other astrophysical objects to dark matter annihilation. The dark matter explanation is unlikely for many reasons. On the theoretical side, the large annihilation cross section required is difficult to achieve, and it is difficult to produce a large flux of positrons without producing an excess of antiprotons at the same time. When theoretical obstacles are overcome by skillful model building, constraints from gamma ray and radio observations disfavor the relevant parameter space. Even if these constraints are dismissed due to large astrophysical uncertainties, the models poorly fit the shape of the electron and positron spectrum observed by PAMELA, AMS, and FERMI (see the addendum of this paper for a recent discussion).
Pulsars, on the other hand, are a plausible but handwaving explanation: we know they are all around and we know they produce electron-positron pairs in the magnetosphere, but we cannot calculate the spectrum from first principles.

But maybe primary positron sources are not needed at all? The old paper by Katz et al. proposes a different approach. Rather than starting with a particular propagation model, it assumes the high-energy positrons observed by PAMELA are secondary, and attempts to deduce from the data the parameters controlling the propagation of cosmic rays. The logic is based on two premises. Firstly, while production of cosmic rays in our galaxy contains many unknowns, the production of different particles is strongly correlated, with the relative ratios depending on nuclear cross sections that are measurable in laboratories. Secondly, different particles propagate in the magnetic field of the galaxy in the same way, depending only on their rigidity (momentum divided by charge). Thus, from an observed flux of one particle, one can predict the production rate of other particles. This approach is quite successful in predicting the cosmic antiproton flux based on the observed boron flux. For positrons, the story is more complicated because of large energy losses (cooling) due to synchrotron and inverse-Compton processes. However, in this case one can do the exercise of computing the positron flux assuming no losses at all. The result corresponds to a positron fraction of roughly 20% above 100 GeV. Since in the real world cooling can only suppress the positron flux, the value computed assuming no cooling represents an upper bound on the positron fraction.

Now, at lower energies, the observed positron flux is a factor of a few below the upper bound. This is already intriguing, as hypothetical primary positrons could in principle have an arbitrary flux, orders of magnitude larger or smaller than this upper bound. The rise observed by PAMELA can be interpreted as meaning that the suppression due to cooling decreases as the positron energy increases. This is not implausible: the suppression depends on the interplay of the cooling time and the mean propagation time of positrons, both of which are unknown functions of energy. Once the cooling time exceeds the propagation time, the suppression factor is completely gone. In that case the positron fraction should saturate the upper limit. This is what seems to be happening at energies of 200-500 GeV probed by AMS, as can be seen in the plot. Already the previous AMS data were consistent with this picture, and the latest update only strengthens it.
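To see how this interplay can produce the observed pattern, here is a toy illustration (not a real propagation code: the 20% ceiling is the number quoted above, while the saturation energy E_star and the power-law index delta are invented purely for illustration) of a secondary positron fraction capped by the no-cooling upper bound and suppressed by cooling at lower energies:

```python
# Toy model of the secondary-production picture described above.
F_MAX = 0.20   # no-cooling upper bound on the positron fraction (~20%), quoted in the text

def positron_fraction(E_GeV, E_star=200.0, delta=0.6):
    """Toy positron fraction: suppressed by roughly (t_cool / t_prop) at low energy,
    saturating at the no-cooling bound once cooling is slower than escape."""
    suppression = min(1.0, (E_GeV / E_star) ** delta)   # E_star, delta: made-up stand-ins
    return F_MAX * suppression

for E in (10, 50, 100, 200, 500):
    print(f"{E:4d} GeV -> positron fraction {positron_fraction(E):.3f}")
```

With these made-up numbers the fraction rises from a few percent at 10 GeV to the 20% ceiling above 200 GeV, which is the qualitative shape described above.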

So, it may be that the mystery of cosmic ray positrons has a simple down-to-galactic-disc explanation. If further observations show the positron flux climbing above the upper limit or dropping suddenly, then the secondary production hypothesis would be invalidated. But, for the moment, the AMS data seem to be consistent with no primary sources, just assuming that the cooling time of positrons is shorter than predicted by the state-of-the-art propagation models. So, instead of dark matter, AMS might have discovered that models of cosmic-ray propagation need a fix. That's less spectacular, but still worthwhile.

Thanks to Kfir for the plot and explanations. 

by Jester (noreply@blogger.com) at September 20, 2014 03:25 AM

September 19, 2014

arXiv blog

The Hidden World of Facebook "Like Farms"

If you pay a “like farm” to generate likes for your Facebook pages, what do you get?

September 19, 2014 10:34 PM

Christian P. Robert - xi'an's og

someone to watch over me [Horfðu á mig]

And yet another roman noir taking place in Iceland! My bedside read over the past two months was “Someone to watch over me” by Yrsa Sigurðardóttir. (It took that long because I was mostly away in July and August, not because the book was boring me to sleep every night!) It is a fairly unusual book in several respects: the setting is an institution for mentally handicapped patients that was set on fire, killing five of the patients as a result; the investigator is an Icelandic lawyer, Þóra Guðmundsdóttir, along with her German unemployed-banker boyfriend; the action takes place at the height [or bottom!] of the Icelandic [and beyond!] economic crisis, when most divorce settlements are about splitting the debts of the household and when replacing a computer becomes an issue; some of the protagonists, including the main suspects, are mentally ill; and the police and justice are strangely absent from most of the story. The book tells a lot about Icelandic society, where a hit-and-run is so unheard of that the police are clueless. Or seem to be. And where people see ghosts. Or think they do, as the author plays (heavily?) on the uncertainty about those ghosts. (At least, there are no elves. Nor trolls.) Definitely more in tune with the “true” Iceland than Available Dark. (Well, as far as I can tell!) The mystery itself is a wee bit stretched and the final resolution slightly disappointing, implying some unlikely behaviour from the major characters. In particular, I do not buy the explanation motivating the arson itself. Terrible cover too. And not a great title in English (Watch me or Look at me would have been better) given the many books, movies and songs with the same title. Nonetheless, I very much liked the overall atmosphere of the book, enough to recommend it.


Filed under: Books, Travel Tagged: autism, Þóra Guðmundsdóttir, financial crisis, Horfðu á mig, Iceland, Reykjavik, Yrsa Sigurðardóttir

by xi'an at September 19, 2014 10:14 PM

CERN Bulletin

Midsummer mysteries: Criminal masterminds? Not really…

In the summer, when offices are empty and the library is full of new faces, it may seem like a perfect opportunity to steal IT equipment. However, as we know, stealing never pays and thieves always get caught. Just like the person who stole several bikes parked in front of Reception…

 

Image: Katarina Anthony.

 As we have said many times: security affects us all. It would seem that the crafty little devil who stole four computers from the library (three privately owned and one belonging to CERN) in July hadn’t read our article. This individual naïvely thought that it would be possible to commit the thefts, sell his ill-gotten gains on the CERN Market and still get away with it.

But he was wrong, as the CERN security service and the IT security service were able to identify the guilty party within just a few days.  “The computers had been stolen over a period of four days but it was obvious to us that the same person was responsible,” explains Didier Constant, Head of the Security Service. “Thanks to the IT security service, we could see that the stolen computers had been connected to the CERN network after they were taken and that they had been put up for sale on the CERN Market.”

The thief’s strategic error was blatantly obvious in this case. However, even when the intentions are clear, it is not always so easy to find proof, especially if the thief tries to defend himself with explanations and alibis like a professional criminal. “The Geneva police helped us a lot,” says Didier Constant. “The person eventually admitted to three of the four thefts. He had probably sold the fourth computer outside CERN.”

Fortunately, the security service is never on holiday: also in July, another person thought he could come to CERN on the tram, help himself to a bike parked near Reception and use it to get away, repeating this process several times. “In total, over three weeks, this person stole about 10 bikes,” explains Didier Constant. “In this case we were able to identify the guilty party from our security cameras and the police had a criminal record for him.”

So there you have two very interesting stories. In both cases, it was thanks to tickets created on the CERN Portal that these crimes could be dealt with by experts in the services concerned and by the police. If you see unusual behaviour or if you are the victim of theft, don’t hesitate to report it.

September 19, 2014 09:09 PM

Lubos Motl - string vacua and pheno

A pro-BICEP2 paper
It's generally expected that the Planck collaboration should present their new results on the CMB polarization data within days, weeks, or a month. Will they be capable of confirming the BICEP2 discovery – or refuting it with convincing data?

Ten days ago, Planck published a paper on dust modelling:
Planck intermediate results. XXIX. All-sky dust modelling with Planck, IRAS, and WISE observations
I am not able to decide whether this paper has anything to say about the discovery of the primordial gravitational waves. It could be relevant but note that the paper doesn't discuss the polarization of the radiation at all.

Perhaps more interestingly, Wesley Colley and Richard Gott released their preprint
Genus Topology and Cross-Correlation of BICEP2 and Planck 353 GHz B-Modes: Further Evidence Favoring Gravity Wave Detection
that seems to claim that the data are powerful enough to confirm some influence of the dust yet defend the notion that the primordial gravitational waves have to represent a big part of the BICEP2 observation, too.




What did they do? Well, they took some new publicly available maps by Planck – those at the frequency 353 GHz (wavelength 849 microns). Recall that the claimed BICEP2 discovery appeared at the frequency 150 GHz (wavelength 2 millimeters).




They assume, hopefully for good reasons, that the dust's contribution to the data should be pretty much the same for these two frequencies, up to an overall normalization. Planck sees a lot of radiation at 353 GHz – if all of it were due to dust, the amount of dust would be enough to account for the whole BICEP2 signal.

However, if this were the case, the signals in the BICEP2 patch of the sky at these two frequencies would have to be almost perfectly correlated with each other. Instead, Colley and Gott find the correlation coefficient to be
\[
15\% \pm 4\%
\]
(does someone understand why the \(\rm\LaTeX\) percent sign has a tilde connecting the upper circle with the slash?), which is "significantly" (four-sigma) different from zero but still decidedly smaller than 100 percent. The fact that this correlation is much smaller than 100% implies that most of the BICEP2 signal is uncorrelated with what is classified as dust by the Planck maps or, almost equivalently, that most of the observations at 353 GHz in the BICEP2 region are due to noise, not dust.

When they quantify all this logic, they conclude that about one-half of the BICEP2 signal is due to dust and the remaining one-half has to be due to the primordial gravitational waves; that's why their preferred \(r\), the tensor-to-scalar ratio, drops from BICEP2's most ambitious \(r=0.2\) to \(r=0.11\pm 0.04\), a value very nicely compatible with chaotic inflation. The "one-half" values aren't known very accurately, but with the error margins they seem to work with, they still find that the value \(r=0\) – i.e. a non-discovery of the primordial gravitational waves – may be excluded at the 2.5-sigma level.
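The logic can be illustrated with a toy Monte Carlo (this is not Colley and Gott's genus-topology analysis; the noise levels and the dust fractions below are assumptions chosen only to show how a modest cross-correlation can still be compatible with a sizable dust contribution when the 353 GHz map is noisy):

```python
# Toy Monte Carlo, NOT the paper's analysis: how the cross-correlation between
# two maps responds to the fraction of a shared "dust" component when one of
# the maps is noise-dominated.  All amplitudes below are assumptions.
import numpy as np

rng = np.random.default_rng(0)
npix = 100_000                            # number of idealized map pixels

dust   = rng.normal(size=npix)            # dust B-mode pattern, common to both maps
gw     = rng.normal(size=npix)            # primordial signal, uncorrelated with dust
noise1 = rng.normal(size=npix)            # BICEP2 noise (unit variance, assumed)
noise2 = 3.0 * rng.normal(size=npix)      # Planck 353 GHz noise (assumed much larger)

planck353 = dust + noise2
for dust_frac in (1.0, 0.5, 0.0):         # fraction of BICEP2 signal power from dust
    bicep2 = np.sqrt(dust_frac) * dust + np.sqrt(1.0 - dust_frac) * gw + noise1
    r = np.corrcoef(bicep2, planck353)[0, 1]
    print(f"dust fraction {dust_frac:.1f} -> cross-correlation {r:.2f}")
```

With these assumed noise levels, even a BICEP2 map made entirely of dust would only correlate with the 353 GHz map at the roughly 20% level, so a measured 15% is compatible with about half of the signal being dust; the actual inference depends on the real noise properties, which is the quantitative step Colley and Gott carry out.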



"Engineers with a diploma" vs "The Big Bang Theory"

by Luboš Motl (noreply@blogger.com) at September 19, 2014 06:53 PM

The Great Beyond - Nature blog

Ephemeral superheavy atoms coaxed into exotic molecules

Posted on behalf of Katharine Sanderson.

If you were ever to get excited about a chemical reaction, now might be the time.

An international team led by Christoph Düllmann at the Johannes Gutenberg University in Mainz, Germany, has managed to make a chemical compound containing the superheavy element seaborgium (Sg) — which has 106 protons in its nuclei — and six carbon monoxide groups.

The resulting molecule, reported on 18 September in Science, could be the start of a new chemical repertoire for the manmade superheavy elements, which do not exist in nature.

These elements are interesting not only to nuclear physicists — who use them to test how many protons they can pack into one nucleus before mutual electrostatic repulsion makes it explode — but also to chemists. The protons’ electrostatic pull on the electrons orbiting the nucleus is stronger in these elements than it is in lighter ones. This means that the electrons whiz around the nucleus at almost 80% the speed of light, a regime where Einstein’s special theory of relativity — which makes particles more massive the faster they get — begins to have a measurable effect. “It changes the whole electronic structure,” says Düllmann, making it different from those of elements that sit directly above the superheavy elements on the periodic table (see ‘Cracks in the periodic table‘).

Some chemists therefore expect superheavy elements to violate the general rule that elements in the same column should have similar electron structures and thus be chemically similar.

It is a brave chemist who attempts chemical reactions with superheavy elements. These cannot be studied with normal ‘wet chemistry’ methods and ordinary bunsen burners because they are made in very small numbers by smashing lighter atoms together, and tend to be extremely unstable, quickly ‘transmuting’ into other elements via radioactive decay. But it can, and has, been done, and researchers have identified fluorides, chlorides and oxides of these elements.

The difference this time is that the chemical reaction was done in a relatively cool environment, and a different kind of chemical bond was formed. Rather than a simple covalent bond, where the metal and the other element share electrons, Düllmann made a compound with a much more sophisticated sharing of electrons in the bond, called a coordination bond.


Nuclei of the superheavy element seaborgium were created from a beam of neon ions (top right) and slowed down in a gas-filled chamber (RTC), where they reacted with carbon monoxide to produce a new kind of molecule.
Credit: P. Huey/Science

Düllmann’s team used the RIKEN Linear Accelerator (RILAC) in Japan to make seaborgium by firing a beam of neon ions (atomic number 10) at a foil of curium (96). This process yielded nuclei of seaborgium-265 — an isotope with a half-life of less than 20 seconds — at a rate of just one every few hours.

The beam also produced nuclei of molybdenum and tungsten, which are in the same column of the periodic table as seaborgium. The team separated the resulting seaborgium, molybdenum and tungsten from the neon using a magnetic field, and sent them into a gas-filled chamber to cool off and react with carbon monoxide. Molybdenum and tungsten are known to form carbonyls (Mo(CO)6 and W(CO)6).

Using a technique called gas chromatography, the team found that the seaborgium formed a compound that was volatile and tended to react with silica, the way its molybdenum- and tungsten-based siblings would. This indirect evidence was enough to convince Düllmann that he had made the first superheavy metal carbonyl (Sg(CO)6). “It was a fantastic feeling,” he says.

In this case the prediction — which the experiment confirmed — was that special relativity would make the molecule behave more like its lighter counterparts than might analogous compounds of different superheavy elements.

In an accompanying commentary, nuclear chemist Walter Loveland of Oregon State University in Corvallis writes that similar techniques could be applied to other superheavy elements from 104 to 109. In particular, the chemistry of element 109 (meitnerium) has never been studied before, he notes.

by Davide Castelvecchi at September 19, 2014 06:34 PM

Lubos Motl - string vacua and pheno

AMS in PRL: the positrons do stop increasing
...but the evidence for an actual drop remains underwhelming...



In April 2013, the Alpha Magnetic Spectrometer (AMS-02), a gadget carried by the International Space Station that looks for dark matter and other things and whose data are being evaluated by Nobel prize winner Sam Ting (MIT) and his folks, reported intriguing observations that were supposed to grow into a smoking gun proving that dark matter exists and is composed of heavy elementary particles:
AMS-02 seems to overcautiously censor solid evidence for dark matter

AMS: the steep drop is very likely there
I had various reasons for these speculative optimistic prophecies – including Sam Ting's body language. It just seemed that he knew more than he was saying and was only presenting a very small, underwhelming part of the observations.




Recall that among these high energy particles, there are both electrons and positrons. The positrons are more exotic and may be produced by pulsars – which is a boring explanation. However, they may also originate from the annihilation of dark matter "WIMP" particles. If that's so, dark matter particle physics predicts that the positron fraction increases as you increase the energy of the electrons and positrons. But at some moment, when the energy reaches a few hundred \(\GeV\) or so, the positron fraction should stop growing and drop steeply afterwards.




Was that observed in 2013? Has it been observed by now? Finally, today, AMS-02 published a new paper in prestigious PRL:
High Statistics Measurement of the Positron Fraction in Primary Cosmic Rays of \(0.5\)–\(500\GeV\) with the Alpha Magnetic Spectrometer on the International Space Station (PRL)

CERN story, CERN press release + PDF supplement, copy at interactions.org, APS, NBC, Symmetry Magazine
The PRL abstract says that, for the first time, they observe the positron fraction's increase with energy coming to a stop, approximately at \(200\GeV\), although e.g. interactions.org puts the place at \(275\pm 32\GeV\). Moreover, the derivative of the number of positrons with respect to the energy exceeds the same derivative for electrons around dozens of \(\GeV\), which makes it more likely that these lower but still high-energy positrons indeed directly originate from a high-energy source and not from deceleration.



While the claim about the end of the increase of the positron fraction agrees with graphs like those above (other graphs show the positron fraction stabilized at \(0.15\) between \(190\) and \(430\GeV\) or so), I find the "end of the increase" or a "potentially emerging decrease" tantalizing but still unspectacularly weak and inconclusive. Indeed, the "straight decrease" itself still seems to be unsupported. Even if we had been shown these graphs in April 2013 – and we were shown a bit less than that – I would have thought that Sam Ting's hype was probably a bit excessive.

Just to be sure, the behavior in the graphs is compatible with a (below) \(1\TeV\) dark matter particle like a neutralino (supersymmetry's most convincing dark matter candidate), and indeed, I tend to think that this is what actually exists and will emerge at the LHC, too. Incidentally, some sources tell us that the LHC is back to business after the upgrade. That's a bit of an exaggeration of the actual ongoing "business", but let's hope that in April 2015, the \(13\TeV\) and perhaps \(14\TeV\) collisions will start smoothly and abruptly.



SUSY and related scenarios predict a positron fraction as a function of energy that looks like the red graph above (two values of the neutralino mass are depicted). Up to a certain energy, it looks just like what AMS-02 has already shown us – which is good news for WIMP and/or SUSY – but we still haven't seen the dramatic drop yet. Of course, it's conceivable that Ting et al. are still hiding something they already have – and maybe have had already in April 2013. Maybe the hiding game is needed for the continued funding of their experiment. But this is just another speculation.



The neverending story that takes place at the ISS is described in this musical video clip.

by Luboš Motl (noreply@blogger.com) at September 19, 2014 06:08 PM

Emily Lakdawalla - The Planetary Society Blog

More jets from Rosetta's comet!
Another lovely view of comet Churyumov-Gerasimenko contains jets. Bonus: Emily explains how to use a flat field to rid these glorious Rosetta NavCam images of faint stripes and specks.

September 19, 2014 06:07 PM

CERN Bulletin

Administrative Circular No. 11 (Rev. 3) - Categories of members of the personnel

Administrative Circular No. 11 (Rev. 3) entitled “Categories of members of the personnel”, approved by the Director-General following discussion at the Standing Concertation Committee meeting of 3 July 2014 and entering into force on 1 September 2014, is available on the intranet site of the Human Resources Department:

This circular is applicable to all members of the personnel.

It cancels and replaces Administrative Circular No. 11 (Rev. 2) entitled “Categories of members of the personnel” of January 2013.

The circular was revised in order to include a minor adjustment of the determination of required period of break in the payment of subsistence allowance to certain categories of associated members of the personnel (taking account of possible technical means of control). Furthermore, the possibility of traineeships of long duration was restricted to cases in which the traineeship is awarded pursuant to an agreement between CERN and a funding agency on a national or international level.

Department Head Office
HR Department 

September 19, 2014 04:09 PM

astrobites - astro-ph reader's digest

Apply to be an Astronomy Ambassador


Do you think outreach is an important part of our jobs as scientists? We hope so! Do you want to learn more about how to effectively communicate science with the public and work with people who can teach you how to have a real impact in your community?

The AAS Astronomy Ambassadors program is now accepting applications for a workshop at this winter’s AAS meeting! We’ve discussed the Astronomy Ambassadors program before: check out Zack’s argument about why this program is so important, and if you’d like to learn more about what to expect, you can read Meredith’s and Allison’s accounts of the things they learned when they participated in the inaugural workshop.

The Ambassadors program is particularly seeking early-career astronomers who are interested in outreach but not necessarily experienced. So, don’t be discouraged if you haven’t had many opportunities to share science with public audiences. This workshop is perfect for you.

Read the official invitation from the AAS below, and click here to apply!

 

Astronomy Ambassadors Workshop for Early-Career AAS Members

The AAS Astronomy Ambassadors program supports early-career AAS members with training in resources and techniques for effective outreach to K-12 students, families, and the public. The next AAS Astronomy Ambassadors workshop will be held 3-4 January 2015 at the 225th AAS meeting in Seattle, Washington. Workshop participants will learn to communicate more effectively with public and school audiences; find outreach opportunities and establish ongoing partnerships with local schools, museums, parks, and/or community centers; reach audiences with personal stories, hands-on activities, and jargon-free language; identify strategies and techniques to improve their presentation skills; gain access to a menu of outreach resources that work in a variety of settings; and become part of an active community of astronomers who do outreach.

Participation in the program includes a few hours of pre-workshop online activities to help us get to know your needs; the two-day workshop, for which lunches and up to 2 nights’ lodging will be provided; and certification as an AAS Astronomy Ambassador, once you have logged three successful outreach events. The workshop includes presenters from the American Astronomical Society, the Astronomical Society of the Pacific, and the Pacific Science Center.

The number of participants is limited, and the application requires consent from your department chair. We invite applications from graduate students, postdocs and new faculty in their first two years after receipt of their PhD, and advanced undergraduates doing research and committed to continuing in astronomy. Early-career astronomers who are interested in doing outreach, but who haven’t done much yet, are encouraged to apply; we will have sessions appropriate for both those who have done some outreach already and those just starting their outreach adventures. We especially encourage applications from members of groups that are presently underrepresented in science.

Please complete the online application form by Monday, 20 October 2014.

by Astrobites at September 19, 2014 03:41 PM

Sean Carroll - Preposterous Universe

How Much Cosmic Inflation Probably Occurred?

Nothing focuses the mind like a hanging, and nothing focuses the science like an unexpected experimental result. The BICEP2 claimed discovery of gravitational waves in the cosmic microwave background — although we still don’t know whether it will hold up — has prompted cosmologists to think hard about the possibility that inflation happened at a very high energy scale. The BICEP2 paper has over 600 citations already, or more than 3/day since it was released. And hey, I’m a cosmologist! (At times.) So I am as susceptible to being prompted as anyone.

Cosmic inflation, a period of super-fast accelerated expansion in the early universe, was initially invented to help explain why the universe is so flat and smooth. (Whether this is a good motivation is another issue, one I hope to talk about soon.) In order to address these problems, the universe has to inflate by a sufficiently large amount. In particular, we have to have enough inflation so that the universe expands by a factor of more than 10^22, which is about e^50. Since physicists think in exponentials and logarithms, we usually just say “inflation needs to last for over 50 e-folds” for short.

So Grant Remmen, a grad student here at Caltech, and I have been considering a pretty obvious question to ask: if we assume that there was cosmic inflation, how much inflation do we actually expect to have occurred? In other words, given a certain inflationary model (some set of fields and potential energies), is it most likely that we get lots and lots of inflation, or would it be more likely to get just a little bit? Everyone who invents inflationary models is careful enough to check that their proposals allow for sufficient inflation, but that’s a bit different from asking whether it’s likely.

The result of our cogitations appeared on arxiv recently:

How Many e-Folds Should We Expect from High-Scale Inflation?
Grant N. Remmen, Sean M. Carroll

We address the issue of how many e-folds we would naturally expect if inflation occurred at an energy scale of order 10^16 GeV. We use the canonical measure on trajectories in classical phase space, specialized to the case of flat universes with a single scalar field. While there is no exact analytic expression for the measure, we are able to derive conditions that determine its behavior. For a quadratic potential V(ϕ) = m^2 ϕ^2/2 with m = 2×10^13 GeV and cutoff at M_Pl = 2.4×10^18 GeV, we find an expectation value of 2×10^10 e-folds on the set of FRW trajectories. For cosine inflation V(ϕ) = Λ^4 [1 − cos(ϕ/f)] with f = 1.5×10^19 GeV, we find that the expected total number of e-folds is 50, which would just satisfy the observed requirements of our own Universe; if f is larger, more than 50 e-folds are generically attained. We conclude that one should expect a large amount of inflation in large-field models and more limited inflation in small-field (hilltop) scenarios.

As should be evident, this builds on the previous paper Grant and I wrote about cosmological attractors. We have a technique for finding a measure on the space of cosmological histories, so it is natural to apply that measure to different versions of inflation. The result tells us — at least as far as the classical dynamics of inflation are concerned — how much inflation one would “naturally” expect in a given model.

The results were interesting. For definiteness we looked at two specific simple models: quadratic inflation, where the potential for the inflaton ϕ is simply a parabola, and cosine (or “natural“) inflation, where the potential is — wait for it — a cosine. There are many models one might consider (one recent paper looks at 193 possible versions of inflation), but we weren’t trying to be comprehensive, merely illustrative. And these are two nice examples of two different kinds of potentials: “large-field” models where the potential grows without bound (or at least until you reach the Planck scale), and “small-field” models where the inflaton can sit near the top of a hill.


Think for a moment about how much inflation can occur (rather than “probably does”) in these models. Remember that the inflaton field acts just like a ball rolling down a hill, except that there is an effective “friction” from the expansion of the universe. That friction becomes greater as the expansion rate is higher, which for inflation happens when the field is high on the potential. So in the quadratic case, even though the slope of the potential (and therefore the force pushing the field downwards) grows quite large when the field is high up, the friction is also very large, and it’s actually the increased friction that dominates in this case. So the field rolls slowly (and the universe inflates) at large values, while inflation stops at smaller values. But there is a cutoff, since we can’t let the potential grow larger than the Planck scale. So the quadratic model allows for a large but finite amount of inflation.

The cosine model, on the other hand, allows for a potentially infinite amount of inflation. That’s because the potential has a maximum at the top of the hill. In principle, there is a trajectory where the field simply sits there and inflates forever. Slightly more realistically, there are other trajectories that start (with zero velocity) very close to the top of the hill, and take an arbitrarily long time to roll down. There are also, of course, trajectories that have a substantial velocity near the top of the hill, which would inflate for a relatively short period of time. (Inflation only happens when the energy density is mostly in the form of the potential itself, with a minimal contribution from the kinetic energy caused by the field velocity.) So there is an interesting tradeoff, and we would like to know which effect wins.
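As a rough cross-check of the last two paragraphs, here is a minimal slow-roll sketch (the standard e-fold integral N = ∫ V/(M_Pl^2 V') dφ, not the canonical-measure calculation of the paper; the parameter values are the ones quoted in the abstract above, and the starting points are arbitrary illustrations) of how many e-folds each potential yields:

```python
# Slow-roll e-fold counts for the two potentials discussed above.  Illustrative
# sketch only: it shows how much inflation CAN occur from a given starting field
# value, not the measure-weighted expectation computed in the paper.
import numpy as np
from scipy.integrate import quad

MPL = 2.4e18   # reduced Planck mass [GeV]
M   = 2.0e13   # quadratic-model mass [GeV] (value from the abstract)
F   = 1.5e19   # cosine-model decay constant [GeV] (value from the abstract)

def efolds(V, dV, phi_start, phi_end):
    """N = (1/MPL^2) * integral of V/V' dphi, rolling from phi_start down to phi_end."""
    integrand = lambda phi: V(phi) / (MPL**2 * dV(phi))
    N, _ = quad(integrand, phi_end, phi_start)
    return N

# Quadratic potential V = m^2 phi^2 / 2: slow roll ends near phi ~ sqrt(2)*MPL,
# and the field can start at most where V hits the Planck-scale cutoff MPL^4.
V_quad  = lambda p: 0.5 * M**2 * p**2
dV_quad = lambda p: M**2 * p
phi_cutoff = np.sqrt(2.0) * MPL**2 / M
print(f"quadratic, from the Planck cutoff: "
      f"{efolds(V_quad, dV_quad, phi_cutoff, np.sqrt(2.0) * MPL):.2e} e-folds")

# Cosine potential V = Lambda^4 [1 - cos(phi/f)]: the overall Lambda^4 cancels in
# V/V', and the e-fold count blows up as the start approaches the hilltop at
# phi = pi*f, so everything hinges on the initial condition.
V_cos  = lambda p: 1.0 - np.cos(p / F)
dV_cos = lambda p: np.sin(p / F) / F
for frac in (0.90, 0.99, 0.999):
    N = efolds(V_cos, dV_cos, frac * np.pi * F, 0.1 * F)
    print(f"cosine, starting {frac:.3f} of the way to the hilltop: {N:.0f} e-folds")
```

The quadratic case gives billions of e-folds from a Planck-scale starting point, while the cosine case gives of order a hundred unless the field starts exquisitely close to the top of the hill, in line with the answer described next.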

The answer that Grant and I derived is: in the quadratic potential, we generically expect a huge amount of inflation, while in the cosine potential, we expect barely enough (fairly close to the hoped-for 50 e-folds). Although you can in principle inflate forever near a hilltop, such behavior is non-generic; you really need to fine-tune the initial conditions to make it happen. In the quadratic potential, by contrast, getting a buttload of inflation is no problem at all.

Of course, this entire analysis is done using the classical measure on trajectories through phase space. The actual world is not classical, nor is there any strong reason to expect that the initial conditions for the universe are randomly chosen with respect to the Liouville measure. (Indeed, we’re pretty sure that they are not.)

So this study has a certain aspect of looking-under-the-lamppost. We consider the naturalness of cosmological histories with respect to the conserved measure on classical phase space because that’s what we can do. If we had a finished theory of quantum gravity and knew the wave function of the universe, we would just look at that.

We’re not looking for blatant contradictions with data, we’re looking for clues that can help us move forward. The way to interpret our result is to say that, if the universe has a field with a potential like the quadratic inflation model (and the initial conditions are sufficiently smooth to allow inflation at all), then it’s completely natural to get more than enough inflation. If we have the cosine potential, on the other hand, then getting enough inflation is possible, but far from a sure thing. Which might be very good news — it’s generally thought that getting precisely “just enough” inflation is annoyingly fine-tuned, but here it seems quite plausible. That might suggest that we could observe remnants of the pre-inflationary universe at very large scales today.

by Sean Carroll at September 19, 2014 02:51 PM

CERN Bulletin

CERN Road Race | 1 October
The 2014 edition of the annual CERN Road Race will be held on Wednesday 1 October at 18:15.   The 5.5 km race takes place over 3 laps of a 1.8 km circuit in the West Area of the Meyrin site, and is open to everyone working at CERN and their families. There are runners of all speeds, with times ranging from under 17 to over 34 minutes, and the race is run on a handicap basis, by staggering the starting times so that (in theory) all runners finish together. Children (< 15 years) have their own race over 1 lap of 1.8 km. As usual, there will be a “best family” challenge (judged on best parent + best child). Trophies are awarded in the usual men’s, women’s and veterans’ categories, and there is a challenge for the best age/performance. Every adult will receive a souvenir prize, financed by a registration fee of 10 CHF. Children enter for free and each child will receive a medal. More information, and the online entry form, can be found here.

by Klaus Hanke at September 19, 2014 01:45 PM

CERN Bulletin

Exhibition | CERN Micro Club | 1-30 September
The CERN Micro Club (CMC) is organising an exhibition looking back on the origins of the personal computer, also known as the micro-computer, to mark the 60th anniversary of CERN and the club’s own 30th anniversary.
CERN, Building 567, rooms R-021 and R-029
01.09.2014 - 30.09.2014, from 4.00 to 6.00 p.m.
The exhibition will be held in the club’s premises (Building 567, rooms R-021 and R-029) and will be open Mondays to Thursdays from 1 to 30 September 2014. Come and admire, touch and use makes and models that disappeared from the market many years ago, such as Atari, Commodore, Olivetti, DEC, IBM and Apple II and III, all in good working order and installed with applications and games from the period. Club members will be on hand to tell you about these early computers, which had memories of just a few kilobytes, whereas those of modern computers can reach several gigabytes or even terabytes.

September 19, 2014 01:25 PM

Peter Coles - In the Dark

The Athenian Option Revisited

I have to admit that I didn’t stay up to watch the results come in from the referendum on Scottish independence, primarily because I knew I had a very busy morning ahead of me and needed an early night. Not eligible to vote myself, I did toy with the option of having a bet on the outcome, but the odds on the “no” outcome I thought more likely were 9-1 on, so it was hardly worth a flutter at all. The opinion polls may have had difficulty getting this one right, but I generally trust the bookies’ assessment.

Anyway, to summarize the outcome:

  • “No” obtained a mark of 55%, which corresponds to a solid II.2 with no need for resits.
  • “Yes” obtained a mark of 45%, which is a Third Class result, but may claim extenuating circumstances or request another attempt.

Sorry about that. I guess I’ve been doing too many examination boards these days…

On balance, I’m glad that Scotland voted “no” but I don’t think it would have been that much of a big deal in the long run had they decided otherwise. There might have been some short-term difficulties but we’d all have survived. In the end what matters is that this whole exercise was run democratically and the issue was settled by voting rather than fighting, which is what would have happened in the not-too-distant past.

The aftermath of the vote against Scottish secession has been dominated by talk of greater devolution of powers not only to Scotland but also to Wales and even the English regions. One striking thing about the referendum was the high turnout (by British standards) of around 85 per cent, which contrasts strongly with the dismal rate of participation in, e.g., the recent European elections. In the light of all this I thought I’d resurrect an idea I’ve blogged about before.

Some time ago I read a very interesting and provocative little book called The Athenian Option, which offers a radical vision of how to renew Britain’s democracy.

The context within which this book was written was the need to reform Britain’s unelected second chamber, the House of Lords. The authors of the book, Anthony Barnett and Peter Carty, were proposing a way to do this even before Tony Blair’s New Labour party came to power in 1997, promising to reform the House of Lords in its manifesto. Despite being well into its third Parliament, New Labour hasn’t done much about it yet, and has even failed to offer any real proposals. Although it has removed voting rights from the hereditary peers, the result of this is that the House of Lords is still stuffed full of people appointed by the government.

The need for reform is now greater than ever. In recent times, we have seen dramatically increasing disillusionment with the political establishment, which has handed out billions of pounds of tax payers’ money to the profligate banking sector, causing a ballooning public debt, followed by savage cuts in public spending with consequent reductions in jobs and services.

Meanwhile, starting under New Labour, the culture of cronyism led to the creation of a myriad of pointless quangos doing their best to strangle the entire country with red tape. Although Gordon Brown stated in 2004 that he was going to reduce bureaucracy, the number of civil servants in the UK grew by about 12% (from 465,700 to 522,930) between 2004 and 2009. If the amount of bureaucracy within the British university system is anything to go by, the burden of the constant processes of evaluation, assessment and justification is out of all proportion to what useful stuff actually gets done. This started in the Thatcher era with Conservative governments who viewed the public services as a kind of enemy within, to be suspected, regulated and subdued. However, there’s no denying that it has got worse in recent years.

There is an even more sinister side to all this, in the steady erosion of civil liberties through increased clandestine surveillance, detention without trial and the rest of the paraphernalia of paranoid government. Big Brother isn’t as far off as we’d all like to think.

The furore over MPs’ expenses led to further disgust with the behaviour of our elected representatives, many of whom seem to be more interested in lining their own pockets than in carrying out their duties as our elected representatives.

The fact is that the political establishment has become so remote from its original goal of serving the people that it is now regarded with near-total contempt by a large fraction of the population. Politics now primarily serves itself and, of course, big business. It needs to be forced to become more accountable to ordinary people. This is why I think the suggestion of radical reform along the lines suggested by Barnett and Carty is not only interesting, but something like it is absolutely essential if we are to survive as a democracy.

What they propose is to abolish the House of Lords as the Second Chamber, and replace it with a kind of jury selected by lottery from the population in much the same way that juries are selected for the crown courts except that they would be much larger, of order a thousand people or so.  This is called the Athenian Option because in ancient Athens all citizens could vote (although I should add that in ancient Athens there were about 5000 citizens and about 100,000 slaves, and women couldn’t vote even if they weren’t slaves, so the name isn’t all that appropriate).

Selection of representatives from the electoral roll would be quite straightforward to achieve.  Service should be mandatory, but the composition of the Second Chamber could be refreshed sufficiently frequently that participation should not be too onerous for any individual. It may even be possible for the jury not to have to attend a physical `house’ anyway. They could vote by telephone or internet, although safeguards would be needed to prevent fraud or coercion. It would indeed probably be better if each member of the panel voted independently and in secret anyway.

The central body of government would continue to be a representative Parliament similar to the current House of Commons. The role of the jury would be  limited to voting on legislation sent to it by the House of Commons, which would continue to be elected by a General Election as it is at present. Laws passed by the Commons could not become law unless approved by the juries.

Turnout at British general elections has been falling steadily over the past two decades. Apathy has increased  because the parliamentary machine has become detached from its roots. If nothing is done to bring it back under popular control, extremist parties like the British National Party will thrive and the threat to our democracy will grow further.

The creation of regional assemblies in Wales, Scotland and Northern Ireland has not been as successful as it might have been because it has resulted not in more democracy, but in more politicians. The Welsh Assembly, for example, has little real power, but has fancy offices and big salaries for its members and we have it as well as Westminster and the local Councils.

We also have a European Parliament, again with very little real power but with its own stock of overpaid and self-important politicians elected by the tiny fraction of the electorate that bothers to vote.

My solution to this mess would be to disband the regional assemblies and create regional juries in their place. No legislation would be enacted in Wales unless passed by the Welsh jury, likewise elsewhere.

To be consistent, the replacement House of Lords should be an English jury, although perhaps there could be regional structures within England too. We would therefore have one representative house, The House of Commons, and regional juries for Wales, Scotland, England (possibly more than one) and Northern Ireland. This would create a much more symmetrical structure for the governance of the United Kingdom, putting an end to such idiocies as the West Lothian Question.

Of course many details would need to be worked out, but it seems to me that this proposal makes a lot of sense. It retains the political party system in the House of Commons where legislation would be debated and amended before being sent to the popular juries. The new system would, however, be vastly cheaper than our current system. It would be much fairer and more democratic. It would make the system of government more accountable, and it would give citizens a greater sense of participation in and responsibility for the United Kingdom’s political culture. Politics is too important to be left to politicians.

On the other hand, in order to set it up we would need entire sections of the current political structure to vote themselves out of existence. Since they’re doing very nicely out of the current arrangements, I think change is unlikely to be forthcoming through the usual channels. Turkeys won’t vote for Christmas.

Anyone care for a revolution?

 


by telescoper at September 19, 2014 01:00 PM

CERN Bulletin

Women’s rugby tournament | 27 September
Women’s rugby tournament
Saint-Genis rugby pitch - Golf des Serves
27 September 2014 - 10 a.m.
For the third consecutive year, the women's rugby club of CERN Meyrin St Genis, The Wildcats, are organising a women’s 7's rugby tournament. With the support of the Office Municipal des Sports of St Genis-Pouilly and various other sponsors, we will be welcoming 10 teams ready to fight it out for victory! Bring your family and friends for a great day of rugby! Come and discover the values of team spirit in rugby and support your local team (RC CMSG). An initiation for kids between 4 and 10 years old will be organised by school rugby trainers. There will also be a live music concert. Food and drink will be available all day.
Concert schedule:
6 p.m.: Bad spirits out of the boot
7 p.m.: SoundHazard
8 p.m.: Miss Proper & the Moving Targets
9 p.m.: Fuzzy Dunlop
More information on: http://www.facebook.com/events/509236532536269/

September 19, 2014 12:58 PM

Christian P. Robert - xi'an's og

an ISBA tee-shirt?!

Sonia Petrone announced today at BAYSM’14 that a competition was open for the design of an official ISBA tee-shirt! The deadline is October 15 and the designs are to be sent to Clara Grazian, currently at CEREMADE, Université Dauphine [that should be enough to guess her email!]. I will most certainly submit my mug design. And maybe find enough free time to design a fake Eleven Paris moustache tee-shirt. With Bayes’ [presumed] portrait of course…


Filed under: Kids, pictures, University life Tagged: BAYSM 2014, ISBA, Markov chain, mug, tee-shirt, werewolf

by xi'an at September 19, 2014 12:18 PM

The Great Beyond - Nature blog

UN Security Council says Ebola is security threat

The Ebola outbreak in West Africa is “a threat to international peace and security”, the United Nations (UN) Security Council said on 18 September, in a resolution calling for a massive increase in the resources devoted to stemming the virus’s spread.


Centers for Disease Control and Prevention

The council is asking countries to send supplies and medical personnel to Liberia, Guinea and Sierra Leone, and seeks to loosen travel restrictions that have hampered outbreak response in those countries. The unusual resolution was co-sponsored by 131 nations and approved at the first emergency council meeting organized in response to a health crisis.

More than 5,300 people are thought to have been infected with Ebola during the current epidemic, and more than 2,600 have died, according to the World Health Organization (WHO) in Geneva, Switzerland.

The pace of the disease’s spread seems to be increasing, with the number of Ebola cases now doubling every three weeks, UN secretary general Ban Ki-moon told the council. “The gravity and scale of the situation now require a level of international action unprecedented for a health emergency,” he said.

WHO director-general Margaret Chan sounded a similarly dire warning. “This is like the greatest peace-time challenge that the United Nations and its agencies have ever faced,” Chan told the security council.

The UN estimates that an effective response to the Ebola outbreak will cost nearly US$1 billion, double the $490 million figure put forth by the WHO on 28 August. The United States has promised a major influx of resources, with US President Barack Obama announcing on 16 September that he would send 3,000 military personnel and spend roughly $750 million to aid the Ebola fight.

by Lauren Morello at September 19, 2014 01:56 AM

September 18, 2014

The Great Beyond - Nature blog

US Congress approves stopgap funding bill

The US Congress may consider approving a final 2015 budget in November.

Architect of the Capitol

The US Senate passed a stopgap spending bill on 18 September that includes US$88 million to fight the Ebola outbreak in West Africa.

The bill, endorsed by the House of Representatives on 17 September, now heads to US President Barack Obama, who is expected to sign it into law. The legislation would fund government operations from 1 October — when the 2015 fiscal year begins — until 11 December.

Under the plan, the US Centers for Disease Control and Prevention would receive $30 million to send more health workers and resources to countries affected by the Ebola outbreak; the agency said earlier this week that it has roughly 100 personnel in Africa working on Ebola response. The Biomedical Advanced Research and Development Authority would receive $58 million to fund the development of the promising antibody cocktail known as ZMapp, made by Mapp Pharmaceutical in San Diego, California, and two vaccines against Ebola produced by the US National Institutes of Health and NewLink Genetics of Ames, Iowa.

The funding is a small fraction of the 3,000 military personnel and roughly $750 million that Obama has committed to the Ebola fight. The disease is thought to have infected more than 5,300 people and has killed more than 2,600, according to the World Health Organization in Geneva, Switzerland.

The temporary funding measure would essentially hold US agencies’ budgets flat at 2014 levels. A more permanent 2015 spending plan will have to wait until Congress returns to work after the federal election on 4 November.

Below are the funding levels that key US science agencies received in 2014, plus the funding levels proposed in 2015 House and Senate spending bills, and the estimated fiscal 2015 funding included in the stopgap measure approved on 18 September.

Agency | 2014 funding level | 2015 House proposal | 2015 Senate proposal | 2015 stopgap measure (estimated)
National Institutes of Health | 30,003 | N/A | N/A | 30,003
Centers for Disease Control and Prevention | 5,882 | N/A | N/A | 5,912
Food and Drug Administration | 2,640* | 2,574 | 2,588 | 2,640
National Science Foundation | 7,172 | 7,404 | 7,255 | 7,172
NASA (science) | 5,151 | 5,193 | 5,200 | 5,151
Department of Energy Office of Science | 5,066 | 5,071 | N/A | 5,066
Environmental Protection Agency | 8,200 | 7,483 | N/A | 8,200
National Oceanic and Atmospheric Administration | 5,314 | 5,325 | 5,420 | 5,314
US Geological Survey | 1,032 | 1,035 | N/A | 1,032

(All figures in US$ millions.)

* Includes one-time transfer of $79 million in user fees.

Additional reporting by Sara Reardon. 

by Lauren Morello at September 18, 2014 10:37 PM

Emily Lakdawalla - The Planetary Society Blog

NASA's Global Reach: Pakistan
Nagin Cox, a systems engineer and manager at the Jet Propulsion Laboratory currently working on the mission operations team for Curiosity, tells us about a trip she took to Pakistan as an ambassador for science and technology.

September 18, 2014 03:19 PM

Symmetrybreaking - Fermilab/SLAC

Pursuit of dark matter progresses at AMS

A possible sign of dark matter will eventually become clear, according to promising signs from the Alpha Magnetic Spectrometer experiment.

New results from the Alpha Magnetic Spectrometer experiment show that a possible sign of dark matter is within scientists’ reach.

Dark matter is a form of matter that neither emits nor absorbs light. Scientists think it is about five times as prevalent as regular matter, but so far have observed it only indirectly.

The AMS experiment, which is secured to the side of the International Space Station 250 miles above Earth, studies cosmic rays, high-energy particles in space. A small fraction of these particles may have their origin in the collisions of dark matter particles that permeate our galaxy. Thus it may be possible that dark matter can be detected through measurements of cosmic rays.

AMS scientists—based at the AMS control center at CERN research center in Europe and at collaborating institutions worldwide—compare the amount of matter and antimatter cosmic rays of different energies their detector picks up in space. AMS has collected information about 54 billion cosmic ray events, of which scientists have analyzed 41 billion.

Theorists predict that at higher and higher energies, the proportion of antimatter particles called positrons should drop in comparison to the proportion of electrons. AMS found this to be true.

However, in 2013 it also found that beyond a certain energy—8 billion electronvolts—the proportion of positrons begins to climb steeply.

“This means there’s something new there,” says AMS leader and Nobel Laureate Sam Ting of the Massachusetts Institute of Technology and CERN. “It’s totally unexpected.”

The excess was a clear sign of an additional source of positrons. That source might be an astronomical object we already know about, such as a pulsar. But the positrons could also be produced in collisions of particles of dark matter.

Today, Ting announced AMS had discovered the other end of this uptick in positrons—an indication that the experiment will eventually be able to discern what likely caused it.

“Scientists have been measuring this ratio since 1964,” says Jim Siegrist, associate director of the US Department of Energy’s Office of High-Energy Physics, which funded the construction of AMS. “This is the first time anyone has observed this turning point.”

The AMS experiment found that the proportion of positrons begins to drop off again at around 275 billion electronvolts.

The energy that comes out of a particle collision must be equal to the amount that goes into it, and mass is related to energy. The energies of positrons made in dark matter particle collisions would therefore be limited by the mass of dark matter particles. If dark matter particles of a certain mass are responsible for the excess positrons, those extra positrons should drop off rather suddenly at an energy corresponding to the dark matter particle mass.
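A back-of-the-envelope version of that argument, with a purely hypothetical dark matter mass chosen only to match the turnover energy quoted above:

```python
# Illustrative arithmetic only; the mass below is hypothetical, not an AMS result.
M_DM_GEV = 275.0                       # assumed dark matter particle mass [GeV]

# Two dark matter particles annihilating nearly at rest release ~2*M of energy.
total_energy = 2.0 * M_DM_GEV

# In the simplest channel (DM pair -> e+ e-), the positron and electron share
# that energy equally, so each carries away at most roughly one particle mass.
max_positron_energy = total_energy / 2.0
print(f"positron excess should fade out near {max_positron_energy:.0f} GeV")
```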

If the numbers of positrons at higher energies do decrease suddenly, the rate at which they do it can give scientists more clues as to what kind of particles caused the increase in the first place. “Different particles give you different curves,” Ting says. “With more statistics in a few years, we will know how quickly it goes down.”

If they decrease gradually instead, it is more likely they were produced by something else, such as pulsars.

To gain a clearer picture, AMS scientists have begun to collect data about another matter-antimatter pair—protons and antiprotons—which pulsars do not produce.

The 7.5-ton AMS experiment was able to make these unprecedented measurements due to its location on the International Space Station, above the interference of Earth’s atmosphere.

“It’s really profound to me, the fact that we’re getting this fundamental data,” says NASA Chief Scientist Ellen Stofan, who recently visited the AMS control center. “Once we understand it, it could change how we see the universe.”

AMS scientists also announced today that the positron fraction increased steadily within the area of interest, between 8 and 257 GeV, with no sudden peaks. Such jolts could have indicated that the cause of the positron proliferation was sources other than, or in addition to, dark matter.

In addition, AMS discovered that positrons and electrons act very differently at different energies, but that, when combined, the fluxes of the two together unexpectedly seem to fit into a single, straight slope.

“This just shows how little we know about space,” Ting says.

Fifteen countries from Europe, Asia and America participated in the construction of AMS. The collaboration works closely with a management team at NASA’s Johnson Space Center. NASA carried AMS to the International Space Station on the final mission of the space shuttle Endeavour in 2011.

 


by Kathryn Jepsen at September 18, 2014 02:20 PM

arXiv blog

Nanotechnologists Discover How to Carve Tunnels Beneath the Surface of Silicon Chips

A new technique for creating pipes and tunnels deep inside silicon chips could change the way engineers make microfluidic machines and optoelectronic devices.

September 18, 2014 02:00 PM

Christian P. Robert - xi'an's og

up [and down] Pöstlingberg


Early this morning, following my Linz guests’ advice, I went running towards the top of Pöstlingberg, a hill 250m above Linz and the Danube river. A perfect beacon that avoided wrong turns and extra mileage, but still a wee climb on a steep path for the last part. The reward of the view from the top was definitely worth the [mild] effort and I even had enough time to enjoy a good Austrian breakfast before my ABC talk.



Filed under: Mountains, pictures, Running, Travel, University life Tagged: Austria, breakfast, Danube, Donau, IFAS, Linz, mountain running, Pöstlingberg, seminar, sunrise

by xi'an at September 18, 2014 12:18 PM

Peter Coles - In the Dark

Scotland Small?

Scotland small? Our multiform, our infinite Scotland _small_?
Only as a patch of hillside may be a cliche corner
To a fool who cries “Nothing but heather!” Where in September another
Sitting there and resting and gazing around
Sees not only heather but blaeberries
With bright green leaves and leaves already turned scarlet,
Hiding ripe blue berries; and amongst the sage-green leaves
Of the bog-myrtle the golden flowers of the tormentil shining;
And on the small bare places, where the little Blackface sheep
Found grazing, milkworts blue as summer skies;
And down in neglected peat-hags, not worked
In living memory, sphagnum moss in pastel shades
Of yellow, green and pink; sundew and butterwort
And nodding harebells vying in their colour
With the blue butterflies that poise themselves delicately upon them,
And stunted rowans with harsh dry leaves of glorious colour
“Nothing but heather!” — How marvellously descriptive! And incomplete!

 

by Hugh MacDiarmid (1892-1978)


by telescoper at September 18, 2014 12:01 PM

Jacques Distler - Musings

Mrs. Adler Lies!

Such a sweet looking old lady…

Mrs. Adler's Gefilte Fish label: 21 piece - institutional pack

Above is the label from a can of gefilte fish that we bought for the holidays. A large can, to be sure, but not (as I discovered, upon opening it) a can containing 21 pieces of gefilte fish. Nor 20 pieces. 14 pieces of gefilte fish … in a can labeled as containing 21.

Wars have started over more trivial affronts.

by distler (distler@golem.ph.utexas.edu) at September 18, 2014 01:15 AM

Jacques Distler - Musings

Halloween 2013

It’s Halloween, again. Time for another pumpkin.

Wendy Davis pumpkin
Run, Wendy, run!

by distler (distler@golem.ph.utexas.edu) at September 18, 2014 01:14 AM

Jacques Distler - Musings

Audiophilia

Humans are hard-wired to find patterns.

Even when there are none.

Explaining those patterns (at least, the ones which are real) is what science is all about. But, even there, lie pitfalls. Have you really controlled for all of the variables which might have led to the result?

A certain audiophile and journalist posted a pair of files, containing a ~43sec clip of music, and challenged his readers to see if they could hear the difference between them. Sure enough, “File A” sounds a touch brighter than “File B.” A lively discussion ensued, before he revealed the “reason” for the difference:

  • File B was recorded, from his turntable, via a straight-through cable.
  • File A was recorded, from his turntable, via a cable that passed through a switchbox.

Somehow or other, the switchbox (or the associated cabling) was responsible for the added brightness. An even more lively discussion ensued.

Now, if you open up the two files in Audacity, you discover something interesting: File A gets from the beginning of the musical clip to the same point at the end of the clip 43 milliseconds faster than File B does. That’s a 0.1% difference in speed (and hence pitch) of the recording. Such a difference, while too small to be directly discernible as a change in pitch, ought to be clearly perceptible as “brighter.”

On the other hand, I would contend that putting a switchbox in the signal path cannot possibly cause the information, traveling down the wire, to have shorter duration. The far more likely explanation was that there was a 0.1% variation in the speed of Mr. Fremer’s turntable between the two recordings.

I used Audacity’s “Change speed …” effect to slow down File A (by -0.099 percent), clipped the result (so that the two files have exactly the same duration), and posted them below.
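
For readers who prefer the command line, here is a minimal sketch of an equivalent correction done outside Audacity, using scipy to resample one clip to the length of the other. The filenames are hypothetical placeholders; this is not the workflow described above, just the same idea in code.

```python
# Minimal sketch (assumed filenames, not the Audacity workflow above): undo a
# ~0.1% speed difference by resampling the fast clip to the slow clip's length.
import numpy as np
from scipy.io import wavfile
from scipy.signal import resample

rate_a, a = wavfile.read("file_a.wav")   # the clip that runs ~0.1% fast
rate_b, b = wavfile.read("file_b.wav")   # the reference clip

# Stretching A to B's sample count and playing it back at the same sample rate
# slows it down (and lowers its pitch) by the same fraction.
a_slowed = resample(a.astype(float), len(b), axis=0)
a_slowed = np.clip(a_slowed, -32768, 32767).astype(np.int16)

wavfile.write("file_a_corrected.wav", rate_a, a_slowed)
print("speed change applied: %.3f%%" % (100.0 * (len(a) - len(b)) / len(b)))
```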

See if you can tell which one is Mr. Fremer’s File A, and which is his File B, and, more importantly, whether you can detect a difference in the brightness (or edginess or whatever other audiophiliac descriptions you’d like to attach to them), now that the speed difference has been corrected.

Update: (2/9/2014)

As I said in the comments, and as anyone who opens up the files in Audacity immediately discovers, it is easy to tell which file is which, by examining their waveforms. Here are Files “B” (upper) and “D” (lower)

Waveforms of Files B and D

(click on any image for a larger view). It’s not obvious, from this picture, that they’re the same (bear with me, about that). But, if we think they are, it’s easy enough to check.

  1. First, zoom in as far as you can.
  2. Then use the time-shift tool, to align the two waveforms precisely, at some point in the clip. It’s best to focus on a segment with some high frequency (rapidly varying) content. Fortunately, the pops and clicks on Mr. Fremer’s vinyl record give us plenty to choose from.
  3. Now select one of the wave-forms, and choose “Effect” → “Invert”.
  4. Finally, select both and choose “Tracks” → “Mix and Render”.

If we’re correct, the two traces should precisely cancel each other out.
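
The same null test can also be scripted; here is a rough sketch in Python (filenames and the alignment offset are stand-ins, since the real alignment above was done by eye in Audacity):

```python
# Rough sketch of the same null test (hypothetical filenames; the alignment
# offset stands in for the by-eye alignment done in Audacity above).
import numpy as np
from scipy.io import wavfile

rate, x = wavfile.read("file_b.wav")
_,    y = wavfile.read("file_d.wav")
x, y = x.astype(float), y.astype(float)

offset = 0                                   # samples to shift y by before mixing
n = min(len(x), len(y) - offset)
residual = x[:n] + (-y[offset:offset + n])   # invert one track, then mix

rms_in  = np.sqrt(np.mean(x[:n] ** 2))
rms_out = np.sqrt(np.mean(residual ** 2))
print("residual RMS is %.2f%% of the original" % (100.0 * rms_out / rms_in))
# A residual near zero means the two files are the same underlying recording.
```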

Exact cancellation Files B and D

And they do (except for the little bits at the beginning and end that I trimmed in creating File “D”). Note that I magnified the vertical scale by a factor of 5, so that you can really see the perfect cancellation.

You can repeat the same procedure for Files “B” and “C” but, no matter how carefully you perform step 2, you can never get anything close to perfect cancellation. So, without even listening to the files, Mr. Fremer (after he got over his initial misapprehension that “C” and “D” were the same) was able to confidently determine which was which.

In a way, that’s rather disappointing, because it doesn’t really tell us anything about how different “C” and “D” are, and whether fixing the speed of the former diminished, in any way, the perceptible differences between them. As Mr. Fremer says,

As to how the two files sound, I didn’t have time last night to listen but will do so today. Of course I know which is which so I’m not sure what my result might prove.

But, since we have Audacity fired up, let’s see what the story is.

Though I said that, for the clip as a whole, there’s no way to line up the tracks so as to achieve cancellation, on a short-enough timescale you can get very good (though, of course, not perfect) cancellation. The cancellation doesn’t persist – the tracks wander in and out of phase with each other, due (at least in part, if not in toto) to the wow-and-flutter of Mr. Fremer’s turntable.

Here’s an example (chosen because Mr. Fremer seemed to think that there’s a flagrant disparity in the waveforms, right in the middle of this excerpt).

1/4 second excerpt from Files C and D.
File C (upper) and D (lower), from 32.750s to 33.000s.

Following the procedure outlined above, we align the two clips at the center, and attempt to cancel the waveforms:

1/4 second excerpt from Files C and D.

They cancel very well at the center, but progressively poorly as you move to either end, where the two tracks wander out-of-phase. Notice the sharp spikes. If you zoom in, you’ll notice that these are actually S-shaped: they’re the result of superposing two musical peaks (one of which we inverted, of course) that have gotten slightly out-of-phase with each other. They cancel at the center, but not at either end, where they have ceased to overlap. Of course, not just the peaks but everything else has also gone out-of-phase, so these S-shaped spikes sit on top of an incomprehensible hash.

You can repeat the process for other short excerpts, with similar-looking results.

Now, “Andy”, below, says he heard a systematic difference between the files, similar to what others reported for Fremer’s Files “A” and “B.” That bears further investigation. But I’ve said to Mr. Fremer that, if he really wants to get to the bottom of what differences, if any, the cables contributed to these recordings, it would be best to eliminate the wow-and-flutter that is clearly responsible for most (if not all) of the visible differences displayed here.

He should start with a digital source (like, say, one of the 24/96 FLAC files we’ve been discussing), played back through the two different cables he wants to test. That source, at least, won’t vary (in a random and uncontrollable fashion) from one playback to the next.

Repeatability is another one of those things that we strive for in Science.


In fact, it is that wandering in-and-out of phase that is the most glaring difference between the files and it (rather than the more sophisticated procedure that I outlined above) is what makes it trivial, for even a casual observer, to pick out which file is which.

by distler (distler@golem.ph.utexas.edu) at September 18, 2014 01:14 AM

Jacques Distler - Musings

Golem V

For nearly 20 years, Golem has been the machine on my desk. It’s been my mail server, web server, file server, … ; it’s run Mathematica and TeX and compiled software for me. Of course, it hasn’t been the same physical machine all these years. Like Doctor Who, it’s gone through several reincarnations.

Alas, word came down from the Provost that all “servers” must move (physically or virtually) to the University Data Center. And, bewilderingly, the machine on my desk counted as a “server.”

Obviously, a 27” iMac wasn’t going to make such a move. And, equally obviously, it would have been rather difficult to replace/migrate all of the stuff I have running on the current Golem. So we had to go out shopping for Golem V. The iMac stayed on my desk; the machine that moved to the Data Center is a new Mac Mini.

The new Mac Mini
side view
Golem V, all labeled and ready to go
  • 2.3 GHz quad-core Intel Core i7 (8 logical cores, via hyperthreading)
  • 16 GB RAM
  • 480 GB SSD (main drive)
  • 1 TB HD (Time Machine backup)
  • 1 TB external HD (CCC clone of the main drive)
  • Dual 1 Gigabit Ethernet Adapters, bonded via LACP

In addition to the dual network interface, it (along with, I gather, a rack full of other Mac Minis) is plugged into an ATS (automatic transfer switch), to take advantage of the dual redundant power supply at the Data Center.

Not as convenient, for me, as having it on my desk, but I’m sure the new Golem will enjoy the austere hum of the Data Center much better than the messy cacophony of my office.


I did get a tour of the Data Center out of the deal. Two things stood out for me.

  1. Most UPSs involve large banks of lead-acid batteries. The UPSs at the University Data Center use flywheels. They comprise a long row of refrigerator-sized cabinets which give off a persistent hum due to the humongous flywheels rotating in vacuum within.
  2. The server cabinets are painted the standard generic white. But, for the networking cabinets, the University went to some expense to get them custom-painted … burnt orange.
Custom paint job on the networking cabinets.

by distler (distler@golem.ph.utexas.edu) at September 18, 2014 01:10 AM

Jacques Distler - Musings

Bringing the Web to America

It has long been my conviction that anything appearing on the Wall Street Journal’s Editorial/Op-Ed pages is a lie. In fact, if there’s a paragraph appearing on those pages, in which you can’t spot an evident falsehood or obfuscation, then the problem is that you haven’t studied the topic, at hand, in sufficient depth.

On that note, it comes as no surprise that we “learn” [via Kevin Drum] that the Internet was the creation of private industry (specifically, Xerox PARC), not some nasty Government agency (DARPA). Nor is it surprising that the author of the book about PARC, on which the claims of the WSJ Op-Ed were based, promptly took to the pages of the LA Times to debunk each and every paragraph. (See also Vint Cerf: “I would happily fertilize my tomatoes with Crovitz’ assertion.”)

Which leaves me little to do, but post a copy of this lecture, from 1999, by Paul Kunz of SLAC. The video quality is really bad, but this is (to my knowledge) the only extant copy. He tells a bit of the pre-history of the internet, and the role high energy physicists played.

As Michael Hiltzik alluded to, in his LA Times piece, AT&T (and, more relevant for Kunz’s story, the European Telecoms) were dead-set against the internet, and did everything they could to smother it in its cradle. High energy physicists (who were, in turn, funded by …) played a surprising role in defeating them. (And yes, unsurprisingly, Al Gore makes a significant appearance towards the end.)

Enjoy ….


Paul Kunz: Bringing the Web to America

And now you know the answer to the trivia question: “What was the first website outside of Europe?”

Update:

For those unfamiliar with how this all works, Gordon Crovitz, the author of the hilariously wrong column in question, is the former publisher of the Wall Street Journal. And the column, itself, is now endlessly echoed and repeated in the wingnutosphere.

by distler (distler@golem.ph.utexas.edu) at September 18, 2014 12:57 AM

September 17, 2014

Peter Coles - In the Dark

Market Garden

I’m just back to Brighton after a meeting in London so I hope you will excuse me for my brevity on this occasion. On the other hand I feel obliged to note an important anniversary.

Seventy years ago today, on 17th September 1944, the largest airborne operation in military history began. Operation Market Garden (as it was called) saw about 35,000 Allied troops dropped by parachute or landed in gliders behind German lines in Holland, with the aim of seizing key bridges in order to allow infantry and armoured divisions to advance, eventually into Germany. Of more immediate tactical importance was that the capture of the northernmost bridges over the Rhine at Arnhem would prevent German reinforcements from moving south to confront the advancing troops and armoured vehicles of XXX Corps, whose job was to punch a hole in the German defences and link up with the airborne troops.

 

Operation Market Garden – 82nd Airborne near Grave

Motivated by the belief that German armies in the West were exhausted and on the brink of collapse as well as the desire if possible to finish the war before Christmas, Operation Market Garden was daring and imaginative, but began to unravel right from the outset and ended as a disastrous failure, with the loss of many lives.

I’m not a military historian, so am not competent to add anything significant to the huge amount that has been written about what went wrong, but I will add a personal note. A cousin of my Grandfather flew to Arnhem with the 1st British Airborne division whose job was to take and hold the bridges over the Rhine that would open the door to an invasion of Germany. Sadly, he was one of those many troops who never even made it to their objective. In fact he was dead before he even hit the ground; his unit was dropped virtually on top of heavily armed German forces and had no chance of defending themselves. I had always been told that he had been dropped by parachute, but the records at the cemetery revealed that was wrong; he was on a glider which was badly shot up during its approach.

In fact the action at Arnhem involved two bridges, one a railway bridge at Oosterbeek and the other a road bridge in Arnhem itself. British paratroopers did manage to capture one end of the road bridge, but never succeeded in securing both ends of the structure. Cut off from the much larger force pinned down near their landing zones they were eventually forced to surrender simply because they had run out of ammunition. The other units that landed near Arnhem never made their objectives and had to dig in and hope for reinforcements that never came. They fought a brave but desperate defensive action until 25th September when some were successfully evacuated across the Rhine. The original battle orders had specified they were to hold their ground for 48 hours until relieved by armour and infantry advancing from the South.

Some years ago, after attending a conference in Leiden, I took time out to visit Oosterbeek cemetery, where 1,437 soldiers lie buried. Such was the chaos at Arnhem that bodies of fallen soldiers are still being discovered in gardens and woods: there were so many dead that there was only time to bury them in shallow graves where they had fallen. As remains are discovered they are removed and reburied in Oosterbeek. When I visited the cemetery about 20 years ago, there were several brand new graves.

The local people looked on in horror as their potential liberators were cut down. It must have been deeply traumatizing for them. I think it is telling that when, in 1969, the British Army proposed bringing to an end the annual ceremonies in commemoration of these events, local Dutch civilians insisted that they continue.

As I stood by the grave I couldn’t help thinking of how lucky members of my generation are that we have not been called on to make such a sacrifice. The failure of Operation Market Garden had other terrible consequences. The winter of 1944/45 was a bitter one for Dutch civilians in the part of their country that had not been liberated, with many thousands dying from hunger and cold.

And of course had the Allies succeeded in penetrating into Germany in 1944 the post-war map of Europe would probably have been very different. Had Market Garden been successful would there have been 45 years of Cold War?

 


by telescoper at September 17, 2014 06:44 PM

Emily Lakdawalla - The Planetary Society Blog

NASA Kicks Off a Private Space Race Between Boeing and SpaceX
Boeing and SpaceX have won multi-billion dollar contracts to ferry astronauts to the International Space Station.

September 17, 2014 05:55 PM

Symmetrybreaking - Fermilab/SLAC

XKCD creator answers ‘What if?’

Randall Munroe, author of the webcomic xkcd, has found another outlet for his inquisitive nature.

Relentless curiosity is the driving force behind Internet phenomenon Randall Munroe’s new book, What If? Serious Scientific Answers To Absurd Hypothetical Questions.

Munroe, a former NASA roboticist with an undergraduate degree in physics, is known for drawing xkcd, stick figure comics that cover a range of topics including math, coding and physics. The illustrated book, released on September 2, answers reader-submitted questions about imaginary scenarios that spark Munroe’s interest.

“When people ask me these questions, I get really curious about the answer,” Munroe said during a Google Hangout with fans on September 12. “At the end of the day, the thing that really drives me is when someone asks the question and I can’t stop wondering about the answer.”

Munroe said he receives more inquiries than he can read individually. This torrent of far-fetched wonderings also feeds a weekly series on the xkcd website.

Since its release earlier this month, What If? has reached number one on Amazon’s bestseller list and topped the New York Times combined print and e-book nonfiction category.

During an hour-long video chat from Google’s headquarters in Mountain View, California, Munroe answered questions from online fans and host Hank Green of the “vlogbrothers” YouTube channel.

One example of a typical What If? question: How close would you need to be to a supernova to get a lethal dose of neutrino radiation? Munroe pointed out that although supernovae emit a large number of neutrinos that could interact with your DNA, if you got close enough to a supernova you would be vaporized before neutrinos became an issue.

It’s hard to give a real sense of how ghostly neutrinos are to someone who is unfamiliar with the topic, Munroe said. “The idea of having them interact with you at all is unlikely.”

Other topics covered during the discussion ranged from how large a mole of moles would be (they would form a planet, then heat up and erupt in volcanoes) to constraints on skyscraper size (primarily elevators, wind and money).

“I’ve found that some of the very best questions are definitely the ones from little kids,” Munroe said. “I think adults try to make the question really clever and try to bake in a bunch of crazy consequences.”

He also added that readers sometimes submit questions simply to try to get Munroe to do their homework. But such unimaginative queries are unlikely to find a response.

“That’s my basic gauge: Do I want to know the answer to this?” Munroe said. “Is it something that I don’t know already but I would like to know?”

 


by Amanda Solliday at September 17, 2014 05:00 PM

The Great Beyond - Nature blog

Ebola economic impacts to hit US$359 million in 2014

The Ebola outbreak in West Africa is not only devastating the lives of thousands of people in Liberia, Guinea and Sierra Leone, but it is devastating the economies in those countries as well.

The outbreak is expected to halve economic growth this year in Guinea and Liberia, and reduce growth by 30% in Sierra Leone, according to a 17 September report from the World Bank. It estimates that economic damage across the three countries will total US$359 million in 2014. If the world does not respond quickly with money and resources to halt Ebola’s spread, this impact could grow eight-fold next year, warns the report — the first quantitative estimate of the outbreak’s economic impact.

The United Nations’ 16 September Ebola response plan estimates that the cost of immediate response to the crisis will be close to $1 billion, double the $495 million called for by the World Health Organization on 28 August. This estimate will only continue to increase if other countries do not contribute to the response soon, World Bank Group President Jim Yong Kim said in a phone conference with reporters.

In the long term, the World Bank imagines two scenarios for Ebola’s economic impact: a ‘low Ebola’ scenario in which the outbreak is rapidly contained within the three affected countries, and a ‘high Ebola’ scenario in which it goes unchecked until well into 2015. Under the latter scenario, the economies of Guinea, Liberia and Sierra Leone would suffer significantly; Liberia could lose as much as 12% of its gross domestic product in 2015, the analysis says, thus reducing the country’s growth rate from 6.8% to –4.9%.

Agriculture and mining are the sectors worst hit, along with manufacturing and construction.

“There are two kinds of contagion,” Kim said. “One is related to the virus itself and the other is related to the spread of fear about the virus.” Health-care costs and illness from the virus itself contribute little to the economic impact, the report found. Rather, 80–90% of the economic effects are due to the “fear factor” that shuts down transportation systems, including ports and airports, and keeps people away from their jobs.

The exact number of Ebola cases, for which estimates are constantly changing, is not relevant to the economic model that the World Bank developed, Kim said. “What really matters is how quickly we scale up the response so that we can address the entire number of cases. If we get an effective response on the ground in the next few months, we can blunt the vast majority, 80–90%, of the economic impact,” he added. If this does not happen and the epidemic spreads to other countries such as Nigeria, Ghana and Senegal, Kim cautioned, the ultimate economic hit from this outbreak could reach “many billions”.

Kim also announced a new effort to develop a “universal protocol” for Ebola treatment. Paul Farmer, a physician and global-health expert at Harvard University in Cambridge, Massachusetts; Anthony Fauci, director of the US National Institute for Allergy and Infectious Disease in Bethesda, Maryland; and several non-governmental organizations such as Médecins Sans Frontières will work on the protocol for the World Health Organization to adopt to ensure that all health-care workers will be trained to treat Ebola in the same way.

Such protocols have been crucial in improving management of diseases such as tuberculosis, Kim said. For Ebola, measures that are likely to be part of the protocol include simple steps such as isolation of patients and hydration, which can greatly improve survival.

by Sara Reardon at September 17, 2014 04:55 PM

Emily Lakdawalla - The Planetary Society Blog

Comet Siding Spring Mars encounter: One Mars Express plan becomes two
The Mars Express Flight Control Team at ESOC have been actively preparing for the flyby of comet C/2013 A1/Siding Spring on October 19. Initial estimates gave the possibility that Mars Express might be hit by 2 or 3 high-speed particles. Happily, additional observations by ground and space telescopes have shown the risk to be much lower – and perhaps even as low as zero. In today's blog post, the team explain how this (happy!) real-life, real-time development is affecting their preparations for fly-by.

September 17, 2014 03:44 PM

Quantum Diaries

Calm before the storm: Preparing for LHC Run2

It’s been a relatively quiet summer here at CERN, but now as the leaves begin changing color and the next data-taking period draws nearer, physicists on the LHC experiments are wrapping up their first-run analyses and turning their attention towards the next data-taking period. “Run2”, expected to start in the spring of 2015, will be the biggest achievement yet for particle physics, with the LHC reaching a higher collision energy than has ever been produced in a laboratory before.

As someone who was here before the start of Run1, the vibe around CERN feels subtly different. In 2008, while the ambitious first-year physics program of ATLAS and CMS was quite broad in scope, the Higgs prospects were certainly the focus. Debates (and even some bets) about when we would find the Higgs boson – or even if we would find it – cropped up all over CERN, and the buzz of excitement could be felt from meeting rooms to cafeteria lunch tables.

Countless hours were also spent in speculation about what it would mean for the field if we *didn’t* find the elusive particle that had evaded discovery for so long, but it was Higgs-centric discussion nonetheless. If the Higgs boson did exist, the LHC was designed to find this missing piece of the Standard Model, so we knew we were eventually going to get our answer one way or another.


Slowly but surely, the Higgs boson emerged in Run1 data. (via CERN)

Now, more than two years after the Higgs discovery and armed with a more complete picture of the Standard Model, attention is turning to the new physics that may lie beyond it. The LHC is a discovery machine, and was built with the hope of finding much more than predicted Standard Model processes. Big questions are being asked with more tenacity in the wake of the Higgs discovery: Will we find supersymmetry? Will we understand the nature of dark matter? Is the lack of “naturalness” in the Standard Model a fundamental problem or just the way things are?

The feeling of preparedness is different this time around as well. In 2008, besides the data collected in preliminary cosmic muon runs used to commission the detector, we could only rely on simulation to prepare the early analyses, which left a bit of skepticism about how much we could trust our pre-run physics and performance expectations. This, compounded with the LHC quenching incident after the first week of beam on September 19, 2008, which destroyed over 30 superconducting magnets and delayed collisions until the end of 2009, meant that no one knew what to expect.


Expect the unexpected…unless it’s a cat.

Fast forward to 2014: we have an increased sense of confidence stemming from our Run1 experience, having put our experiments to the test all the way from data acquisition to event reconstruction to physics analysis to publication…done at a speed which surpassed even our own expectations. We know to what extent we can rely on the simulation, and know how to measure the performance of our detectors.

We also have a better idea of what our current analysis limitations are, and have been spending this LHC shutdown period working to improve them. Working meeting agendas, usually with the words “Run2 Kick-off” or “Task Force” in the title, have been filled with discussions of how we will handle data in 2015, with what precision we can measure objects in the detector, and what our early analysis priorities should be.

The Run1 dataset was also used as a dress rehearsal for future runs, where for example, many searches employed novel techniques to reconstruct highly boosted final states often predicted in new physics scenarios. The aptly-named BOOST conference recently held at UCL this past August highlighted some of the most state-of-the-art tools currently being developed by both theorists and experimentalists in order to extend the discovery reach for new fundamental particles further into the multi-TeV region.

Even prior to Run1, we knew that such new techniques would have to be validated in data in order to convince ourselves they would work, especially in the presence of extreme pileup (i.e., multiple, less-interesting interactions in the proton bunches we send around the LHC ring…a side effect of increased luminosity). While the pileup conditions in 7 and 8 TeV data were only a taste of what we’ll see in Run2 and beyond, Run1 gave us the opportunity to try out these new techniques in data.


One of the first ever boosted hadronic top candidate events recorded in the ATLAS detector, where all three decay products (denoted by red circles) can be found inside a single large jet, denoted by a green circle. (via ATLAS)

Conversations around CERN these days sound similar to those we heard before the start of Run1…what if we discover something new, or what if we don’t, and what will that mean for the field of particle physics? Except this time, the prospect of not finding anything is less exciting. The Standard Model Higgs boson was expected to be in a certain energy range accessible at the LHC, and if it was excluded it would have been a major revelation.

There are plenty of well-motivated theoretical models (such as supersymmetry) that predict new interactions to emerge around the TeV scale, but in principle there may not be anything new to discover at all until the GUT scale. This dearth of any known physics processes spanning a range of orders of magnitude in energy is often referred to as the “electroweak desert.”


Physicists taking first steps out into the electroweak desert will still need their caffeine. (via Dan Piraro)

Particle physics is entering a new era. Was the discovery of the Higgs just the beginning, and is there something unexpected to find in the new data? Or will we be left disappointed? Either way, the LHC and its experiments struggled through the growing pains of Run1 to produce one of the greatest discoveries of the 21st century, and if new physics is produced in the collisions of Run2, we’ll be ready to find it.

by Emily Thompson at September 17, 2014 03:29 PM

The Great Beyond - Nature blog

Prime numbers, black carbon and nanomaterials win 2014 MacArthur ‘genius grants’

Yitang Zhang, a mathematician who recently emerged from obscurity when he partly solved a long-standing puzzle in number theory, is one of the 2014 fellows of the John D. & Catherine T. MacArthur Foundation.

The awards, commonly known as ‘genius grants’, were announced on 17 September. Each comes with a no-strings-attached US$625,000 stipend paid out over five years.

Zhang, a mathematician at the University of New Hampshire in Durham, was honored for his work on prime numbers, whole numbers that are divisible only by 1 or themselves. In April 2013 he published a partial solution to a 2,300-year-old question: how many ‘twin primes’ — or pairs of prime numbers separated by two, such as 41 and 43 — exist.

The twin-prime conjecture, often attributed to the Greek mathematician Euclid of Alexandria, posits that there is an infinite number of such pairs. But mathematicians have not been able to prove that the conjecture is true.

Zhang’s work has narrowed the problem, however. In his 2013 proof, Zhang showed that there are infinitely many prime pairs that are less than 70 million units apart.
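
To make the "bounded gaps" statement concrete, here is a small illustrative script (it has nothing to do with Zhang's actual proof technique) that sieves the primes below a chosen limit and counts how often consecutive primes differ by exactly two:

```python
# Illustrative only (unrelated to Zhang's proof technique): sieve the primes
# below a limit and count consecutive pairs that differ by exactly two.
def primes_up_to(n):
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i in range(n + 1) if sieve[i]]

ps = primes_up_to(100_000)
gaps = [q - p for p, q in zip(ps, ps[1:])]
print("twin prime pairs below 100000:", sum(g == 2 for g in gaps))
print("largest gap between consecutive primes below 100000:", max(gaps))
```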

Other science and maths-related winners of this year’s fellowships are listed below.

Danielle Bassett, a physicist at the University of Pennsylvania in Philadelphia, studies the organizational principles at work in the brain, and how connections within the organ change over time and under stress. Her research, which draws on network science, has revealed that people with more ‘flexible’ brains — those that can easily make new connections — are better at learning new information.

Tami Bond, an environmental engineer at the University of Illinois, Urbana-Champaign, studies the effects of sooty ‘black carbon’ on climate and human health. Bond, who led the most comprehensive study to date of black carbon’s environmental effects, has found that the pollutant is second only to carbon dioxide in terms of its warming impact.

Jennifer Eberhardt, a social psychologist at Stanford University in California, studies the effects of racial bias on the criminal-justice system in the United States. Her analyses have shown, for example, that black defendants with stereotypical ‘black’ features are more likely to receive the death penalty in cases where victims are white.

Craig Gentry, a computer scientist at the IBM Thomas J. Watson Research Center in Yorktown Heights, New York, has shown that encrypted data can be manipulated without being decrypted, and that programs themselves can be encrypted and still function.

Mark Hersam, a materials scientist at Northwestern University in Evanston, Illinois, is developing nanomaterials for a range of uses, such as solar cells and batteries, information technology and biotechnology.

Pamela Long, an historian of science based in Washington DC, has examined intersections between the arts and sciences and issues of authorship and intellectual property. She is now at work on a book tracing the development of engineering in 16th-century Rome.

Jacob Lurie, a mathematician at Harvard University in Cambridge, Massachusetts, studies derived algebraic geometry. “With an entire generation of young theorists currently being trained on Lurie’s new foundations, his greatest impact is yet to come,” the MacArthur Foundation said in its award announcement. In June, Lurie was named a winner of the inaugural $3-million Breakthrough Prize in Mathematics.

 

by Lauren Morello at September 17, 2014 02:16 PM

Lubos Motl - string vacua and pheno

Ambulance-chasing Large Hadron Collider collisions
Guest blog by Ben Allanach on the impure fun of rapid-response physics
B.A. is a professor of theoretical physics at the University of Cambridge. He is a supersymmetry enthusiast, and is always looking for ways to interpret data using it. You can watch his TEDx talk giving some background to the LHC, supersymmetry and dark matter, or (for experts) look at the paper that this blog refers to.
“Ambulance chasing” refers to the morally dubious practice of lawyers chasing down accident victims in order to help them sue. In a physics context, when some recent data disagrees with the Standard Model of particle physics and researchers come up with an interpretation in terms of new physics, they are called ambulance chasers too. This is probably because some view the practice as a little glory-grabbing and somehow impure: you’re not solving problems purely using your mind (you’re using data as well), and even worse than that, you’ve had to be quick or other researchers might have been able to produce something similar before you. It’s not that the complainers get really upset, more that they can be a bit sniffy (and others are just taking the piss in a fun way). I’ve been ambulance chasing some data just recently with collaborators, and we’ve been having a great time. These projects are short, snappy and intense. You work long hours for a short period, playing ping-pong with the draft in the final stages while you quickly write the work up as a short scientific paper.



A couple of weeks ago, the CMS experiment released an analysis of some data (TRF) that piqued our interest because it had a small disagreement with Standard Model predictions. In order to look for interesting effects, CMS sieved the data in the following way: they required either an electron and an anti-electron or a muon and an anti-muon. Electrons and muons are called `leptons’ collectively. They also required two jets (sprays of strongly interacting particles) and some apparent missing energy. We’ve known for years that maybe you could find supersymmetry with this kind of sieving. The jets and leptons could come from the production of supersymmetric particles which decay into them and a supersymmetric dark matter particle. So if you find too many of these type of collisions compared to Standard Model predictions, it could be due to supersymmetric particle production.




The ‘missing energy’ under the supersymmetry hypothesis would be due to a supersymmetric dark matter particle that does not interact with the detector, and steals off momentum and energy from the collision. Some ordinary Standard Model type physics can produce collisions that pass through the sieve: for example top, anti-top production. But top anti-top production will give electron anti-muon production with the same probability as electron anti-electron production. So, to account for the dominant background (i.e. ordinary collisions that we are less interested in but that get through the sieve still), the experiment does something clever: they subtract off the electron anti-muon collisions from the electron anti-electron collisions.
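
As a toy illustration of that subtraction (the numbers below are invented, not the CMS data, and the channel labels are simplified), the background estimate amounts to a bin-by-bin difference of two histograms:

```python
# Toy sketch of the flavor-subtraction idea described above. The event counts
# are invented for illustration; they are not the CMS measurement.
import numpy as np

rng = np.random.default_rng(0)
bins = np.linspace(20, 300, 15)                  # dilepton invariant mass bins, GeV

same_flavor = rng.poisson(50, len(bins) - 1)     # e+e- (and mu+mu-) events per bin
oppo_flavor = rng.poisson(45, len(bins) - 1)     # e-mu events per bin

# Top pair production feeds both selections, so subtracting the opposite-flavor
# counts removes that background and leaves any flavor-correlated excess.
excess = same_flavor - oppo_flavor
uncert = np.sqrt(same_flavor + oppo_flavor)      # simple Poisson error estimate

for lo, hi, ex, err in zip(bins[:-1], bins[1:], excess, uncert):
    print("%5.0f-%5.0f GeV: excess = %+3d +/- %4.1f" % (lo, hi, ex, err))
```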




The picture below shows the number of collisions that passed through the sieve depending upon the invariant mass of the lepton pair. The big peak is expected and is due to production of a Z-boson. But toward the left-hand side of the plot, you can see that there are a few too many observed events with low invariant masses, compared to the “background” prediction. We’ve interpreted this excess with our supersymmetric particle production hypothesis.



Plot of the dilepton mass distribution from CMS

For those in the know, this is a “2.6 sigma” discrepancy in the rate of production of the type of collisions that CMS had sieved. The number of sigma tells you how unlikely the data is to have come from your model (in this case, the Standard Model). The greater the number of sigma, the more unlikely. 2.6 sigma means that, if you had performed a hundred LHC experiments with identical conditions, the measurement would only have such a large discrepancy once, on average, assuming that the Standard Model is the correct theory of nature. At this point, it’s easy to make it sound like the signal is definitely a discovery. The trouble is, though, that the experiments look at hundreds upon hundreds of measurements, so some of them will come up as discrepant as 2.6 sigma and of course those are the ones you notice. So no one can claim that this is a discovery. Perhaps it will just disappear in new data, because it was one of those chance fluctuations (we’ve seen several like this disappear before). But perhaps it will stick around and get stronger, and that’s the possibility that we are excited about.
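
For the curious, the conversion from a number of sigma to the "once in so many experiments" statement above is just a Gaussian tail probability; a quick sketch (whether you quote the one-sided or two-sided tail is a convention):

```python
# How rare a >= 2.6 sigma Gaussian fluctuation is by chance, in both the
# one-sided and two-sided conventions (the "once per hundred experiments"
# quoted above corresponds to the two-sided number).
from scipy.stats import norm

sigma = 2.6
p_two = 2 * norm.sf(sigma)     # P(|x| > 2.6) for a standard normal
p_one = norm.sf(sigma)         # P( x  > 2.6)
print("two-sided: p = %.4f (about 1 in %d)" % (p_two, round(1 / p_two)))
print("one-sided: p = %.4f (about 1 in %d)" % (p_one, round(1 / p_one)))
```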

When you do this kind of project, the first thing is to check and see if your hypothesis is ruled out by other data, in which case it’s dead in the water before it can get swimming. After that, the question is: does your hypothesis make any other predictions that can be tested? For instance, we’ve been suggesting how the experiment can take another look at their own data to check our hypothesis (there should also be an obvious excess in the events if you plot them against another variable: `jet di-lepton invariant mass’). And we’ve been making predictions of our hypothesis for the prospects of detecting supersymmetry in Run II next year.

You can be sniffy about our kind of ambulance chasing for a variety of reasons - one of them is that it might be a waste of time because it’s “only a 2.6 sigma effect”. There is an obvious response to this: it’s better to work on a 2.6 sigma signal than a 0.0 sigma one.

by Luboš Motl (noreply@blogger.com) at September 17, 2014 01:22 PM

Tommaso Dorigo - Scientificblogging

John Ellis On The Ascent Of The Standard Model
Being at CERN for a couple of weeks, I could not refrain from following yesterday's talks in the Main Auditorium, which celebrated the 90th birthday of Herwig Schopper, who directed CERN in the crucial years of the LEP construction.

A talk I found most enjoyable was John Ellis'. He gave an overview of the historical context preceding the decision to build LEP, and then a summary of the incredible bounty of knowledge that the machine produced in the 1990s.

read more

by Tommaso Dorigo at September 17, 2014 09:27 AM

Clifford V. Johnson - Asymptotia

Baby Mothra!!!
So I discovered a terrifying (but also kind of fascinating and beautiful at the same time) new element to the garden this morning. We're having a heat wave here, and so this morning before leaving for work I thought I'd give the tomato plants a spot of moisture. I passed one of the tomato clusters and noticed that one of the (still green) tomatoes had a large bite taken out of it. I assumed it was an experimental bite from a squirrel (my nemesis - or one of them), and muttered dark things under my breath and then prepared to move away the strange coiled leaf that seemed to be on top of it. Then I noticed. It wasn't a leaf. It was a HUGE caterpillar! Enormous! Giant and green with spots and even a red horn at one end! There's a moment when you're unexpectedly close to a creature like that where your skin crawls for a bit. Well, mine did for a while [...] Click to continue reading this post

by Clifford at September 17, 2014 04:14 AM

astrobites - astro-ph reader's digest

Our Moon, the Cosmic Ray Detector

Title: Lunar Detection of Ultra-High-Energy Cosmic Rays and Neutrinos
Authors: J.D. Bray et al.
First Author’s Institution: University of Southampton

A great mystery in particle astrophysics today is the production of the so-called ultra-high-energy cosmic rays. In general, cosmic rays are produced in a variety of contexts (see this recent Astrobite for more on that), but astronomers have measured a few to have almost unbelievable energies. The first observation came in 1962 at the Volcano Ranch experiment in New Mexico where Dr. John D. Linsley measured a cosmic ray to have an energy of 10^20 eV, or 16 J. Another ultra-high-energy cosmic ray, discovered in October of 1991, was dubbed the “Oh-My-God Particle”, which had an energy of 3 x 10^20 eV (50 J). To put that into context, 50 J is the kinetic energy of a baseball traveling at 60 miles per hour.
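
A quick back-of-the-envelope check of those numbers (the baseball mass of 0.145 kg is an assumption of mine, not a figure from the paper):

```python
# Back-of-the-envelope check of the numbers quoted above. The baseball mass
# (0.145 kg) is an assumed figure, not something given in the post.
eV = 1.602e-19                   # joules per electronvolt

volcano_ranch = 1e20 * eV        # the 1962 Volcano Ranch event: ~16 J
oh_my_god     = 3e20 * eV        # the 1991 "Oh-My-God" particle: ~48 J

m_ball = 0.145                   # kg, a typical baseball
v = 60 * 0.44704                 # 60 mph converted to m/s
kinetic = 0.5 * m_ball * v ** 2  # ~52 J

print("Volcano Ranch event: %.0f J" % volcano_ranch)
print("Oh-My-God particle:  %.0f J" % oh_my_god)
print("Baseball at 60 mph:  %.0f J" % kinetic)
```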

The name ‘cosmic ray’ is something of a misnomer. The word ‘ray’ makes it sound like it’s some sort of light, like gamma rays. But, that is not the case. Cosmic rays are simply protons that have been accelerated to high energies by some astrophysical mechanism. The mystery of ultra-high-energy cosmic rays lies in that acceleration. Nobody is sure exactly what is accelerating these cosmic rays to such high velocities.

Unfortunately, they are also very difficult to study because they have a relatively low flux here at Earth. The arrival rate of ultra-high-energy cosmic rays is approximately one per square kilometer per century. The Pierre Auger Observatory in Argentina is an array of cosmic ray detectors that spans an area of 3,000 square kilometers (roughly the size of Rhode Island or Luxembourg), but despite their impressive detector size they still only detect 15 ultra-high-energy cosmic rays per year. Today’s paper by J.D. Bray et al. explains how to use the Moon as a cosmic ray detector to increase the collection area far beyond that of the Pierre Auger.

When cosmic rays smash into things, a shower of various particles is produced. At Earth, this usually happens in the upper atmosphere, and it is actually those secondary particles present in the shower that are detected by observatories like the Pierre Auger. Those particles are often charged, and their rapid movement through a dense medium, like the atmosphere, causes them to emit a very brief pulse of light in the form of radio waves (Figure 1). The authors hope to use these characteristic radio emissions that occur when cosmic rays strike the Moon’s surface to study the cosmic rays.


Figure 1. Left: Radio waves are produced at an angle relative to the direction of the particle shower. The radiation pattern is dependent on the frequency of the produced radio waves (two examples are shown in blue and green). Right: The red line represents the radio pulse that would be detected at Earth. The pulse is very short, only lasting for several nanoseconds. [source: Bray et al., Figure 1]

Radio waves cannot travel very far through the Moon’s interior, which means only cosmic rays skimming the Moon’s surface will produce radio pulses that can be detected here on Earth. That notwithstanding, the Moon will still make a truly impressive cosmic ray detector. The authors estimate that it will be equivalent to a 33,000 square kilometer (roughly the size of Maryland or Belgium) cosmic ray detector on Earth. That’s 10 Pierre Auger observatories! With a detector of that size, the authors expect to see up to 165 ultra-high-energy cosmic rays per year.
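
The expected rate follows from a simple scaling of Auger's detection rate by the ratio of effective areas; in code:

```python
# The scaling implied above: Auger's detection rate per unit area, applied to
# the Moon's estimated effective area as a cosmic ray detector.
auger_area = 3_000        # km^2
auger_rate = 15           # ultra-high-energy cosmic rays detected per year
moon_equivalent = 33_000  # km^2, the authors' estimate

expected = auger_rate * moon_equivalent / auger_area
print("expected rate: %.0f ultra-high-energy cosmic rays per year" % expected)
# -> 165 per year, i.e. roughly ten Auger-sized observatories' worth
```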

There is still one issue for the team, though: a radio telescope array that is sensitive enough to detect the faint pulses produced by these cosmic rays does not exist yet. Fortunately, astronomers are about to build the highly anticipated Square Kilometer Array in South Africa and Australia, which will provide the sensitivity necessary to put this lunar technique to use. Once the array is complete, we may finally learn something about the origins of these ultra-energetic cosmic rays.

by Justin Vasel at September 17, 2014 01:10 AM

September 16, 2014

Emily Lakdawalla - The Planetary Society Blog

Mars Orbiter Mission prepares for Mars arrival
The countdown for the crucial and nerve-wracking Mars orbit insertion of India's Mars Orbiter Mission (MOM) on September 24 has kicked off. At ISRO's telemetry, tracking and command network (ISTRAC) in Bangalore, the mood among the scientists is right now a mixture of optimism, excitement, and nervous apprehension.

September 16, 2014 06:12 PM

astrobites - astro-ph reader's digest

Apply to Write for Astrobites

Astrobites is seeking new graduate students to join the Astrobites collaboration.

Please share the information below with your graduate student colleagues. Applicants must be current graduate students. The deadline for applications is 15 October. Please email write4astrobites@gmail.com if you have any questions.

Application Details

The application consists of a sample Astrobite and two short essays. The application can be found and submitted at http://astrobites.org/apply-to-write-for-astrobites/

Goal: Your sample astrobite should discuss the motivation, methods, results, and conclusions of a published paper that has not been featured on Astrobites. Please do not summarize a paper of which you are an author, as this might lead to an unfair advantage relative to an application where the applicant is not involved with the paper.  If there are any concerns or ambiguities regarding this point, do not hesitate to seek guidance: write4astrobites@gmail.com.

Age of paper: We suggest you choose a paper that is at least 3 months old. Astrobites articles published during the author selection process will focus on newer papers, so you do not need to worry that your chosen article will be covered on astrobites.

Style: Please write at a level appropriate for undergraduate physics or astronomy majors and remember to explain jargon. We encourage you to provide links to previous astrobites or other science websites where appropriate. Links may either be provided as hyperlinks or as parenthetical citations. We suggest you read a few Astrobites posts to get a sense for how posts are typically written. You might use them as a guide for your sample post.

Figures: Your sample post should include at least one figure from the paper with an appropriate caption (not just the original caption). Figures may either be embedded in the text or placed at the end of the sample, and need to include appropriate citations to their source as well.

Length: Please keep your submission under 1,000 words including the figure caption(s). As we have received numerous applications for previous cycles, we unfortunately do not expect to be able to read beyond this limit.  Importantly, typical astrobites are usually between 500-800 words, so successful applications will demonstrate the ability to explain their chosen papers concisely. “Brevity is the soul of wit.”

Dates and Decision Process

The deadline for applications is 15 October. The Astrobites hiring committee will then review the applications and invite new authors to join based on the quality of their sample Astrobite and two short essays as well as on our needs for number of new authors. All applications will be reviewed anonymously in the interests of fairness. Applicants can expect to be notified by the end of November. If you have questions about the application process or responsibilities of Astrobites authors, don’t hesitate to get in touch at write4astrobites@gmail.com.

 

by Astrobites at September 16, 2014 06:01 PM

Quantum Diaries

Summer intern studies physics for self, family

This article appeared in Fermilab Today on Sept. 16, 2014.


Summer intern Sheri Lopez, here with son Dominic, pursues her love of physics as a student at the University of New Mexico-Los Alamos. She spent this summer at Fermilab as a summer intern. Photo courtesy of Sheri Lopez

Dominic is two. He is obsessed with “Despicable Me” and choo-choos. His mom Sheri Lopez is 29, obsessed with physics, and always wanted to be an astronaut.

But while Dominic’s future is full of possibilities, his mom’s options are narrower. Lopez is a single mother and a sophomore at the University of New Mexico-Los Alamos, where she is double majoring in physics and mechanical engineering. Her future is focused on providing for her son, and that plan recently included 10 weeks spent at Fermilab for a Summer Undergraduate Laboratories Internship (SULI).

“Being at Fermilab was beautiful, and it really made me realize how much I love physics,” Lopez said. “On the other end of the spectrum, it made me realize that I have to think of my future in a tangible way.”

Instead of being an astronaut, now she plans on building the next generation of particle detectors. Lopez is reaching that goal by coupling her love of physics with practical trade skills such as coding, which she picked up at Fermilab as part of her research developing new ways to visualize data for the MINERvA neutrino experiment.

“The main goal of it was to try to make the data that the MINERvA project was getting a lot easier to read and more presentable for a web-based format,” Lopez said. Interactive, user-friendly data may be one way to generate interest in particle physics from a more diverse audience. Lopez had no previous coding experience but quickly realized at Fermilab that it would allow her to make a bigger difference in the field.

Dominic, meanwhile, spent the summer with his grandparents in New Mexico. That was hard, Lopez said, but she received a lot of support from Internship Program Administrator Tanja Waltrip.

“I was determined to not let her miss this opportunity, which she worked so hard to acquire,” Waltrip said. Waltrip coordinates support services for interns like Lopez in 11 different programs hosted by Fermilab.

Less than 10 percent of applicants were accepted into Fermilab’s summer program. SULI is funded by the U.S. Department of Energy, so many national labs host these internships, and applicants choose which labs to apply to.

“There was never a moment when anyone doubted or said I couldn’t do it,” Lopez said. Dominic doesn’t understand why his mom was gone this summer, but he made sure to give her the longest hug of her life when she came back. For her part, Lopez was happy to bring back a brighter future for her son.

Troy Rummler

by Fermilab at September 16, 2014 04:21 PM

Symmetrybreaking - Fermilab/SLAC

Science gets social

If you like your science with a cup of coffee, a pint of beer or a raucous crowd, these events may be for you.

With an explosion of informal science events popping up around the world, it’s easier than ever to find ways to connect with scientists and fellow science enthusiasts.

Can’t find an event near you? Start your own! There are plenty of ways to reach out to fellow organizers for support.

 

Science Slam

At a Science Slam, performers compete for the affection of an audience—usually registered by clap-o-meter—by giving their best short, simple explanations of their research.

The first Science Slam took place in 2004 at a festival in Darmstadt, Germany, home of the GSI Centre for Heavy Ion Research and mission control for the European Space Agency. Alex Deppert, a city employee and poet with a PhD related to science communication, adapted the idea from the competitive Poetry Slams that started in Chicago in the 1980s. Science Slams now take place across the globe.

 

Science Festivals

Festivals offer a variety of activities for adults, from live tapings of “You’re the Expert,” a podcast in which comedians attempt to guess the obscure specialty of a scientist guest; to science pub crawls; to after-hours events at local museums; to the Bad Ad Hoc Hypothesis Festival—an event created by cartoonist Zach Weiner of the online comic Saturday Morning Breakfast Cereal at which scientists attempt to sincerely explain and defend fundamentally ridiculous theories before a panel of judges.

The modern science festival began in the late 1980s in Edinburgh, Scotland, and Cambridge, England. It spread across Europe and Asia and, in 2007, arrived at a different Cambridge, the home of MIT and Harvard University.

In 2009, the handful of US-based festival organizers formed the Science Festival Alliance. According to an annual report, in 2013 almost 300 events associated with the Science Festival Alliance drew more than 1000 visitors. About 30 of them drew more than 10,000 visitors each.

 

Nerd Nite

At Nerd Nite, a few people give short talks on their research or other topics of geeky interest in front of a potentially boozy crowd.

The first Nerd Nite took place in 2004 at The Midway Café bar in the Jamaica Plain neighborhood of Boston. Regular patron Christopher Balakrishnan, then a PhD candidate in evolutionary biology at Boston University, often found himself there telling tales from his three-month fieldwork stints in Africa. The bartenders suggested that he call his friends together and put on a slideshow.

Balakrishnan took the concept to the next level, inviting three other BU grad students to join him in explaining their own areas of research. The event drew enough of a crowd that, for the next couple of years, he continued to find researchers and organize talks. He eventually convinced his friend Matt Wasowski, who ran a series of trivia nights in New York, to try it out, too. The two have since helped spread Nerd Nite to more than 75 cities around the world.

 

Science Café

The Science Café is the salon of the informal science-learning world. For the price of a cup of coffee or a glass of wine, Science Café participants receive a short talk on science or technology, and then the floor opens for discussion and debate.

The Science Café is an offshoot of the Café Scientifique, created in 1998 in Leeds, England, by British television producer Duncan Dallas. The Café Scientifique, in turn, is a spinoff of the Café Philosophique, a philosophy-themed café that began in France in 1992.

In 2006, the producers of the public television science program NOVA gathered under one umbrella the few dozen Science Cafés that had popped up in the United States and began to offer resources to organizers, speakers and attendees through the site www.sciencecafes.org.

Today Science Cafés exist in at least 49 US states and 15 countries, operating under names such as Science on Tap, Science Pub, Ask a Scientist and Café Sci.

 

Science Tourism

You can also take science learning on the road—or out to sea—with science tourism companies such as Science Getaways, started in 2011 by astronomer Phil Plait and his wife, Marcella Setter, or Insight Cruises, which since 2008 have taken experts on board to offer lectures, discussions and tours.

 

Like what you see? Sign up for a free subscription to symmetry!

by Kathryn Jepsen at September 16, 2014 03:22 PM

Lubos Motl - string vacua and pheno

Higgs contest: the hard way to return to top 3
Now it's a good (but not stellar) moment for the Higgs ATLAS-Kaggle challenge. If you look at its leaderboard, only one minor permutation of the top 7 rankings (879 teams compete in total) has occurred in the last 7 days:



Due to a permutation of the top 2 places exactly 7 days ago, this screenshot became a bit obsolete minutes after I posted it.

And – the T.A.G. duo will surely agree – it is a small change in a good direction. ;-) And it was so hard to achieve this small change of the AMS score! What have I done?




I decided that one particular algorithm isn't good enough. It's better to write a code that simulates many programmers who are programming machine-learning algorithms and who are killing the programmers who are not good enough.




So I downloaded a Windows desktop OS emulator for Nokia Lumia 520, installed VirtualBox under this Windows system, along with Ubuntu Linux. In that system, I programmed a virtual empire that I call "The Matrix String".

This string-like landscape is a very nice environment for the programmers who live there. The inhabitants have to enjoy something that looks like an exciting life to them. Otherwise, as I realized, they don't perform too well.

Of course, their ultimate job is to write down an algorithm to optimally classify the 550,000 events in the contest. But they don't really know.

There are 220 copies of a city called Székesfehérvár – it's one of the Hungarian words I am proud to have mastered. If you have trouble with the name of the town, just call it "Stool Belgrade" which is the English translation. I am building five new copies of the town every day.

There are many T.A.G.'s hanging everywhere in the cities but I hope that they are not too important anymore! ;-) More importantly, there are numerous copies of two programmers over there. Their names are
Gábor "Neo" Melis

Morphine "Northern Lights Haze" Morpheus
They are designed to resemble the top two contestants in the contest as accurately as I could imagine them. Mr Morphine is trying to convince Gábor "Neo" Melis that he (Neo) is "the One". And make no mistake about it, I also think that Gábor Melis is "the One".



Today, in order to improve the top score by 0.006, after 10 days or so with no improvements, I had to fight against Gábor "Neo" Melis. It was tough. It seems to me that he has won again.

If I happened to win, to be eligible for the prize, I would have to reproduce the exact algorithms that generated the winning submission. So it's important to remember every motion of my hands in the fight against Melis, and so on. Weeks ago, it would have been impossible. Right now, however, it seems that I have gotten more disciplined in creating backups. So all the copies of Gábor "Neo" Melis that had to fight have a code that is saved somewhere, much like the program that determined every motion of the hands in the fight above.

As usual in the morning, I have run out of my limit of 5 submissions per day. But the 3.76704 submission is relatively new and opens uncharted territory, so it is remotely conceivable that there exists a very minor modification of this code that improves the score sufficiently to beat the real leaders, "Neo" and "Morphine".

"Morphine" is the current leader whose AMS score is 0.03951 above mine. It's just slightly above one percent of my score. To beat him or her (there is at least one woman in the contest, Tatiana Likhomanenko is 20th after 3 submissions only, scary!), one has to improve the score by more than one percent.

It means increasing the number (well, the total weight) of true positives \(s\) by one percent while not increasing \(b\), or decreasing the number of false positives (well, their total weight) \(b\) by two percent while not lowering \(s\) (because AMS is essentially \(s/\sqrt{b}\)), or some linear combination of these options.
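
For concreteness, here is a minimal Python sketch of that arithmetic. The full challenge metric is, as I recall its definition, the "approximate median significance" \(AMS=\sqrt{2((s+b+b_r)\ln(1+s/(b+b_r))-s)}\) with \(b_r=10\), but the simple \(s/\sqrt{b}\) estimate is enough to see the trade-off; the weighted counts below are purely illustrative, not anyone's actual submission.

    from math import sqrt, log

    def ams_approx(s, b):
        """Leading-order figure of merit: essentially s / sqrt(b)."""
        return s / sqrt(b)

    def ams_full(s, b, b_reg=10.0):
        """Approximate median significance as I recall the challenge defining it;
        b_reg is a small regularization term."""
        return sqrt(2.0 * ((s + b + b_reg) * log(1.0 + s / (b + b_reg)) - s))

    # Purely illustrative weighted counts, not anyone's actual submission:
    s, b = 700.0, 35000.0
    base = ams_approx(s, b)

    print(ams_approx(1.01 * s, b) / base - 1.0)   # ~ +1%: raising s by 1% lifts the score by ~1%
    print(ams_approx(s, 0.98 * b) / base - 1.0)   # ~ +1%: cutting b by 2% lifts the score by ~1%
    print(ams_full(s, b))                          # the full formula behaves the same way here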

It may be done. Maybe.

Of course, the temporary leaderboard may be a misleading benchmark to estimate the final score, which will be calculated exactly from the 450,000 test.csv collisions that are not included among the 100,000 collisions used to calculate the preliminary leaderboard. It is plausible that the "differences between AMS of two contestants" will change by 0.1 on average (root mean square) relative to the preliminary leaderboard, so it's possible that everyone in the top 20 or 50 has a significant chance. I could do calculations and simulations that would clarify these matters but I think it's better to spend time on improving my (at least temporary) AMS score.

But while "the Matrix String" technology to optimize the machine learning hasn't produced a truly remarkable improvement in the preliminary AMS score, one that could beat "Neo" and "Morphine", for example, I have some reasons to think that its underlying idea is so robust that it could achieve a higher final score than other algorithms (and perhaps other contestants' algorithms).

by Luboš Motl (noreply@blogger.com) at September 16, 2014 02:12 PM

Lubos Motl - string vacua and pheno

Kaggle Higgs: view from Mt Everest
Update Sep 16: ninth place, people couldn't compete against the machine learning gurus who knew what they were doing from the beginning. I am / we are ninth at the end. Also, the winner has 3.805 (although everyone else is below 3.8) so I apparently lose a "below 3.8" $100 bet. Heikki is very lucky, isn't he? ;-)

A minor update Sep 15: I just wanted to experience the fleeting feeling of our team's view from the top of the preliminary leaderboard where we (shortly?) stand on the shoulders of 1,791 giants.



You see the safe gap of 0.00001 between us and the Hungarian competition. ;-)

Today, the "public" dataset of 100,000 events will be replaced by a completely disjoint (but statistically equivalent) dataset of 450,000 "private" ATLAS collisions and our team may – but is far from guaranteed – to drop like a stone. And even if it doesn't drop like a stone, there will be huge hassle to get convinced that the code has all the characteristics it should have. I am actually not 100% sure whether I want to remain in the top 3 because I dislike paperwork and lots of "small rules".

Text below was originally posted on September 4th as "Kaggle Higgs: back to K2"

The ATLAS Kaggle Higgs contest ends in less than two weeks, on September 15th or so, and I wanted to regain at least the second place among the 1,600 contestants seen in the leaderboard – because I still believe that it is unlikely for me to win a prize.

After many, many clever ideas and hundreds of attempts, my team returned to the second place, where I had already been for one hour in June.



Gábor Melis is ahead of my team by 0.005. I am learning Hungarian in order to revert this gap.




I've tried about a hundred amusing ideas to improve the recognition of the Higgs signal events. Maybe, after the contest ends, I will write about 10 blog posts with some examples of these ideas and codes.




Today, to regain the preliminary silver place, I was considering the 38,000 testing.csv events that have undefined values of the ATLAS MMC (missing mass calculator) Higgs mass, \[ MMC = -999.0. \] The undefined value of the Higgs mass is a big hint that the collision hasn't really produced a Higgs boson. It doesn't look like a Higgs boson so it probably doesn't quack like one, either. The percentage of the "signal" events among the "undefined MMC" events is small.

So last night, on a trip, I was thinking that maybe the xgboost software does a poor job and classifies too many "undefined MMC collisions" as signal. The number of true signals over there is so tiny that you won't lose much if you just remove all these events from your "list of signal candidates", i.e. if you reclassify all of them as background. And indeed, you do not lose too much score if you do so, just about \[ \Delta{\rm AMS} = -0.005. \] However, if xgboost had been working poorly and most of the "undefined MMC" events had been false positives, removing them could have improved the score by as much as \(0.2\). It would be cool. Unfortunately, xgboost does a good job of filtering these events.
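
For readers who want to try the same experiment, a minimal pandas sketch is below. The column name DER_mass_MMC (with -999.0 marking the undefined mass) and the EventId/RankOrder/Class submission format are how I remember the challenge files; treat them, and the placeholder file names, as assumptions.

    import pandas as pd

    # Column and file names are how I remember the challenge files (DER_mass_MMC is the
    # MMC Higgs mass, with -999.0 marking "undefined"); treat them as assumptions.
    test = pd.read_csv("test.csv")
    sub = pd.read_csv("my_submission.csv")     # columns: EventId, RankOrder, Class

    undefined = test.loc[test["DER_mass_MMC"] == -999.0, "EventId"]
    print(len(undefined), "events with an undefined MMC mass")

    # Reclassify every undefined-MMC candidate as background ('b').
    sub.loc[sub["EventId"].isin(undefined), "Class"] = "b"
    sub.to_csv("my_submission_no_mmc_signal.csv", index=False)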

My improvement of the score that came 20 minutes later is closely related to the considerations above but I can't tell you how the improvement was exactly done. ;-)

by Luboš Motl (noreply@blogger.com) at September 16, 2014 02:12 PM

ZapperZ - Physics and Physicists

"What keeps girls from studying physics and STEM" - An Important Article That Did Not Answer Its Own Question
Anyone following this blog for any considerable period of time would have seen my keen involvement in trying to engage more girls and women into physics. So this is a subject that I've followed and had participated in for many years.

So when I came across this opinion piece, I read it in its entirety, because even if it is a first-hand account of one person's experience (the author is a female physicist), it is still another "data point" in trying to figure out what kind of hurdles a female student like her faced during her academic years.

Unfortunately, after reading the article, I am no closer to understanding the unique challenges that a female student, or a female scientist, faces in the field of physics. She describes what can be done to improve education and open up opportunities, but these are NOT specific to female students!

My advanced placement (AP) physics class, unfortunately, was about memorizing equations and applying them to specific contrived examples. I did not perform well on the midterm exam. The teacher advised me to drop the course, along with all the other girls in the class. 

This would be a turn-off for male students as well! So if that is the case, why are overwhelmingly more female students leaving the subject? She didn't say.

I stayed despite the teacher’s pressure, as the only girl in the class, and did well in the long run. I learned to love physics again in college, conducting original research with inspiring science professors who valued my presence in the scientific community. Physics professor Mary James at Reed College helped a lot by creating an active learning environment in her courses and teaching me that physics also needs “B” students.

Again, any student of any gender would benefit from that. This is not unique only to female students. So it still does not address the imbalance.

But there is so much more work to do. One key factor is federal funding for research. Federal funding is the main source of support for the kind of high-risk, high-reward investigations that sparked innovations such as the Internet, the MRI and GPS.

U.S. Sen. Patty Murray, D-Wash., serves on the U.S. Senate Appropriations Committee and understands the connection. In her recently released report “Opportunity Outlook: A Path For Tackling All Our Deficits Responsibly” she states, “By supporting early stage basic research that the private sector might not otherwise undertake, federal investment in R&D [research and development] has played a critical role in encouraging innovation across a swath of industries.” 

Again, this doesn't address the lack of women in physics. Increasing opportunity and funding merely increases the overall number of people in the field, but will probably not change the percentage of women in this area. There's nothing here that reveals the unique and unforeseen hurdles that only women face and that keep their participation down.

In the end, she simply argued for more funding to increase the opportunity of people in physics. There's nothing here whatsoever that addresses the issue of why there are very few women, both in absolute numbers and in relative percentage, in physics. I think there are other, better articles and research that have addressed this issue.

Zz.

by ZapperZ (noreply@blogger.com) at September 16, 2014 12:42 PM

Tommaso Dorigo - Scientificblogging

ATLAS Higgs Challenge Results
After four months of frenzy by over 1500 teams, the very successful Higgs Challenge launched by the ATLAS collaboration ended yesterday, and the "private leaderboard" with the final standings has been revealed. You can see the top 20 scorers below.


read more

by Tommaso Dorigo at September 16, 2014 09:58 AM

Peter Coles - In the Dark

Six Years In The Dark

When I logged onto WordPress to write yesterday’s post I received a message that it was the 6th anniversary of my registration with them as a blogger, the day I took my first step into the blogosphere; that was way back on 15th September 2008. I actually wrote my first post that day too. Here it is, in all its glory:

So here we are then. I’ve finally decided to start writing a blog. I’ve been reading quite a few of them recently and most appeared to consist of a load of ill-informed opinionated drivel. So I thought “I can do that!”. And here we are.

I don’t know who (if anyone) will be reading this or even what I’m going to write, but let’s see how it goes until everyone concerned gets bored with it.

And before I start, I’d like to thank Phil Brown from the British Association for the Advancement of Science for inviting me to set this up. I never would have got around to it if he hadn’t done so.

So blame him!

Unfortunately I didn’t really know what I was doing on my first day at blogging – no change there then –  and I didn’t actually manage to figure out how to publish this earth-shattering piece. It was only after I’d written my second post that I realized that the first one wasn’t actually live, so the two appear in the wrong order in my archive.

I’d like to take this opportunity to send my best wishes, and to thank, everyone who reads this blog, however occasionally. According to the WordPress stats, I’ve got readers from all round the world, including one in the Vatican! If you’re interested in statistics then, as of 9.15 this morning, I have published 2537 blog posts in all, and have received 2,032,090 hits altogether; I get an average of about 1300 per day, but this varies in a very erratic fashion. There have been 20,201 comments published on here and 857,904 rejected as spam or abuse; a lot goes on behind the scenes that you don’t want to know about!

Anyway, the numbers don’t really matter but it does mean a lot to know that there are people who find my ramblings interesting enough to look at, and sometimes even to come back for more! This blog is read by a number of powerful and influential people too, as well as John Womersley….

 

 

 


by telescoper at September 16, 2014 08:25 AM

John Baez - Azimuth

Exploring Climate Data (Part 2)

guest post by Blake Pollard

I have been learning to make animations using R. This is an animation of the profile of the surface air temperature at the equator. So, the x axis here is the longitude, approximately from 120° E to 280° E. I pulled the data from the region that Graham Jones specified in his code on github: it’s the equatorial line in the region that Ludescher et al. used:

For this animation I tried to show the 1997-1998 El Niño. Typically the Pacific is much cooler near South America, due to the upwelling of deep cold water:

(Click for more information.) That part of the Pacific gets even cooler during La Niña:

But it warms up during El Niños:

You can see that in the surface air temperature during the 1997-1998 El Niño, although by summer of 1998 things seem to be getting back to normal:

I want to practice making animations like this. I could make a much prettier and better-labelled animation that ran all the way from 1948 to today, but I wanted to think a little about what exactly is best to plot if we want to use it as an aid to understanding some of this El Niño business.
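
For anyone who wants to try something similar without R, here is a rough Python/matplotlib sketch of the same kind of animation. It uses made-up placeholder "temperatures" rather than the real surface air temperature data, so it only illustrates the animation mechanics, not the science.

    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib.animation import FuncAnimation

    # Synthetic placeholder "temperatures": a warm pool in the west plus a slow seasonal wobble.
    lon = np.linspace(120, 280, 80)            # longitude, degrees E
    months = 24
    data = [28 - 4 * (lon - 120) / 160 + 0.5 * np.sin(2 * np.pi * t / 12 + lon / 40)
            for t in range(months)]

    fig, ax = plt.subplots()
    line, = ax.plot(lon, data[0])
    ax.set_xlabel("longitude (degrees E)")
    ax.set_ylabel("surface air temperature (C)")

    def update(t):
        # Redraw the equatorial profile for month t.
        line.set_ydata(data[t])
        ax.set_title(f"month {t}")
        return line,

    anim = FuncAnimation(fig, update, frames=months, interval=200)
    plt.show()                                 # or save the animation with anim.save(...)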


by John Baez at September 16, 2014 02:40 AM

September 15, 2014

astrobites - astro-ph reader's digest

Rethinking the Planet Formation Deadline

Stars start their lives surrounded by a disk of gas and dust. After several million years, the gas and most of this dust will dissipate, marking the end of the opportunity for gas giant planets to form. This timescale places strong constraints on our theories for how gas giants must form. The authors of today’s paper take a closer look at how the observations of disks around young stars have been carried out, and they find that many stars may actually retain their disks for far longer than previously thought.

Stars are not born on their own; rather, they are born in clusters. The typical lifetime of protoplanetary disks is measured by first looking for disks around the stars in a cluster, and then comparing the fraction of stars with disks among clusters of different ages. Such results are presented in Figure 1, which shows that 50% of stars lose their disks after 2 or 3 million years and 90% have lost their disks by the time they are 6 million years old. The authors of today’s paper critique these results by identifying several selection effects that plague cluster observations.
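
As a rough back-of-the-envelope check of those numbers (assuming a simple exponential decay for the disk fraction, which is only one of the fits shown in Figure 1, and reading the 50% point as 2.5 Myr):

    from math import log

    # If the disk fraction decays as exp(-t/tau), what e-folding time tau
    # do the two quoted numbers imply?  (Reading "2 or 3 million years" as 2.5 Myr.)
    tau_from_half = 2.5 / log(2)      # 50% of disks gone by ~2.5 Myr  ->  tau ~ 3.6 Myr
    tau_from_90pct = 6.0 / log(10)    # 90% gone by 6 Myr              ->  tau ~ 2.6 Myr
    print(tau_from_half, tau_from_90pct)

The two quoted points imply e-folding times of roughly 3.6 and 2.6 Myr, so a single exponential is at best a rough description, which is presumably part of why Figure 1 shows both linear and exponential fits.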


The standard picture showing how quickly the fraction of stars with protoplanetary disks decreases with age. The lines show linear and exponential fits to the data.

The core problem is that, shortly after they form, clusters begin to expand and dissolve as the stars near their edges are expelled. By the time clusters reach an age of 10 Myr, only 10-20% of their original stars remain. Clusters quickly expand beyond the field of view of most telescopes, so observations of older clusters are dominated by the stars that started in the cluster’s dense central region. The disks around stars in the central region are easily disrupted by gravitational perturbations from nearby stars as well as photoevaporated by the light from nearby bright massive stars. (For more on how cluster environments affect protoplanetary disks, see these astrobites.) In older clusters, the bias towards sampling these central stars may severely under-predict the disk fraction, resulting in an inferred disk dispersal timescale that is too short.

Additionally, star clusters vary considerably in their total mass. Because clusters expand and become more diffuse, low mass clusters cannot even be detected after a certain age, so the observations of older clusters are biased towards massive clusters. This is evident in Figure 1 where massive clusters (>1,000 solar masses) are shown in filled circles and less massive clusters are shown in open circles. The disk dispersal process may vary with cluster mass. Basing the disk fraction of older stars only on those in massive clusters is not ideal–especially considering that the majority of all stars may actually form in less massive clusters.

The authors attempt to correct some of these effects, and in Figure 2 they present an updated version of Figure 1. They added measurements of stars from the outer parts of clusters, and they added disk fractions for sparser clusters. These adjustments increase the disk fraction for older stars and lengthen the average disk lifetime. However, these corrections are not easy to make. The ages of sparse clusters are difficult to determine, and including stars at the outer reaches of clusters runs the risk of including stars that are not actually members of the clusters at all. These interlopers are likely much older and disk-free, so including them would artificially lower the measured disk fraction. For these reasons, the authors note that their updated values are still lower-limits on the true disk fractions.

In the end, the authors estimate that between a third and a half of stars may still host disks when they turn 10 Myr old. Perhaps we need to extend the deadlines we’ve placed on planet formation.

An updated version of Figure 1, now showing a longer protoplanetary disk lifetime (red line) when sparse clusters and stars at the edges of clusters are included. The authors note that this is still an underestimate of the disk fractions and of the average disk lifetime.


by Nick Ballering at September 15, 2014 05:16 PM

Symmetrybreaking - Fermilab/SLAC

Sci-fi writers, scientists imagine the future

A new project pairs science fiction authors with scientists to envision worlds that are both inspiring and achievable.

A few years ago, structural engineering professor Keith Hjelmstad received an unusual phone call. On the line was Neal Stephenson, author of futuristic thrillers such as Snow Crash and Cryptonomicon. He wanted to know whether it was possible to build a tower 20 kilometers high.

“Actually, I think he low-balled it at the time and said 15 kilometers,” Hjelmstad says. That’s still more than 15 times the height of the world’s tallest building—the Burj Khalifa in Dubai—and about 5 kilometers higher than the cruising altitude of a commercial aircraft.

Stephenson was working on a short story with a goal. He wanted to describe something fantastic—and feasible—for humans to consider striving for in the future.

Michael Crow, president of Arizona State University, had recently criticized the bleak, dystopian futures presented in much of today’s science fiction—arguably not a great source of inspiration for today’s scientists and engineers.

In response, Stephenson challenged himself to write a story that was both optimistic and realistic. And then he challenged other authors to work with experts to do the same.

With the help of Kathryn Cramer, editor of Year’s Best Science Fiction for the last decade, and Ed Finn, founding director of Arizona State University’s Center for Science and the Imagination, the challenge turned into Project Hieroglyph.

As Cramer says, “What we’re trying to do is envision a survivable future where the human race makes it.”

Cramer and Finn recruited authors including Elizabeth Bear, author of the Eternal Sky fantasy trilogy; Cory Doctorow, co-editor of the site BoingBoing and author of young adult novel Little Brother; Karl Schroeder, author of hard science fiction young adult novel Lockstep; and Bruce Sterling, author of cyberpunk novel Schismatrix. They wrote the short stories collected in the anthology Hieroglyph: Stories & Visions for a Better Future, released this month. The project also lives online as a series of ongoing discussions.

Stephenson, a self-professed “failed physics major,” says the idea for his tall tower story came from a 2003 proposal by Geoffrey Landis of NASA’s John Glenn Research Center and Vincent Denis of the International Space University in France. They argued that the lower atmospheric drag on objects launched from such a height would allow them to carry heavier loads into space.
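
To get a rough feel for that argument, here is a crude isothermal-atmosphere estimate of how much air density a 20-kilometer launch platform avoids. The roughly 8 km scale height is an assumed round number; the real atmosphere is more structured than this.

    from math import exp

    # Crude isothermal-atmosphere estimate; the ~8 km scale height is an assumed round number.
    H = 8.0          # pressure scale height, km
    rho0 = 1.225     # sea-level air density, kg/m^3

    for h in [0, 15, 20]:
        rho = rho0 * exp(-h / H)
        print(f"h = {h:2d} km: density ~ {rho:.3f} kg/m^3 ({100 * rho / rho0:.0f}% of sea level)")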

As Stephenson learned from Hjelmstad, one of the biggest challenges in constructing such a tower would be dealing with wind—in the worst case, a blast from the jet stream. Hjelmstad came up with a way to deal with that by designing part of the tower to act as a sail, which could take advantage of the wind instead of fighting against it.

Hjelmstad says that the tall tower was the most ambitious engineering idea he’d ever been asked to consider, even in school. “I live in a fairly narrow world governed by codes and specifications and lawyers,” he says. “It was refreshing to think about a problem that was completely outside that.”

Hjelmstad says the project also made him think about how to inspire his own students, who continue to email him with new ideas for the tower even though the science fiction anthology is already published.

 

Like what you see? Sign up for a free subscription to symmetry!

by Kathryn Jepsen at September 15, 2014 03:50 PM


arXiv blog

How Network Theory Is Revealing Previously Unknown Patterns in Sport

Analysing the network of passes between soccer players reveals that one of the world’s most successful teams plays an entirely different type of football to every other soccer team on the planet.


If you’ve ever watched soccer, you’ll know of the subtle differences in tactics and formation between different teams. There is the long ball game, the pressing game, the zone defence and so on. Many teams have particular styles of play that fans admire and hate.

September 15, 2014 02:30 PM

Matt Strassler - Of Particular Significance

Why did so few people see Auroras on Friday night?

Why did so few people see auroras on Friday night, after all the media hype? You can see one of two reasons in the data. As I explained in my last post, you can read what happened in the data shown in the Satellite Environment Plot from this website (warning — they’re going to make a new version of the website soon, so you might have to modify this info a bit). Here’s what the plot looked like Sunday morning.

What the "Satellite Environment Plot" on swpc.noaa.gov looked like on Sunday.  Friday is at left; time shown is "Universal" time; New York time is 4 hours later. There were two storms, shown as the red bars in the Kp index plot; one occurred very early Friday morning and one later on Friday.  You can see the start of the second storm in the "GOES Hp" plot, where the magnetic field goes wild very suddenly.  The storm was subsiding by midnight universal time, so it was mostly over by midnight New York time.

What the “Satellite Environment Plot” on swpc.noaa.gov looked like on Sunday. Friday is at left.  Time shown is “Universal” time (UTC); New York time is 4 hours later at this time of year. There were two storms, shown as the red bars in the Kp index chart (fourth line); one occurred very early Friday morning and one later on Friday. You can see the start of the second storm in the “GOES Hp” chart (third line), where the magnetic field goes wild very suddenly. The storm was subsiding by midnight Universal time, so it was mostly over by midnight New York time.

What the figure shows is that after a first geomagnetic storm very early Friday, a strong geomagnetic storm started (as shown by the sharp jump in the GOES Hp chart) later on Friday, a little after noon New York time ["UTC" is currently New York + 4/5 hours], and that it was short — mostly over before midnight. Those of you out west never had a chance; it was all over before the sun set. Only people in far western Europe had good timing. Whatever the media was saying about later Friday night and Saturday night was somewhere between uninformed and out of date.  Your best bet was to be looking at this chart, which would have shown you that (despite predictions, which for auroras are always quite uncertain) there was nothing going on after Friday midnight New York time.

But the second reason is something that the figure doesn’t show. Even though this was a strong geomagnetic storm (the Kp index reached 7, the strongest in quite some time), the auroras didn’t migrate particularly far south. They were seen in the northern skies of Maine, Vermont and New Hampshire, but not (as far as I know) in Massachusetts. Certainly I didn’t see them. That just goes to show you (AccuWeather, and other media, are you listening?) that predicting the precise timing and extent of auroras is educated guesswork, and will remain so until current knowledge, methods and information are enhanced. One simply can’t know for sure how far south the auroras will extend, even if the impact on the geomagnetic field is strong.

For those who did see the auroras on Friday night, it was quite a sight. And for the rest of us who didn’t see them this time, there’s no reason for us to give up. Solar maximum is not over, and even though this is a rather weak sunspot cycle, the chances for more auroras over the next year or so are still pretty good.

Finally, a lesson for those who went out and stared at the sky for hours after the storm was long over — get your scientific information from the source!  There’s no need, in the modern world, to rely on out-of-date media reports.


Filed under: Astronomy, Science and Modern Society Tagged: auroras, press

by Matt Strassler at September 15, 2014 01:38 PM

Peter Coles - In the Dark

Frequentism: the art of probably answering the wrong question

Popped into the office for a spot of lunch in between induction events and discovered that Jon Butterworth has posted an item on his Grauniad blog about how particle physicists use statistics, and the ‘5σ rule’ that is usually employed as a criterion for the detection of, e.g. a new particle. I couldn’t resist bashing out a quick reply, because I believe that actually the fundamental issue is not whether you choose 3σ or 5σ or 27σ but what these statistics mean or don’t mean.

As was the case with a Nature piece I blogged about some time ago, Jon’s article focuses on the p-value, a frequentist concept that corresponds to the probability of obtaining a value at least as large as that obtained for a test statistic under a particular null hypothesis. To give an example, the null hypothesis might be that two variates are uncorrelated; the test statistic might be the sample correlation coefficient r obtained from a set of bivariate data. If the data were uncorrelated then r would have a known probability distribution, and if the value measured from the sample were such that its numerical value would be exceeded with a probability of 0.05 then the p-value (or significance level) is 0.05. This is usually called a ‘2σ’ result because for Gaussian statistics a variable has a probability of 95% of lying within 2σ of the mean value.
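
As a concrete illustration of that setup, here is a minimal Python example using scipy; the sample size and random seed are arbitrary choices.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # Two genuinely uncorrelated variates, so the null hypothesis is true here.
    x = rng.normal(size=50)
    y = rng.normal(size=50)

    r, p = stats.pearsonr(x, y)
    print(f"sample r = {r:.3f}, p-value = {p:.3f}")
    # The p-value is the probability, under the null, of an |r| at least this large;
    # it is not the probability that the null hypothesis is true.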

Anyway, whatever the null hypothesis happens to be, you can see that the way a frequentist would proceed would be to calculate what the distribution of measurements would be if it were true. If the actual measurement is deemed to be unlikely (say that it is so high that only 1% of measurements would turn out that big under the null hypothesis) then you reject the null, in this case with a “level of significance” of 1%. If you don’t reject it then you tacitly accept it unless and until another experiment does persuade you to shift your allegiance.

But the p-value merely specifies the probability that you would reject the null hypothesis if it were correct. This is what you would call making a Type I error. It says nothing at all about the probability that the null hypothesis is actually a correct description of the data. To make that sort of statement you would need to specify an alternative distribution, calculate the distribution of the test statistic under that alternative, and hence determine the statistical power of the test, i.e. the probability that you would actually reject the null hypothesis when it is incorrect and the alternative is true. To fail to reject the null hypothesis when it’s actually incorrect is to make a Type II error.
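
A quick simulation makes the distinction concrete. The sketch below, for a simple one-sample t-test with arbitrary choices of sample size, effect size and significance level, estimates the Type I error rate when the null is true and the power (the probability of rejecting the null when the alternative is true) when it is false.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    n, alpha, trials = 30, 0.05, 10000

    # Type I error rate: rejecting a true null (mean = 0) at the 5% level.
    type1 = np.mean([stats.ttest_1samp(rng.normal(0.0, 1.0, n), 0.0).pvalue < alpha
                     for _ in range(trials)])

    # Power: rejecting the null when the alternative (mean = 0.5) is actually true.
    power = np.mean([stats.ttest_1samp(rng.normal(0.5, 1.0, n), 0.0).pvalue < alpha
                     for _ in range(trials)])

    print(f"Type I rate ~ {type1:.3f} (should sit near alpha = {alpha})")
    print(f"power ~ {power:.3f}, so the Type II rate ~ {1 - power:.3f}")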

If all this stuff about p-values, significance, power and Type I and Type II errors seems a bit bizarre, I think that’s because it is. It’s so bizarre, in fact, that I think most people who quote p-values have absolutely no idea what they really mean. Jon’s piece demonstrates that he does, so this is not meant as a personal criticism, but it is a pervasive problem that results quoted in such a way are intrinsically confusing.

The Nature story mentioned above argues that in fact that results quoted with a p-value of 0.05 turn out to be wrong about 25% of the time. There are a number of reasons why this could be the case, including that the p-value is being calculated incorrectly, perhaps because some assumption or other turns out not to be true; a widespread example is assuming that the variates concerned are normally distributed. Unquestioning application of off-the-shelf statistical methods in inappropriate situations is a serious problem in many disciplines, but is particularly prevalent in the social sciences when samples are typically rather small.

While I agree with the Nature piece that there’s a problem, I don’t agree with the suggestion that it can be solved simply by choosing stricter criteria, i.e. a p-value of 0.005 rather than 0.05 or, in the case of particle physics, a 5σ standard (which translates to about 0.000001)! While it is true that this would throw out a lot of flaky ‘two-sigma’ results, it doesn’t alter the basic problem, which is that the frequentist approach to hypothesis testing is intrinsically confusing compared to the logically clearer Bayesian approach. In particular, most of the time the p-value is an answer to a question which is quite different from that which a scientist would actually want to ask, which is what the data have to say about the probability of a specific hypothesis being true or sometimes whether the data imply one hypothesis more strongly than another. I’ve banged on about Bayesian methods quite enough on this blog so I won’t repeat the arguments here, except to say that such approaches focus on the probability of a hypothesis being right given the data, rather than on properties that the data might have given the hypothesis.

I feel so strongly about this that if I had my way I’d ban p-values altogether…

Not that it’s always easy to implement a Bayesian approach. It’s especially difficult when the data are affected by complicated noise statistics and selection effects, and/or when it is difficult to formulate a hypothesis test rigorously because one does not have a clear alternative hypothesis in mind. Experimentalists (including experimental particle physicists) seem to prefer to accept the limitations of the frequentist approach rather than tackle the admittedly very challenging problems of going Bayesian. In fact, in my experience it seems that those scientists who approach data from a theoretical perspective are almost exclusively Bayesian, while those of an experimental or observational bent stick to their frequentist guns.

Coincidentally a paper on the arXiv not long ago discussed an interesting apparent paradox in hypothesis testing that arises in the context of high energy physics, which I thought I’d share here. Here is the abstract:

The Jeffreys-Lindley paradox displays how the use of a p-value (or number of standard deviations z) in a frequentist hypothesis test can lead to inferences that are radically different from those of a Bayesian hypothesis test in the form advocated by Harold Jeffreys in the 1930’s and common today. The setting is the test of a point null (such as the Standard Model of elementary particle physics) versus a composite alternative (such as the Standard Model plus a new force of nature with unknown strength). The p-value, as well as the ratio of the likelihood under the null to the maximized likelihood under the alternative, can both strongly disfavor the null, while the Bayesian posterior probability for the null can be arbitrarily large. The professional statistics literature has many impassioned comments on the paradox, yet there is no consensus either on its relevance to scientific communication or on the correct resolution. I believe that the paradox is quite relevant to frontier research in high energy physics, where the model assumptions can evidently be quite different from those in other sciences. This paper is an attempt to explain the situation to both physicists and statisticians, in hopes that further progress can be made.

This paradox isn’t a paradox at all; the different approaches give different answers because they ask different questions. Both could be right, but I firmly believe that one of them answers the wrong question.
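
Here is a minimal numerical sketch of the paradox, in the standard toy setup of a point null \(\mu=0\) against \(\mu\sim N(0,\tau^2)\); the choice \(\tau=\sigma=1\) and the equal prior odds are illustrative assumptions, not anything taken from the paper.

    import numpy as np
    from scipy import stats

    def bayes_factor_null(z, n, sigma=1.0, tau=1.0):
        """B01 for a point null mu = 0 versus mu ~ N(0, tau^2),
        with xbar ~ N(mu, sigma^2/n) and observed z = xbar * sqrt(n) / sigma."""
        r = n * tau**2 / sigma**2
        return np.sqrt(1.0 + r) * np.exp(-0.5 * z**2 * r / (1.0 + r))

    z = 3.0                              # a fixed "3 sigma" result
    p_value = 2 * stats.norm.sf(z)       # two-sided p-value, ~0.0027, independent of n

    for n in [10, 1000, 100000, 10000000]:
        B01 = bayes_factor_null(z, n)
        post_null = B01 / (1.0 + B01)    # posterior for the null, assuming equal prior odds
        print(f"n = {n:>8}: p = {p_value:.4f}, P(H0 | data) = {post_null:.3f}")

With the z-value held fixed the p-value never moves, while the posterior probability of the null creeps towards one as the sample size grows, which is the tension described in the abstract.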


by telescoper at September 15, 2014 12:32 PM

September 13, 2014

Tommaso Dorigo - Scientificblogging

Life After The 125 GeV Higgs: What Is Left Of Two-Higgs Doublet Models
I just read with interest the new paper on the arxiv by my INFN-Padova colleague Massimo Passera and collaborators, titled "Limiting Two-Higgs Doublet Models", and I thought I would explain to you here why I consider it very interesting and what are its conclusions.

read more

by Tommaso Dorigo at September 13, 2014 03:16 PM

ZapperZ - Physics and Physicists

When Stephen Hawking Burps, The World Media Goes Crazy!
Yes, I categorize this as a burp, which reveals how uninteresting and how little importance I put on this piece of news that has somehow garnered such widespread attention.

Whenever the name Stephen Hawking and the phrase "destruction of our universe" appear in the same sentence, that is just an incendiary combination that usually causes a world-wide explosion (pun intended). That's what happened when Hawking said that the Higgs boson that was discovered a couple of years ago at the LHC will result in the destruction of our universe.

My first reaction when I read this was: YAWN!

But of course, the public, and the popular media, ran away with it. After all, what more eye-catching headline can one make than something like "Higgs boson destroys the universe - Hawking"? However, I think those strangelets and micro black holes from LHC collisions that were supposed to swallow our universe were here first, and they demand to be the first to destroy it.

There is an opinion piece on the CNN webpage that addressed this issue. When CNN has to invite someone to write an opinion piece on a physics news story, you know that it has gotten way too much attention!

So, the simplified argument goes something like this -- the Higgs particle pervades space roughly uniformly, with a relatively high mass -- about 126 times that of the proton (a basic building block of atoms). Theoretical physicists noted even before the Higgs discovery that its relatively high mass would mean lower energy states exist. Just as gravity makes a ball roll downhill, to the lowest point, so the universe (or any system) tends toward its lowest energy state. If the present universe could one day transition to that lower energy state, then it is unstable now and the transition to a new state would destroy all the particles that exist today.

This would happen spontaneously at one point in space and time and then expand throughout the universe at the speed of light. There would be no warning, because the fastest a warning signal could travel is also at the speed of light, so the disaster and the warning would arrive at the same time.

That was the pedestrian description of what Hawking is talking about. But don't just stop there or you'll miss the CONTEXT of the probability of this happening.
 
Back to the universe. Whether the existence of Higgs boson means we're doomed depends on the mass of another fundamental particle, the top quark. It's the combination of the Higgs and top quark masses that determine whether our universe is stable.

Experiments like those at the Large Hadron Collider allow us to measure these masses. But you don't need to hold your breath waiting for the answer. The good news is that such an event is very unlikely and should not occur until the universe is many times its present age.
.
.
So don't lose any sleep over possible danger from the Higgs boson, even if the most famous physicist in the world likes to speculate about it. You're far more likely to be hit by lightning than taken out by the Higgs boson.

 See what I mean when I said that I yawned when I first read about Hawking's speculation?

Zz.

by ZapperZ (noreply@blogger.com) at September 13, 2014 01:28 PM

September 12, 2014

Matt Strassler - Of Particular Significance

Auroras — Quantum Physics in the Sky — Tonight?

Maybe. If we collectively, and you personally, are lucky, then maybe you might see auroras — quantum physics in the sky — tonight.

Before I tell you about the science, I’m going to tell you where to get accurate information, and where not to get it; and then I’m going to give you a rough idea of what auroras are. It will be rough because it’s complicated and it would take more time than I have today, and it also will be rough because auroras are still only partly understood.

Bad Information

First though — as usual, do NOT get your information from the mainstream media, or even the media that ought to be scientifically literate but isn’t. I’ve seen a ton of misinformation already about timing, location, and where to look. For instance, here’s a map from AccuWeather, telling you who is likely to be able to see the auroras.


Don’t believe this map by AccuWeather. Oh, sure, they know something about clouds. But auroras, not much.

See that line below which it says “not visible”? This implies that there’s a nice sharp geographical line between those who can’t possibly see it and those who will definitely see it if the sky is clear. Nothing could be further from the truth. No one knows where that line will lie tonight, and besides, it won’t be a nice smooth curve. There could be auroras visible in New Mexico, and none in Maine… not because it’s cloudy, but because the start time of the aurora can’t be predicted, and because its strength and location will change over time. If you’re north of that line, you may see nothing, and if you’re south of it you still might see something.  (Accuweather also says that you’ll see it first in the northeast and then in the midwest.  Not necessarily.  It may become visible across the U.S. all at the same time.  Or it may be seen out west but not in the east, or vice versa.)

Auroras aren’t like solar or lunar eclipses, absolutely predictable as to when they’ll happen and who can see them. They aren’t even like comets, which behave unpredictably but at least have predictable orbits. (Remember Comet ISON? It arrived exactly when expected, but evaporated and disintegrated under the Sun’s intense stare.) Auroras are more like weather — and predictions of auroras are more like predictions of rain, only in some ways worse. An aurora is a dynamic, ever-changing phenomenon, and to predict where and when it can be seen is not much more than educated guesswork. No prediction of an aurora sighting is EVER a guarantee. Nor is the absence of an aurora prediction a guarantee one can’t be seen; occasionally they appear unexpectedly.  That said, the best chance of seeing one further away from the poles than usual is a couple of days after a major solar flare — and we had one a couple of days ago.

Good Information and How to Use it

If you want accurate information about auroras, you want to get it from the Space Weather Prediction Center, click here for their main webpage. Look at the colorful graph on the lower left of that webpage, the “Satellite Environment Plot”. Here’s an example of that plot taken from earlier today:

The "Satellite Environment Plot" from earlier today; focus your attention on the two lower charts, the one with the red and blue wiggly lines (GOES Hp) and on the one with the bars (Kp Index).  How to use them is explained in the text.

The “Satellite Environment Plot” from earlier today; focus your attention on the two lower charts, the one with the red and blue wiggly lines (GOES Hp) and on the one with the bars (Kp Index). How to use them is explained in the text.

There’s a LOT of data on that plot, but for lack of time let me cut to the chase. The most important information is on the bottom two charts.

The bottom row, the “Estimated Kp index”, tells you, roughly, how much “geomagnetic activity” there is (i.e., how disturbed is the earth’s magnetic field). If the most recent bars are red, then the activity index is 5 or above, and there’s a decent chance of auroras. The higher the index, the more likely are auroras and the further away from the earth’s poles they will be seen. That is, if you live in the northern hemisphere, the larger is the Kp index, the further south the auroras are likely to be visible. [If it's more than 5, you've got a good shot well down into the bulk of the United States.]

The only problem with the Kp index is that it is a 3-hour average, so it may not go red until the auroras have already been going for a couple of hours! So that’s why the row above it, “GOES Hp”, is important and helpful. This plot gives you much more up-to-date information about what the magnetic field of the earth is up to. Notice, in the plot above, that the magnetic field goes crazy (i.e. the lines get all wiggly) just around the time that the Kp index starts to be yellow or starts to be red.

Therefore, keep an eye on the GOES Hp chart. If you see it start to go crazy sometime in the next 48 hours, that’s a strong indication that the blast of electrically-charged particles from the Sun, thrown out in that recent solar flare, has arrived at the Earth, and auroras are potentially imminent.  It won’t tell you how strong they are though.  Still, this is your signal, if skies near you are dark and sufficiently clear, to go out and look for auroras. If you don’t see them, try again later; they’re changeable. If you don’t see them over the coming hour or so, keep an eye on the Kp index chart. If you’re in the mid-to-northern part of the U.S. and you see that index jump higher than 5, there’s a significant geomagnetic storm going on, so keep trying. And if you see it reach 8 or so, definitely try even if you’re living quite far south.
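
If you like, the rule of thumb above can be written down explicitly. The sketch below applies it to a hand-typed list of Kp readings; the numbers are invented for illustration, not real measurements.

    # A toy version of the rule of thumb above, applied to a hand-typed list of
    # (UTC hour, estimated Kp) readings; the numbers are invented for illustration.
    readings = [(0, 2.7), (3, 3.3), (6, 5.0), (9, 6.7), (12, 7.0), (15, 4.3)]

    for hour, kp in readings:
        if kp >= 8:
            note = "strong storm: worth a look even from quite far south"
        elif kp >= 5:
            note = "geomagnetic storm: decent chance at mid-to-northern latitudes"
        else:
            note = "quiet: auroras unlikely away from the poles"
        print(f"{hour:02d}:00 UTC  Kp = {kp:.1f}  -> {note}")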

Of course, don’t forget Twitter and other real-time feeds.  These can tell you whether and where people are seeing auroras. Keeping an eye on Twitter and hashtags like #aurora, #auroras, #northernlights is probably a good idea.

One more thing before getting into the science. We call these things the “northern lights” in the northern hemisphere, but clearly, since they can be seen in different places, they’re not always or necessarily north of any particular place. Looking north is a good idea — most of us who can see these things tonight or tomorrow night will probably be south of them — but the auroras can be overhead or even south of you. So don’t immediately give up if your northern sky is blocked by clouds or trees. Look around the sky.

Auroras: Quantum Physics in the Sky

Now, what are you seeing if you are lucky enough to see an aurora? Most likely what you’ll see is green, though red, blue and purple are common (and sometimes combinations which give other colors, but these are the basic ones.)  Why?

The typical sequence of events preceding a bright aurora is this:

  1. A sunspot — an area of intense magnetic activity on the Sun, where the sun’s apparent surface looks dark — becomes unstable and suffers an explosion of sorts, a solar flare.
  2. Associated with the solar flare may be a “coronal mass ejection” — the expulsion of huge numbers of charged (and neutral) particles out into space. These charged particles include both electrons and ions (i.e. atoms which have lost one or more electrons). (Coronal mass ejections, which are not well understood, can occur in other ways, but the strongest are from big flares.)
  3. These charged particles travel at high speeds (much faster than any current human spaceship, but much slower than the speed of light) across space. If the sunspot that flared happens to be facing Earth, then some of those particles will arrive at Earth after as little as a day and as much as three days. Powerful flares typically make faster particles which therefore arrive sooner.
  4. When these charged particles arrive near Earth, it may happen (depending on what the Sun’s magnetic field and the Earth’s magnetic field and the magnetic fields near the particles are all doing) that many of the particles may spiral down the Earth’s magnetic field, which draws them to the Earth’s north and south magnetic poles (which lie close to the Earth’s north and south geographic poles.)
  5. When these high-energy particles (electrons and ions) rain down onto the Earth, they typically will hit atoms in the Earth’s upper atmosphere, 40 to 200 miles up. The ensuing collisions kick electrons in the struck atoms into “orbits” that they don’t normally occupy, as though they were suddenly moved from an inner superhighway ring road around a city to an outer ring road highway. We call these outer orbits “excited orbits”, and an atom of this type an “excited atom”.
  6. Eventually the electrons fall from these “excited orbits” back down to their usual orbits. This is often referred to as a “quantum transition” or, colloquially, a “quantum jump”, as the electron is never really found between the starting outer orbit and the final inner one; it almost instantaneously transfers from one to the other.
  7. In doing so, the jumping electron will emit a particle of electromagnetic radiation, called a “photon”. The energy of that photon, thanks to the wonderful properties of quantum mechanics, is always the same for any particular quantum transition.
  8. Visible light is a form of electromagnetic radiation, and photons of visible light are, by definition, ones that our eyes can see. The reason we can see auroras is that for particular quantum transitions of oxygen and nitrogen, the photons emitted are indeed those of visible light. Moreover, because the energy for each photon from a given transition is always the same, the color of the light that our eyes see, for that particular transition, is always the same. There is a transition in oxygen that always gives green light; that’s why auroras are often green. There is a more fragile transition that always gives red light; powerful auroras, which can excite oxygen atoms even higher in the atmosphere, where they are more diffuse and less likely to hit something before they emit light, can give red auroras. Similarly, nitrogen molecules have a transition that can give blue light. (Other transitions give light that our eyes can’t see.)  Combinations of these can give yellows, pinks, purples, whites, etc. But the basic colors are typically green and red, occasionally blue, etc. (A short calculation of these photon energies appears just below this list.)
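
Here is that calculation for the two best-known auroral oxygen lines, using the standard textbook wavelengths of 557.7 nm (green) and 630.0 nm (red) and the relation E = hc/λ.

    # Photon energy E = h*c/lambda for the two best-known auroral oxygen lines;
    # 557.7 nm (green) and 630.0 nm (red) are standard textbook wavelengths.
    h = 6.626e-34      # Planck constant, J*s
    c = 2.998e8        # speed of light, m/s
    eV = 1.602e-19     # joules per electron-volt

    for name, lam_nm in [("oxygen green line", 557.7), ("oxygen red line", 630.0)]:
        E = h * c / (lam_nm * 1e-9)
        print(f"{name}: {lam_nm} nm  ->  {E / eV:.2f} eV per photon")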

So if you are lucky enough to see an aurora tonight or tomorrow night, consider what you are seeing.  Huge energies involving magnetic fields on the Sun have blown particles — the same particles that are of particular significance to this website — into space.  Particle physics and atomic physics at the top of the atmosphere lead to the emission of light many miles above the Earth.  And the remarkable surprises of quantum mechanics make that light not a bland grey, with all possible colors blended crudely together, but instead a magical display of specific and gorgeous hues, reminding us that the world is far more subtle than our daily lives would lead us to believe.


Filed under: Astronomy, Particle Physics Tagged: astronomy, atoms, auroras, press

by Matt Strassler at September 12, 2014 03:33 PM

ZapperZ - Physics and Physicists

The New Physical Review Journals Website - It Sucks!
Yeah, so from the title, you can already tell how I feel about it.

I look at the Physical Review Journals webpage quite often, at least a few times a week. After all, PRL is a journal that I scan pretty often, and I'm sure most physicists do as well. They changed the look and feel of the webpage several months ago, and right off the bat, there were a few annoying things.

First of all, one used to be able to immediately see the current list of new papers appearing that week (for PRL, for example). Now, you need to click a few links to find it.

The page heavily emphasizes "highlighted" papers, as if they are desperately trying to push to everyone how important these are. I don't mind reading them, but I'd like to see the entire listing of that week's papers first and foremost. This somehow has been pushed back.

Lastly, and this is what annoys me the most, the new pages seem not to be optimized for tablet viewing, at least not for me. I often read these journals on my iPad. I have an iPad 3, and I use the Safari browser that came with it. I had no problem with the old webpage, or with other journals' webpages. But the new Phys. Rev. webpage is downright annoying! Part of the table of contents "floats" with the page as one scrolls down! I've uploaded a video of what I'm seeing so that you can see it for yourself.

I've e-mailed my complaints to the Feedback link. I had given it a few months in case this was a glitch or if they were still trying to sort out the kinks. But this seems to have persisted. I can't believe I'm the only one having this problem.

It is too bad. They had a nice, simple design before, and I could find things very quickly. Now, in trying to make it more sophisticated and more slick, they've ruined the usability for us who care more about getting the information than the bells and whistles.

Zz.

by ZapperZ (noreply@blogger.com) at September 12, 2014 01:47 PM

astrobites - astro-ph reader's digest

The Impossible Star

Title: KIC 2856960: the impossible triple star
Authors: T.R. Marsh, D.J. Armstrong, and P.J. Carter
First Author’s Institution: University of Warwick, Coventry, UK
Status: Accepted to MNRAS

Binary stars are extremely common in the universe, and many of them reside in triple star systems. Moreover, because of the close orbits of many binary systems, they are often observed to eclipse, which gives astronomers extremely detailed information on both stars. However, third stars in triple systems tend to be at longer orbital separations, lessening the likelihood of all three bodies eclipsing. But thanks to Kepler’s continuous coverage, a number of multiply eclipsing triple systems have now been observed. KIC 2856960 is one of them—or is it?

KIC 2856960 was first identified as a binary system in 2011, with the main pair eclipsing every 0.258 days. But in 2012, a secondary, deeper dip was found, which lasts about one day every ~204 days. This “dip” is actually the complicated series of dips seen in Figure 1, as the close binary passes in front of some other object. These dips could have been caused by a planet orbiting the binary system, but in 2013, a different group proved that the third object must be a star. They did this by finding variations in the orbital period of the binary, variations caused by the changing light travel time as the binary orbits a third star.

No one had attempted to model what the orbits, masses, and radii of these three stars could be to match the observed light curves from Kepler, and so the authors of this paper dove in.


Figure 1. Kepler light curves shown in black, both panels. Models are shown in blue. The small variations on the edges are the close binary’s eclipses, while the larger dips in the middle are due to a geometrically complicated scenario where the binary crosses a third star. The left panel shows the best fit triple star model, which is a poor fit. The right panel shows the same basic model, but ignores Kepler’s laws. This produces a much better fit, but has the unfortunate drawback of breaking the laws of physics. Credit: Marsh et al., 2014.

Because the binary eclipses so many times, yielding extremely precise data, the authors focused only on the binary’s orbit at first, ignoring the large dips. They explained that they hoped to lock down as many parameters as possible for the binary system, leaving less parameter space to explore when they moved on to the triple. They were able to improve on past work, partially due to the accumulation of more data as Kepler continued to observe, but mostly thanks to short cadence Kepler data unavailable to previous groups. They strongly confirmed the earlier findings that the binary is in orbit around a third star.

The authors next attempted to model a triple star system, but found that even their best fit is quite poor (see Figure 1, left panel). It is also astrophysically impossible, yielding a radius for one of the stars of 1.29 R☉ (reasonable), but a mass of 10⁻⁵ M☉ (impossibly small density; this star would completely overflow its Roche lobe). The authors did manage to achieve a statistically good fit (Figure 1, right panel), but only if they ignored Kepler’s laws. This is, of course, also physically impossible, but they argue that this “solution” allows them to qualitatively investigate the problems with the triple star solution. Specifically, it shows that the data requires a longer binary separation than is physically possible in the three star model, especially to explain the double dips labeled B-C-D in Figure 1, and repeated at roughly 1119 on the x axis. Put very simply, the dips last too long for any physically possible three star system.
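
(A rough way to see what "completely overflow its Roche lobe" means quantitatively; this is a back-of-the-envelope aside, not a calculation from the paper. The Eggleton (1983) approximation gives the Roche-lobe radius as

\[ \frac{R_L}{a} \simeq \frac{0.49\,q^{2/3}}{0.6\,q^{2/3} + \ln\!\left(1 + q^{1/3}\right)} \approx 0.49\,q^{1/3} \quad \text{for } q \ll 1, \]

where $q$ is the ratio of the star's mass to its companion's and $a$ is the orbital separation. For $q \sim 10^{-5}$ the lobe is only about one percent of the separation, so a $1.29\,R_\odot$ star could avoid overflowing only in an orbit more than a hundred solar radii across.)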


Figure 2. Three different orbital modes considered in this paper. In all cases, the observer is located at the bottom of the figure and plus signs indicate the center of mass of the system. The lower panels are a 10x magnified view of the dotted box, centered around the close binary. In the left two panels, the orbit of the center of mass of the close binary is shown by the solid ellipse. In the left panel (triple system), the orbit of star 3 is shown by the dashed ellipse. In the middle panel (double binary), “star 3” is a second binary, and the dashed ellipse shows the orbit of the center of mass of that binary, with the red circle showing the orbit of “star 3” around a fixed point on the ellipse. The right panel shows a hierarchical system, where the close binary (orbit given by the tiny red circle in lower panel) orbits a third star (orbit given by the large red circle in lower panel/red circle in upper panel), and this triple (moving on the solid ellipse) in turn orbits a fourth star (moving on the dashed ellipse). Credit: Marsh et al., 2014.

A way around this problem is to introduce a fourth star into the model. Either “star 3” can be replaced by a second binary, or the first, close binary can orbit a third star, and that triple system can in turn orbit a fourth star. This second method is called a hierarchical quadruple. Both models have the effect of slowing the relative speed between the close binary and the more distant star, allowing the long transit time seen in the Kepler lightcurves. See Figure 2 for a representation of these different model scenarios.
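
(A schematic way to see why slowing the relative speed helps, my own rough scaling rather than the authors' argument: the big dips last roughly as long as it takes the close binary to cross the face of the star it is transiting,

\[ t_{\rm dur} \sim \frac{2\,(R_3 + a_{12})}{v_{\rm rel}}, \]

where $R_3$ is the radius of the transited star, $a_{12}$ the close binary's separation, and $v_{\rm rel}$ the relative sky velocity at conjunction. A physically allowed triple cannot make $v_{\rm rel}$ small enough, but a fourth body lets the transited star move along with the binary during the crossing, stretching the dips without stretching the orbits.)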


Figure 3. Best fit quadruple model plotted in blue over black lightcurve data. This model additionally allows the epoch of the close binary to vary, which improves the fit, but results in the close binary model eclipses arriving too early compared to the data. Credit: Marsh et al., 2014.

These quadruple models yield reasonably good fits to the data, and are physically possible, which is an improvement over the triple model! The double binary model provides slightly more reasonable results than the hierarchical model. But the authors caution that there are still unresolved problems. First, the models yield uneven radii for the stars in the close binary (a 2.5–4× difference in radius). This is a problem because the lightcurve depths indicate that the two stars must be of similar brightness, so it is hard to imagine a scenario where these two unevolved, low-mass, main sequence stars end up with such different radii. Another problem is that even the best quadruple model fit is not a great fit to the data. By opening up more parameters (remember that the authors originally fixed the close binary’s orbital parameters based on data excluding the big dips), they achieved a better total χ², but at a cost: the close binary’s model eclipses now arrive too early when compared to the data (see Figure 3 for their best fit, though the scaling means the eclipse timing offsets aren’t really visible on the plot).

The authors admit that even their best model cannot tell the whole story of this system. But it’s a pretty interesting story so far!

 

 

by Korey Haynes at September 12, 2014 01:13 PM

Symmetrybreaking - Fermilab/SLAC

Astrophysics at the edge of the Earth

Conducting research at the South Pole takes a unique level of commitment.

The sun sets but once a year at the South Pole, and it is a prolonged process. During a recent stay at the Amundsen-Scott South Pole Station, postdoctoral researcher Jason Gallicchio saw it hover along the horizon for about a week before dropping out of sight for six months. The station’s chefs prepared an eight-course meal to mark the occasion.

The South Pole is not the easiest place to mark the passage of time, but the spectacular view it offers of the night sky has made it one of the best places to study astrophysics—and a unique place to work.

Gallicchio, an associate fellow for the Kavli Institute for Cosmological Physics at the University of Chicago, is part of an astrophysics experiment at the South Pole Telescope. Last year he spent nearly a full year at the station, including a “winterover,” during which the crew for his experiment dwindled to just him and a colleague.

Life at the station

Making it to the South Pole took interviews and a rigorous training program that included emergency medical and firefighting training and a mandatory psychological evaluation, Gallicchio says.

“People were bombarding me with information—which grease to grease which parts with, how to analyze data in certain ways,” Gallicchio says. “It was totally intense every day. It was one of the best, most educational things in my life.”

Gallicchio landed at the South Pole in early January 2013 and remained there until mid-November.

“When you get off the plane at the South Pole, there is a feeling like you’re out in the ocean,” says University of Chicago physicist John Carlstrom, the principal investigator for the South Pole Telescope team, who has logged 15 round trips to the South Pole over the past two decades. “It’s just a featureless horizon. The snow is so dry it feels like Styrofoam.”

The living quarters at the South Pole station are comfortable and dorm-like, Gallicchio says. A brightly lit greenhouse provides an escape from the constant night and nosebleed-inducing dryness. It also supplies some greens to the daily menu, but no fresh fruit or nuts.

“One of the most popular things after dinner, ironically, is ice cream,” he says.

Gallicchio fashioned his sleeping schedule around his duties at the telescope. “I was always on-call,” he says. “A lot of people there were in that situation. It was totally acceptable to be eating or watching a movie and then to go off to work and come back.”

The living quarters are about a half-mile hike from the telescope building.

Because welding is challenging in the Antarctic chill, and because the long winter season limits construction time, the telescope was designed in pieces that could be quickly fastened together with thousands of structural bolts. The ski-equipped cargo planes that carry supplies to the South Pole station are limited to carrying 26,000 pounds per trip; the bolts practically required their own dedicated flight.

Much of the telescope’s instrumentation is tucked away in a heated building beneath the exposed dish. Its panels, machined to hair’s-width precision, are slightly warmed to keep them free of frost.

Earlier astrophysics experiments at the South Pole provided important lessons for how to best build, maintain and operate the South Pole Telescope, Carlstrom says. “Everything left out and exposed to the cold will fail in a way you probably hadn’t thought of.”

Measuring 10 meters across and weighing 280 tons, the South Pole Telescope precisely maps temperature variations in the cosmic microwave background, a kind of faint static left in the sky from the moment that light first escaped the chaos that followed the big bang.

The telescope was installed in 2007 and upgraded in 2012 to be sensitive to a type of pattern, called polarization, in the CMB. Studying the polarization of the CMB could tell scientists about the early universe.

Another South Pole experiment, BICEP2, recently reported finding a pattern that could be the first proof that our universe underwent a period of rapid expansion the likes of which we haven’t seen since just after the big bang. One of the goals of the South Pole Telescope is to further investigate and refine this result.

The South Pole Telescope’s next upgrade, which will grow its array from 1,500 to 15,000 detectors, is set for late 2015.

Icy isolation

During the winter months, the average temperature is negative 72 degrees, but “a lot of people find the altitude much worse than the cold,” Gallicchio says. No flights are scheduled in or out.

Gallicchio says he had never experienced such isolation. Even if you’re aboard the International Space Station, if something goes wrong you’re only a few hours away from civilization, he says. During a winter at the South Pole, there’s no quick return trip.

During the warmer months, up to about 200 scientists and support staff can occupy the South Pole station at any given time. During the winter, the group is cut to about 50.

During Gallicchio’s winterover, he was generally responsible for the telescope’s data acquisition and software systems, though he occasionally assisted with “crawling around fixing things.” Gallicchio could work on some of the telescope’s computer and electronics systems from the main station, while his more seasoned colleague Dana Hrubes often spent at least eight hours a day at the telescope. “He really taught me a lot and was a great partner,” Gallicchio says.

At one point, the power went out, and his emergency training kicked in. Gallicchio and Hrubes began the steps needed to dock the telescope to protect it from the elements in case its heating elements ran out of backup power.

“Power going out is a big deal, as all of the heat from the station comes from waste heat in the generators, and eventually there’s going to be no heat,” he says. “The circuit-breaker kept tripping and it took [the staff] a while to figure out that a control cable had frayed and shorted itself. It got a little scary.”

Once the power plant mechanics found the problem, they repaired it and got all systems back online. “They did a great job.”

A view like no other

Gallicchio recalls the appearance of the first star after the weeklong sunset. Gradually, more and more stars appeared. Eventually, the sky was aglow.

On some nights, the southern lights, the aurora australis, took over. “When the auroras are active they are by far the brightest thing,” he says. “Everything has a green tint to it, including the snow and the buildings.”

Besides missing family, friends, bike rides and working in coffee shops, Gallicchio says he did enjoy his time at the South Pole. “Nothing about the experience itself would keep me from doing it again.”

 


by Glenn Roberts Jr. at September 12, 2014 01:00 PM

September 11, 2014

Clifford V. Johnson - Asymptotia

Screen Junkies Chat: Guardians of the Galaxy
You may recall that back in June I had a chat with Hal Rudnick over at Screen Junkies about science and time travel in various movies (including the recent "X-Men: Days of Future Past"). It was a lot of fun, and people seemed to like it a lot. Well, some good news: On Tuesday we recorded (along with my Biophysicist colleague Moh El-Naggar) another chat for Screen Junkies, this time talking a bit about the fun movie "Guardians of the Galaxy"! Again, a lot of fun was had... I wish you could hear all of the science (and more) that we went into, but rest assured that they* did a great job of capturing some of it in this eight-minute episode. Have a look. (Embed below the more-click): [...] Click to continue reading this post

by Clifford at September 11, 2014 10:19 PM

Clifford V. Johnson - Asymptotia

But How…?
I get questions from time to time about where the drawings on the site come from, or how they are done. The drawing I had in one of last week's posts is a good example of one that can raise questions, partly because you don't get a sense of scale after I've done a scan and cropped off the notebook edges and so forth. Also, people are not expecting much in the way of colour from drawing on location. Anyway, the answer is, yes I drew it, and yes it was drawn on location. I was just sitting on a balcony, chose which part of the view I wanted to represent on the page, and went for it. I wanted to spread it across two pages of my notebook and make something of a tall sketch. See above right (click for larger view.) A quick light pencil rough helped me place things, and then a black [...] Click to continue reading this post

by Clifford at September 11, 2014 08:46 PM

Symmetrybreaking - Fermilab/SLAC

What Hawking really meant

Fermilab physicist Don Lincoln explains the idea of a metastable universe, what it has to do with the Higgs boson, and why we're still in good shape.

If you’re a science enthusiast, this week you have likely encountered headlines claiming that physicist Stephen Hawking thinks the Higgs boson will cause the end of the universe.

This is a jaw-dropping misrepresentation of science. The universe is safe and will be for a very long time—for trillions of years.

To understand how abominably Hawking’s words have been twisted, first we need to understand his statement. To paraphrase just a little, Hawking said that in a world in which the Higgs boson and another fundamental particle—the top quark—have the masses that scientists have measured them to have, the universe is in a metastable state.

Basically, metastable means “kind of stable.” So what does that mean? Let’s consider an example. Take a pool cue and lay it on the pool table. The cue is stable; it’s not going anywhere. Take the same cue and balance it on your finger. That’s unstable; under almost any circumstances, the cue will fall over.

The analogy for a metastable object is a barstool. Under almost all circumstances, the stool will sit there for all eternity. However, if you bump the stool hard enough, it will fall over. When the stool falls, it is more stable than it was, just like the pool cue lying on the table.

Now we need to bring in the universe and the laws that govern it. Here is an important guiding principle: The universe is lazy—a giant, cosmic couch potato. If at all possible, the universe will figure out a way to move to the lowest energy state it can. A simple analogy is a ball placed on the side of a mountain. It will roll down the mountainside and come to rest at the bottom of the valley. This ball will then be in a stable configuration.

The universe is the same way. After the cosmos was created, the fields that make up the universe should have arranged themselves into the lowest possible energy state.

There is a proviso. It is possible that there could be little “valleys” in the energy slope. As the universe cooled, it might have been caught in one of those little valleys. Ideally, the universe would like to fall into the deeper valley below, but it could be trapped.

This is an example of a metastable state. As long as the little valley is deep enough, it’s hard to get out of. Indeed, using classical physics, it is impossible to get out of it.

However, we don’t live in a classical world. In our universe, we must take into account the nature of quantum mechanics. There are many ways to describe the quantum realm, but one of the properties most relevant here is “rare things happen.” In essence, if the universe was trapped in a little valley of metastability, it could eventually tunnel out of the valley and fall down into the deeper valley below.
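
(For readers who like a formula, this is the standard false-vacuum decay result, a gloss that goes beyond what Lincoln writes: the rate of such quantum jumps per unit volume is

\[ \frac{\Gamma}{V} \simeq A\,e^{-S_E}, \]

where $S_E$ is the Euclidean action of the "bounce" configuration that carries the field through the barrier and $A$ is a prefactor set by the shape of the valleys. Because the rate is exponentially suppressed, even a modestly deep little valley can trap the universe for times vastly longer than its current age.)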

So what are the consequences of the universe slipping from one valley to another? Well, the rules of the universe are governed by the valley in which it finds itself. In the metastable valley that defines our familiar universe, we have the rules of physics and chemistry that allow matter to assemble into atoms and, eventually, us.

If the universe slipped into a different valley, the rules that govern matter and energy would be different. This means, among other things, particles such as quarks and leptons might be impossible. The known forces that govern the interaction of those particles might not apply. In short, there is no reason to think we’d exist at all.

Would we have any warning if this transition occurred? Actually, we’d have no warning at all. If, somewhere in the cosmos, the universe made a transition from a metastable valley to a deeper one, the laws of physics would change, and that change would sweep outward at the speed of light. As the shockwave passed over the solar system, we’d simply disappear as the laws that govern the matter that makes us up ceased to apply. One second we’d be here; the next we’d be gone.

Coming back to the original question, what does the Higgs boson tell us about this? It turns out that we can use the Standard Model to tell us whether we are in a stable, unstable or metastable universe.

We know we don’t live in an unstable one, because we’re here, but the other two options are open. So, what is the answer? It depends on two parameters: the mass of the top quark and the mass of the Higgs boson.
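
(A slightly more technical gloss, simplified from the standard calculation rather than taken from the article: the question is whether the Higgs self-coupling $\lambda$ runs negative at very high energies. Keeping only the dominant Higgs and top contributions, and dropping the electroweak gauge terms, the one-loop running is roughly

\[ 16\pi^2\,\frac{d\lambda}{d\ln\mu} \approx 24\lambda^2 + 12\lambda y_t^2 - 6 y_t^4, \]

so a heavier top quark (larger Yukawa coupling $y_t$) drags $\lambda$ down, while a heavier Higgs (larger $\lambda$) props it up. With the measured masses, $\lambda$ appears to cross zero far above collider energies, which is precisely the "metastable" verdict.)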

If we follow our understanding of the Standard Model, combined with our best measurements, it appears that we live in a metastable universe that could one day disappear without warning. You can be forgiven if you take that pronouncement as a reason to indulge in some sort of rare treat tonight.

But before you splurge too much, take heed of a few words of caution. Using the same Standard Model we used to figure out whether the cosmos is metastable, we can predict how long it is likely to take for quantum mechanics to let the universe slip from the metastable valley to the stable one: It will take trillions of years.

Mankind has only existed for about 100,000 years, and the sun will grow to a red giant and incinerate the Earth in about 5 billion years. Since we’re talking about the universe surviving in its metastable state for trillions of years, perhaps overindulging tonight is a bad idea after all.

It is important to note that the discovery of the Higgs boson has no effect on whether the universe is in a metastable state; measuring its mass merely told us which situation we are in. If we live in a metastable cosmos, it has been that way since the universe was created.

Returning to the original, overly hyped media stories, you can see that there was a kernel of truth and a barrel full of hysteria. There is no danger, and it’s completely OK to resume watching with great interest the news reports of the discovery and careful measurement of the Higgs boson. And, yes, you have to go to work tomorrow.


A version of this article was published in Fermilab Today.

 


by Don Lincoln, Fermi National Accelerator Laboratory at September 11, 2014 06:36 PM

Clifford V. Johnson - Asymptotia

No Pressure Then…
Saw this the other day: [image] Eek! Better get around to writing my remarks before Saturday! In case you're wondering, find out more about the Bridging the STEM Divide [...] Click to continue reading this post

by Clifford at September 11, 2014 04:56 PM

September 10, 2014

The n-Category Cafe

Quasistrict Symmetric Monoidal 2-Categories via Wire Diagrams

Guest post by Bruce Bartlett

I recently put an article on the arXiv:

It’s about Chris Schommer-Pries’s recent strictification result from his updated thesis, that every symmetric monoidal bicategory is equivalent to a quasistrict one. Since symmetric monoidal bicategories can be viewed as the syntax for ‘stable 3-dimensional algebra’, one aim of the paper is to write this stuff out in a diagrammatic notation, like this:

[diagram]

The other aim is to try to strip down the definition of a ‘quasistrict symmetric monoidal bicategory’, emphasizing the central role played by the interchangor isomorphisms. Let me explain a bit more.

Motivation

Firstly, some motivation. For a long time now I’ve been finishing up a project together with Chris Douglas, Chris Schommer-Pries and Jamie Vicary about 1-2-3 topological quantum field theories. The starting point is a generators-and-relations presentation of the oriented 3-dimensional bordism bicategory (objects are closed 1-manifolds, morphisms are two-dimensional bordisms, and 2-morphisms are diffeomorphism classes of three-dimensional bordisms between those). So, you present a symmetric monoidal bicategory from a bunch of generating objects, 1-morphisms, and 2-morphisms, and a bunch of relations between the 2-morphisms. These relations are written diagrammatically. For instance, the ‘pentagon relation’ looks like this:

[diagram]

To make rigorous sense of these diagrams, we needed a theory of presenting symmetric monoidal bicategories via generators-and-relations in the above sense. So, Chris Schommer-Pries worked such a theory out, using computads, and proved the above strictification result. This implies that we could use the simple pictures above to perform calculations.

Strictifying symmetric monoidal bicategories

The full algebraic definition of a symmetric monoidal bicategory is quite intimidating, comprising a large amount of data that must satisfy a host of coherence axioms. A self-contained definition can be found in this paper of Mike Stay. So, it’s of interest to see how much of this data can be strictified, at the cost of passing to an equivalent symmetric monoidal bicategory.

Before Schommer-Pries’s result, the best strictification result was that of Gurski and Osorno.

Theorem (GO). Every symmetric monoidal bicategory is equivalent to a semistrict symmetric monoidal 2-category.

Very roughly, a semistrict symmetric monoidal 2-category consists of a strict 2-category equipped with a strict tensor product, plus the following coherence data (see e.g. HDA1 for a fuller account) satisfying a bunch of equations:

  • tensor naturators, i.e. 2-isomorphisms $\Phi_{f,g} : (f' \otimes g') \circ (f \otimes g) \Rightarrow (f' \circ f) \otimes (g' \circ g)$
  • braidings, i.e. 1-morphisms $\beta_{A,B} : A \otimes B \rightarrow B \otimes A$
  • braiding naturators, i.e. 2-isomorphisms $\beta_{f,g} : \beta_{A,B} \circ (f \otimes g) \Rightarrow (g \otimes f) \circ \beta_{A,B}$
  • braiding bilinearators, i.e. 2-isomorphisms $R_{(A|B,C)} : (\mathrm{id} \otimes R_{B,C}) \circ (R_{A,B} \otimes \mathrm{id}) \Rightarrow R_{A, B \otimes C}$
  • symmetrizors, i.e. 2-isomorphisms $\nu_{A,B} : \mathrm{id}_{A \otimes B} \Rightarrow R_{B,A} \circ R_{A,B}$

So — Gurski and Osorno’s result represents a lot of progress. It says that the other coherence data in a symmetric monoidal bicategory (associators for the underlying bicategory, associators for the underlying monoidal bicategory, pentagonator, unitors, adjunction data, …) can be eliminated, or more precisely, strictified.

Schommer-Pries’s result goes further.

Theorem (S-P). Every symmetric monoidal bicategory is equivalent to a quasistrict symmetric monoidal 2-category.

A quasistrict symmetric monoidal 2-category is a semistrict symmetric monoidal 2-category where the braiding bilinearators and symmetrizors are equal to the identity. So - only the tensor naturators, braiding 1-morphisms, and braiding naturators remain!

The method of proof is to show that every symmetric monoidal bicategory admits a certain kind of presentation by generators-and-relations (a ‘quasistrict 3-computad’). And the gizmo built out of a quasistrict 3-computad is a quasistrict symmetric monoidal 2-category! Q.E.D.

Stringent symmetric monoidal 2-categories

In my article, I reformulate the definition of a quasistrict symmetric monoidal 2-category a bit, removing redundant data. Firstly, the tensor naturators $\Phi_{(f',g'),(f,g)}$ are fully determined by their underlying interchangors $\phi_{f,g}$,

(1)   $\phi_{f,g} = \Phi_{(f, \mathrm{id}), (\mathrm{id}, g)} : (f \otimes \mathrm{id}) \circ (\mathrm{id} \otimes g) \Rightarrow (\mathrm{id} \otimes g) \circ (f \otimes \mathrm{id})$

This much is well-known. But also, the braiding naturators are fully determined by the interchangors. So, I define a stringent symmetric monoidal 2-category purely in terms of this coherence data: interchangors, and braiding 1-morphisms. I show that they’re equivalent to quasistrict symmetric monoidal bicategories.

Wire diagrams

The ‘stringent’ version of the definition is handy, because it admits a nice graphical calculus which I call ‘wire diagrams’. I needed a new name just to distinguish them from vanilla-flavoured string diagrams for 2-categories where the objects of the 2-category correspond to planar regions; now the objects of the 2-category correspond to lines. But it’s really just a rotated version of string diagrams in 3 dimensions. So, the basic setup is as follows:

[diagram]

But to keep things nice and planar, we’ll draw this as follows:

[diagram]

These diagrams are interpreted according to the prescription: tensor first, then compose! So, the interchangor isomorphisms look as follows:

[diagram]

So, what I do is write out the definitions of quasistrict and stringent symmetric monoidal 2-categories in terms of wire diagrams, and use this graphical calculus to prove that they’re the same thing.

That’s good for us, because it turns out these ‘wire diagrams’ are precisely the diagrammatic notation we were using for the generators-and-relations presentation of the oriented 3-dimensional bordism bicategory. For instance, I hope you can see the interchangor $\phi$ being used in the ‘pentagon relation’ I drew near the top of this post. So, that diagrammatic notation has been justified.

by willerton (S.Willerton@sheffield.ac.uk) at September 10, 2014 08:11 PM

Sean Carroll - Preposterous Universe

Cosmological Attractors

I want to tell you about a paper I recently wrote with grad student Grant Remmen, about how much inflation we should expect to have occurred in the early universe. But that paper leans heavily on an earlier one that Grant and I wrote, about phase space and cosmological attractor solutions — one that I never got around to blogging about. So you’re going to hear about that one first! It’s pretty awesome in its own right. (Sadly “cosmological attractors” has nothing at all to do with the hypothetical notion of attractive cosmologists.)

Attractor Solutions in Scalar-Field Cosmology
Grant N. Remmen, Sean M. Carroll

Models of cosmological scalar fields often feature “attractor solutions” to which the system evolves for a wide range of initial conditions. There is some tension between this well-known fact and another well-known fact: Liouville’s theorem forbids true attractor behavior in a Hamiltonian system. In universes with vanishing spatial curvature, the field variables $(\phi, \dot\phi)$ specify the system completely, defining an effective phase space. We investigate whether one can define a unique conserved measure on this effective phase space, showing that it exists for $m^2\phi^2$ potentials and deriving conditions for its existence in more general theories. We show that apparent attractors are places where this conserved measure diverges in the $(\phi, \dot\phi)$ variables and suggest a physical understanding of attractor behavior that is compatible with Liouville’s theorem.

This paper investigates a well-known phenomenon in inflationary cosmology: the existence of purported “attractor” solutions. There is a bit of lore that says that an inflationary scalar field might start off doing all sorts of things, but will quickly settle down to a preferred kind of evolution, known as the attractor. But that lore is nominally at odds with a mathematical theorem: in classical mechanics, closed systems never have attractor solutions! That’s because “attractor” means “many initial conditions are driven to the same condition,” while Liouville’s theorem says “a set of initial conditions maintains its volume as it evolves.” So what’s going on?

Let’s consider the simplest kind of model: you just have a single scalar field φ, and a potential energy function V(φ), in the context of an expanding universe with no other forms of matter or energy. That fully specifies the model, but then you have to specify the actual trajectory that the field takes as it evolves. Any trajectory is fixed by giving certain initial data in the form of the value of the field φ and its “velocity” $\dot\phi$. For a very simple potential like V(φ) ~ φ², the trajectories look like this:

[figure: trajectories in the (φ, $\dot\phi$) plane, spiraling onto the attractor and then into the origin]

This is the “effective phase space” of the model — in a spatially flat universe (and only there), specifying φ and its velocity uniquely determines a trajectory, shown as the lines on the plot. See the dark lines that start horizontally, then spiral toward the origin? Those are the attractor solutions. Other trajectories (dashed lines) basically zoom right to the attractor, then stick nearby for the rest of their evolution. Physically, the expansion of the universe acts as a kind of friction; away from the attractor the friction is too small to matter, but once you get there friction begins to dominate and the field rolls very slowly. So the idea is that there aren’t really that many different kinds of possible evolution; a “generic” initial condition will just snap onto the attractor and go from there.
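
(For the numerically inclined: the funneling is easy to reproduce with a toy integration like the sketch below, which is not code from the paper; it just evolves the flat-universe Friedmann and Klein-Gordon equations for $V = \tfrac{1}{2}m^2\phi^2$ in reduced Planck units, with arbitrary illustrative initial conditions.)

# Toy model: a scalar field with V = (1/2) m^2 phi^2 in a flat FRW universe,
# working in units where the reduced Planck mass is 1. The 3*H*phidot
# "Hubble friction" term is what drives very different initial conditions
# onto the apparent attractor.
import numpy as np
from scipy.integrate import solve_ivp

m = 1.0  # field mass; only sets the overall time scale

def rhs(t, y):
    phi, phidot = y
    H = np.sqrt((0.5 * phidot**2 + 0.5 * m**2 * phi**2) / 3.0)  # Friedmann equation
    return [phidot, -3.0 * H * phidot - m**2 * phi]             # Klein-Gordon equation

# A scatter of initial (phi, phidot); every track should join the slow-roll
# attractor phidot ~ -sqrt(2/3)*m*sign(phi) before spiraling into the origin.
for phi0, phidot0 in [(10, 5), (10, -5), (-8, 3), (2, 8), (2, -8)]:
    sol = solve_ivp(rhs, (0, 60), [phi0, phidot0], max_step=0.01)
    print(f"start=({phi0:+}, {phidot0:+})  end=({sol.y[0, -1]:+.3f}, {sol.y[1, -1]:+.3f})")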

This story seems to be in blatant contradiction with Liouville’s Theorem, which roughly says that there cannot be true attractors, because volumes in phase space (the space of initial conditions, i.e. coordinates and momenta) remain constant under time-evolution. Whereas in the picture above, volumes get squeezed to zero because every trajectory flows to the 1-dimensional attractor, and then of course eventually converges to the origin. But we know that the above plot really does show what the trajectories do, and we also know that Liouville’s theorem is correct and does apply to this situation. Our goal for the paper was to show how everything actually fits together.
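
(For the record, Liouville's theorem is just the statement that Hamiltonian flow is divergence-free on phase space,

\[ \sum_i \left( \frac{\partial \dot q_i}{\partial q_i} + \frac{\partial \dot p_i}{\partial p_i} \right) = \sum_i \left( \frac{\partial^2 H}{\partial q_i \partial p_i} - \frac{\partial^2 H}{\partial p_i \partial q_i} \right) = 0, \]

so any bundle of trajectories keeps its phase-space volume and nothing can act as a true attractor.)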

Obviously (when you think about it, and know a little bit about phase space), the problem is with the coordinates on the above graph. In particular, $\dot\phi$ might be the “velocity” of the field, but it definitely isn’t its “momentum,” in the strict mathematical sense. The canonical momentum is actually $a^3\dot\phi$, where a is the scale factor that measures the size of the universe. And the scale factor changes with time, so there is no simple translation between the nice plot we saw above and the “true” phase space — which should, after all, also include the scale factor itself as well as its canonical momentum.
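
(Where that factor comes from: for a homogeneous scalar field in an expanding background the Lagrangian is effectively $L = a^3\left[\tfrac{1}{2}\dot\phi^2 - V(\phi)\right]$, so the canonical momentum is $p_\phi = \partial L / \partial \dot\phi = a^3 \dot\phi$, not $\dot\phi$ itself.)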

So there are good reasons of convenience to draw the plot above, but it doesn’t really correspond to phase space. As a result, it looks like there are attractors, although there really aren’t — at least not by the strict mathematical definition. It’s just a convenient, though possibly misleading, nomenclature used by cosmologists.

Still, there is something physically relevant about these cosmological attractors (which we will still call “attractors” even if they don’t match the technical definition). If it’s not “trajectories in phase space focus onto them,” what is it? To investigate this, Grant and I turned to a formalism for defining the measure on the space of trajectories (rather than just points in phase space), originally studied by Gibbons, Hawking, and Stewart and further investigated by Heywood Tam and me a couple of years ago.

The interesting thing about the “GHS measure” on the space of trajectories is that it diverges — becomes infinitely big — for cosmologies that are spatially flat. That is, almost all universes are spatially flat — if you were to pick a homogeneous and isotropic cosmology out of a hat, it would have zero spatial curvature with probability unity. (Which means that the flatness problem you were taught as a young cosmologist is just a sad misunderstanding — more about that later in another post.) That’s fine, but it makes it mathematically tricky to study those flat universes, since the measure is infinity there. Heywood and I proposed a way to regulate this infinity to get a finite answer, but that was a mistake on our part — upon further review, our regularization was not invariant under time-evolution, as it should have been.

That left an open problem — what is the correct measure on the space of flat universes? This is what Grant and I tackled, and basically solved. Long story short, we studied the necessary and sufficient conditions for there to be the right kind of measure on the effective phase space shown in the plot above, and argued that such a measure (1) exists, and (2) is apparently unique, at least in the simple case of a quadratic potential (and probably more generally). That is, we basically reverse-engineered the measure from the requirement that Liouville’s theorem be obeyed!

So there is such a measure, but it’s very different from the naïve “graph-paper measure” that one is tempted to use for the effective phase space plotted above. (A temptation to which almost everyone in the field gives in.) Unsurprisingly, the measure blows up on the attractor, and near the origin. That is, what looks like an attractor when you plot it in these coordinates is really a sign that the density of trajectories grows very large there — which is the least surprising thing in the world, really.

At the end of the day, despite the fact that we mildly scold fellow cosmologists for their sloppy use of the word “attractor,” the physical insights connected to this idea go through essentially unaltered. The field and its velocity are the variables that are most readily observable (or describable) by us, and in terms of these variables the apparent attractor behavior is definitely there. The real usefulness of our paper would come when we wanted to actually use the measure we constructed, for example to calculate the expected amount of inflation in a given model — which is what we did in our more recent paper, to be described later.

This paper, by the way, was one from which I took equations for the blackboards in an episode of Bones. It was fun to hear Richard Schiff, famous as Toby from The West Wing, play a physicist who explains his alibi by saying “I was constructing an invariant measure on the phase space of cosmological spacetimes.” 

[photo: Richard Schiff on Bones]

The episode itself is great, you should watch it if you can. But I warn you — you will cry.

by Sean Carroll at September 10, 2014 05:37 PM

ZapperZ - Physics and Physicists

"Interactions between teaching assistants and students boost engagement in physics labs"
I will say that, having read this paper rather quickly, I am not surprised by the conclusion, and neither should you be. The paper is available for free at the link given above.

Abstract: Through in-class observations of teaching assistants (TAs) and students in the lab sections of a large introductory physics course, we study which TA behaviors can be used to predict student engagement and, in turn, how this engagement relates to learning. For the TAs, we record data to determine how they adhere to and deliver the lesson plan and how they interact with students during the lab. For the students, we use observations to record the level of student engagement and pretests and post-tests of lab skills to measure learning. We find that the frequency of TA–student interactions, especially those initiated by the TAs, is a positive and significant predictor of student engagement. Interestingly, the length of interactions is not significantly correlated with student engagement. In addition, we find that student engagement was a better predictor of post-test performance than pretest scores. These results shed light on the manner in which students learn how to conduct inquiry and suggest that, by proactively engaging students, TAs may have a positive effect on student engagement, and therefore learning, in the lab.

When I was a lab TA way back when, I tried to engage the students while they were performing the experiments. I asked them on-the-spot questions, such as why we need to measure the time for the pendulum to make 20 oscillations when all we care about is the time for one oscillation (the period). I asked them why they thought we did things this way rather than making a seemingly simpler, more direct measurement. I also asked them about the physics itself, such as, during an experiment that used springs, what they thought would be different if we did the same experiment on the moon.

Obviously, I couldn't do a study like these people did and investigate whether what I was doing had any effect on the students and their learning. However, I did get a lot of positive feedback in the course reviews. This new study reinforces the vital role that lab TAs can play, and they should read this paper to realize that they might have a non-trivial influence on their students.

Zz.

by ZapperZ (noreply@blogger.com) at September 10, 2014 05:28 PM

ZapperZ - Physics and Physicists

The Physics of Wireless Charging
Rhett Allain has another informative article on how wireless charging of electronic devices works. This technology will become more prevalent in the near future, as everyone is getting fed up with searching for power cords to charge their cell phones, tablets, etc.

Zz.

by ZapperZ (noreply@blogger.com) at September 10, 2014 03:12 PM

Matt Strassler - Of Particular Significance

Will the Higgs Boson Destroy the Universe???

No.

The Higgs boson is not dangerous and will not destroy the universe.

The Higgs boson is a type of particle, a little ripple in the Higgs field. [See here for the Higgs FAQ.] This lowly particle, if you’re lucky enough to make one (and at the world’s largest particle accelerator, the Large Hadron Collider, only one in a trillion proton-proton collisions actually does so), has a brief life, disintegrating into other particles in less than the time that it takes light to cross from one side of an atom to another. (Recall that light can travel from the Earth to the Moon in under two seconds.) Such a fragile creature is hardly more dangerous than a mayfly.

Anyone who says otherwise probably read Hawking’s book (or read about it in the press) but didn’t understand what he or she was reading, perhaps because he or she had not read the Higgs FAQ.

If you want to worry about something Higgs-related, you can try to worry about the Higgs field, which is “ON” in our universe, though not nearly as “on” as it could be. If someone were to turn the Higgs field OFF, let’s say as a practical joke, that would be a disaster: all ordinary matter across the universe would explode, because the electrons on the outskirts of atoms would lose their mass and fly off into space. This is not something to worry about, however. We know it would require an input of energy and can’t happen spontaneously.  Moreover, the amount of energy required to artificially turn the Higgs field off is immense; to do so even in a small room would require energy comparable to that of a typical supernova, an explosion of a star that can outshine an entire galaxy and releases the vast majority of its energy in unseen neutrinos. No one, fortunately, has a supernova in his or her back pocket. And if someone did, we’d have more immediate problems than worrying about someone wasting a supernova trying to turn off the Higgs field in a basement somewhere.
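
(A rough way to see where the supernova comparison comes from, using only textbook numbers rather than anything stated in the post: the energy density gained by having the Higgs field ON is of order the depth of its potential, $|V| \sim m_h^2 v^2 / 8 \sim 10^{8}\ \mathrm{GeV}^4 \sim 10^{45}\ \mathrm{J/m^3}$ for $m_h \simeq 125$ GeV and $v \simeq 246$ GeV. Undoing that over even a few tens of cubic metres would cost some $10^{46}$ to $10^{47}$ J, the same ballpark as the roughly $10^{46}$ J a supernova releases.)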

Now it would also be a disaster if someone could turn the Higgs field WAY UP… more than when your older brother turned up the volume on your stereo or MP3 player and blew out your speakers. In this case atoms would violently collapse, or worse, and things would be just as nasty as if the Higgs field were turned OFF. Should you worry about this? Well, it’s possible this could happen spontaneously, so it’s slightly more plausible. But I do mean slightly. Very slightly.

Recently, physicists have been writing about this possibility because if (a) you ASSUME that the types of particles that we’ve discovered so far are the only ones that affect the Higgs field, and (b) you ASSUME that there are no other important forces that affect the Higgs field other than the ones we know, then you can calculate, with some degree of reliability (though there is a debate about that degree) that (1) the Higgs field could lower the energy of the universe by suddenly jumping from ON to WAY WAY SUPER-DUPER ON, and (2) that the time we’d have to wait for it to do so spontaneously isn’t infinite.  It would do this in two steps: first a bubble of WAY WAY ON Higgs field would form (via the curious ability of quantum mechanics to make the improbable happen, rarely), and then that bubble would expand and sweep across the universe, destroying everything in its path.

An aside: In particle physics lingo explained here, we say that “the universe has two possible ‘vacua’, the vacuum we live in, in which the Higgs field is ON a bit, and a second vacuum in which the Higgs field is HUGELY ON.” If the second vacuum has lower energy than the first, then the first vacuum is said to be “metastable”: although it lasts a very long time, it has a very small but non-zero probability of turning into the second vacuum someday.  That’s because a bubble of the second vacuum that appears by chance inside the first vacuum will expand, and take over the whole universe.
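
(A textbook thin-wall estimate, not specific to the Higgs calculation, shows why a big enough bubble wins: a bubble of radius R of the lower-energy vacuum costs wall energy but gains volume energy,

\[ E(R) \simeq 4\pi\sigma R^2 - \frac{4}{3}\pi\epsilon R^3, \]

where $\sigma$ is the wall's surface tension and $\epsilon$ the energy-density difference between the two vacua. Bubbles smaller than the critical radius $R_c = 2\sigma/\epsilon$ shrink away; larger ones grow, their walls quickly accelerating to essentially the speed of light.)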

Ok. First, should you buy the original assumptions? No. It’s just humans assuming that what we currently know is all there is to know; since when has that been true?  Second, even if you do buy them, should you worry about the conclusion? No. The universe has existed in its current form for about 13.7 billion years. The Higgs field may not perform this nasty jump for trillions of years, or trillions of trillions, or trillions of trillions of trillions, or more. Likely more. In any case, nobody knows, but really, nobody should care very much. The calculation is hard, the answer highly uncertain, and worse, the whole thing is profoundly dependent on the ASSUMPTIONS. In fact, if the assumptions are slightly wrong — if there are other particles and forces that affect the Higgs field, or if there is more than one Higgs-like field in nature — then the calculation could end up being way off from the truth. Also possible is that the calculational method, which is subtle, isn’t yet refined enough to give the right answer.  Altogether, this means that not only might the Higgs field’s nasty jump be much more or less likely than is currently believed, it might not even be possible at all. So we don’t actually know anything for sure, despite all the loose talk that suggests that we do.  But in any case, since the universe has lived 13.7 billion years already, the chance is ridiculously tiny that this Higgs field jump, even if it is possible at all, will occur in your ultra-short 100 year-ish lifetime, or even that of any of your descendants.

What about the possibility that human beings could artificially cause the Higgs field to turn WAY WAY ON? Again, the amount of energy involved in trying to do that is extremely large — not a supernova, now, but far, far beyond current human capability, and likely impossible.  (The technology required to build a particle accelerator with collisions at this energy, and the financial and environmental cost of running it, are more than a little difficult to imagine.)  At this point we can barely make Higgs bosons — little ripples in the Higgs field; now you want to imagine us making a bubble where the Higgs field is WAY WAY WAY MORE ON than usual? We’re scientists, not magicians. And we deal in science — i.e., reality.  Current and foreseeable technology cannot turn this imaginary possibility into reality.

Some dangers actually exist in reality. Asteroidlets do sometimes hit the earth; supernovas do explode, and a nearby one would be terrible; and you should not be wandering outdoors or in a shower during a lightning storm.  As for the possible spontaneous destruction of the universe? Well, if it happens some day, it may have nothing to do with the Higgs field; it may very well be due to some other field, about which we currently know nothing, making a jump of a sort that we haven’t even learned about yet. Humans tend to assume that the things they know about are much scarier than they actually are (e.g. Yellowstone, the “super”-volcano) and that the things they don’t know about are much less scary than they actually are (e.g. what people used to think about ozone-destroying chemicals before they knew they destroyed ozone.) This is worth keeping in mind.

So anyone who tells you that we know that the universe is only “meta-stable”, and that someday the Higgs field will destroy it by suddenly screaming at the top of its lungs, or that we might cause it to do so, is forgetting to tell you about all the assumptions that went into that conclusion, and about the incredible energies required which may far exceed what humans can ever manage, and about the incredible lengths of time that may be involved, by which point there may be no more stars left to keep life going anyway, and possibly not even any more protons to make atoms out of. There’s a word for this kind of wild talk: “scare-mongering”.  You can safely go back to sleep.

Or not. There’s plenty to keep you awake. But by comparison with the spread of the Ebola virus, the increasing carbon dioxide in the atmosphere and acidification of the oceans, or the accelerating loss of the world’s biodiversity, not to mention the greed and violence common in our species, worrying about Higgs bosons, or even the Higgs field in which a Higgs particle is a tiny ripple, seems to me a tempest in a top quark.



by Matt Strassler at September 10, 2014 12:35 PM