Particle Physics Planet


July 23, 2014

John Baez - Azimuth

El Niño Project (Part 6)

guest post by Steven Wenner

Hi, I’m Steve Wenner.

I’m an industrial statistician with over 40 years of experience in a wide range of applications (quality, reliability, product development, consumer research, biostatistics); but, somehow, time series only rarely crossed my path. Currently I’m working for a large consumer products company.

My undergraduate degree is in physics, and I also have a master’s in pure math. I never could reconcile how physicists used math (explain that Dirac delta function to me again in math terms? Heaviside calculus? On the other hand, I thought category theory was abstract nonsense until John showed me otherwise!). Anyway, I had to admit that I lacked the talent to pursue pure math or theoretical physics, so I became a statistician. I never regretted it—statistics has provided a very interesting and intellectually challenging career.

I got interested in Ludescher et al’s paper on El Niño prediction by reading Part 3 of this series. I have no expertise in climate science, except for an intense interest in the subject as a concerned citizen. So, I won’t talk about things like how Ludescher et al use a nonstandard definition of ‘El Niño’—that’s a topic for another time. Instead, I’ll look at some statistical aspects of their paper:

• Josef Ludescher, Avi Gozolchiani, Mikhail I. Bogachev, Armin Bunde, Shlomo Havlin, and Hans Joachim Schellnhuber, Very early warning of next El Niño, Proceedings of the National Academy of Sciences, February 2014. (Click title for free version, journal name for official version.)

Analysis

I downloaded the NOAA adjusted monthly temperature anomaly data and compared the El Niño periods with the charts in this paper. I found what appear to be two errors (“phantom” El Niños) and noted some interesting situations. Some of these are annotated on the images below. Click to enlarge them:

 

I also listed for each year whether an El Niño initiation was predicted, or not, and whether one actually happened. I did the predictions five ways: first, I listed the authors’ “arrows” as they appeared on their charts, and then I tried to match their predictions by following in turn four sets of rules. Nevertheless, I could not come up with any detailed rules that exactly reproduced the authors’ results.

These were the rules I used:

An El Niño initiation is predicted for a calendar year if during the preceding year the average link strength crossed above the 2.82 threshold. However, we could also invoke additional requirements. Two possibilities are:

1. Preemption rule: the prediction of a new El Niño is canceled if the preceding year ends in an El Niño period.

2. End-of-year rule: the link strength must be above 2.82 at year’s end.

I counted the predictions using all four combinations of these two rules and compared the results to the arrows on the charts.
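
To make these rule combinations concrete, here is a minimal sketch of the prediction logic as described above (an illustration only, not the authors’ code). It assumes a hypothetical pandas Series `link` of average link strength with a DatetimeIndex, and a set `el_nino_years` of years that end inside an El Niño period:

```python
import pandas as pd

THRESHOLD = 2.82

def predicts_initiation(link, year, preemption=False, end_of_year=False,
                        el_nino_years=frozenset()):
    """Predict an El Niño initiation for `year` from the preceding year's link strength."""
    prev = link[link.index.year == year - 1]
    # base rule: the average link strength crossed above the threshold during year - 1
    predicted = ((prev > THRESHOLD) & (prev.shift(1) <= THRESHOLD)).any()
    if end_of_year and not prev.iloc[-1] > THRESHOLD:
        predicted = False   # end-of-year rule: must still be above the threshold at year's end
    if preemption and (year - 1) in el_nino_years:
        predicted = False   # preemption rule: cancel if the preceding year ends in an El Niño
    return bool(predicted)

# toy usage with made-up link strengths that cross 2.82 in mid-2013 but end below it
toy = pd.Series([2.70, 2.75, 2.90, 2.95, 2.85, 2.80],
                index=pd.to_datetime(["2013-01-15", "2013-03-15", "2013-05-15",
                                      "2013-07-15", "2013-09-15", "2013-12-15"]))
print(predicts_initiation(toy, 2014))                    # True  (base rule only)
print(predicts_initiation(toy, 2014, end_of_year=True))  # False (ends at 2.80)
```

The four combinations in the comparison correspond to the four settings of the two boolean flags.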

I defined an “El Niño initiation month” to be a month where the monthly average adjusted temperature anomaly rises to at least 0.5 °C and remains at or above 0.5 °C for at least five months. Note that the NOAA El Niño monthly temperature estimates are rounded to hundredths; and, on occasion, the anomaly is reported as exactly 0.5 °C. I found slightly better agreement with the authors’ El Niño periods if I counted an anomaly of exactly 0.5 °C as satisfying the threshold criterion, instead of using the strictly “greater than” condition.
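
A matching sketch for this initiation-month definition (again an illustration under stated assumptions, not NOAA’s or the authors’ code), where `anom` is a pandas Series of monthly anomalies in °C:

```python
def initiation_months(anom, threshold=0.5, run=5):
    """Return the index entries where a qualifying El Niño run begins.

    `anom` is assumed to be a pandas Series of monthly anomalies (°C).
    Uses >= threshold, i.e. the "exactly 0.5 °C counts" variant described above.
    """
    above = (anom >= threshold).astype(int)
    # month i qualifies if it and the following run - 1 months all meet the threshold
    sustained = above.rolling(run).sum().shift(-(run - 1)) == run
    # keep only the first month of each sustained stretch
    starts = sustained & ~sustained.shift(1, fill_value=False)
    return anom.index[starts]
```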

Anyway, I did some formal hypothesis testing and estimation under all five scenarios. The good news is that under most scenarios the prediction method gave better results than merely guessing. (But, I wonder how many things the authors tried before they settled on their final method? Also, did they do all their work on the learning series, and then only at the end check the validation series—or were they checking both as they went about their investigations?)

The bad news is that the predictions varied with the method, and the methods were rather weak. For instance, in the training series there were 9 El Niño periods in 30 years; the authors’ rules (whatever they were, exactly) found five of the nine. At the same time, they had three false alarms in the 21 years that did not have an El Niño initiated.

I used Fisher’s exact test to compute some p-values. Suppose (as our ‘null hypothesis’) that Ludescher et al’s method does not improve the odds of a successful prediction of an El Niño initiation. What’s the probability of that method getting at least as many predictions right just by chance? Answer: 0.032 – this is marginally more significant than the conventional 1 in 20 chance that is the usual threshold for rejecting a null hypothesis, but still not terribly convincing. This was, by the way, the most significant of the five p-values for the alternative rule sets applied to the learning series.

I also computed the “relative risk” statistics for all scenarios; for instance, we are more than three times as likely to see an El Niño initiation if Ludescher et al predict one, than if they predict otherwise (the 90% confidence interval for that ratio is 1.2 to 9.7, with the point estimate 3.4). Here is a screen shot of some statistics for that case:
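
That screenshot shows JMP output; for readers without it, the headline numbers can be cross-checked from the learning-series counts above (5 hits and 4 misses among the 9 El Niño years, 3 false alarms among the other 21 years) with a few lines of Python. This is a rough sketch, not the JMP analysis:

```python
from scipy.stats import fisher_exact

# 2x2 table implied by the learning-series counts:
# rows = prediction (yes / no), columns = El Niño initiated (yes / no)
table = [[5, 3],
         [4, 18]]

_, p = fisher_exact(table, alternative="greater")  # one-sided test
print(f"Fisher exact p-value: {p:.3f}")            # ~0.032

# relative risk: P(initiation | predicted) / P(initiation | not predicted)
rr = (5 / 8) / (4 / 22)
print(f"relative risk: {rr:.1f}")                  # ~3.4
```

The 90% confidence interval quoted above is JMP’s; the point estimates, at least, are easy to reproduce.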

Here is a screen shot of part of the spreadsheet list I made. In the margin on the right I made comments about special situations of interest.

Again, click to enlarge—but my whole working spreadsheet is available with more details for anyone who wishes to see it. I did the statistical analysis with a program called JMP, a product of the SAS corporation.

My overall impression from all this is that Ludescher et al are suggesting a somewhat arbitrary (and not particularly well-defined) method for revealing the relationship between link strength and El Niño initiation, if, indeed, a relationship exists. Slight variations in the interpretation of their criteria and slight variations in the data result in appreciably different predictions. I wonder if there are better ways to analyze these two correlated time series.


by John Baez at July 23, 2014 12:45 AM

July 22, 2014

The Great Beyond - Nature blog

Scripps president resigns after faculty revolt

The president of the Scripps Research Institute intends to leave his post, according to a statement from Richard Gephardt, the chair of the institute’s board of trustees. The announcement came in the wake of a faculty rebellion against the president, Michael Marletta, who had attempted to broker a deal in which the La Jolla, California, research lab would be acquired by the University of Southern California for $600 million.

In the statement, posted on 21 July, Gephardt said that Marletta “has indicated his desire to leave TSRI” and that the board “is working with Dr. Marletta on a possible transition plan.”

Scripps Research Institute president Michael Marletta resigned after clashing with faculty over a proposed merger.

Scripps Research Institute

Scripps faculty see Marletta’s departure as a victory. They had been angered by the terms of the USC deal, which was scrapped on 9 July, and by the fact that Marletta did not consult with faculty during his negotiations with USC. Faculty told the Scripps board of trustees earlier this month that they had an almost unanimous consensus of no confidence in Marletta.

“I think we are more optimistic than we have been in many years, because we feel like we have some control over our own fate,” says Scripps biologist Jeanne Loring.

Loring said that at a meeting with a majority of Scripps faculty on 21 July, Gephardt indicated that the board had thought that Marletta was communicating with the faculty as he negotiated the USC deal. Gephardt also promised that faculty would be involved in choosing Marletta’s successor.

Whoever replaces Marletta must find a way to close a projected $21 million budget gap this year left by the contraction of funding from the US National Institutes of Health and by the virtual disappearance of support from pharmaceutical companies, who had provided major support for Scripps until 2011.

How Scripps solves its funding issue will be watched by other independent institutes, which have been hard hit by the contraction in NIH dollars. Scripps’ neighbor institutes have brought in hundreds of millions of dollars in philanthropy, and many involved see that as part of the solution for Scripps as well. But, Loring says, “The funding that other institutes have gotten from philanthropy is going to be a short-term solution, because even though it seems like an awful lot of money, they have to spend it, so they will eventually be facing the same issues.”

Follow Erika on Twitter @Erika_Check.

 

by Erika Check Hayden at July 22, 2014 11:33 PM

Christian P. Robert - xi'an's og

Cancún, ISBA 2014 [day #3]

…already Thursday, our [early] departure day!, with an nth (!) non-parametric session that saw [the newly elected ISBA Fellow!] Judith Rousseau present an ongoing work with Chris Holmes on the convergence or non-convergence conditions for a Bayes factor of a non-parametric hypothesis against another non-parametric one. I wondered at the applicability of this test as the selection criterion in ABC settings, even though having an iid sample to start with is a rather strong requirement.

Switching between a scalable computation session with Alex Beskos, who talked about adaptive Langevin algorithms for differential equations, and a non-local prior session, with David Rossell presenting a smoother way to handle point masses in order to accommodate frequentist coverage. Something we definitely need to discuss the next time I am in Warwick! Although this made me alas miss both the first talk of the non-local session by Shane Jensen and the final talk of the scalable session by Doug Vandewrken, where I happened to be quoted (!) for my warning about discretising Markov chains into non-Markov processes, in the 1998 JASA paper with Chantal Guihenneuc.

After a farewell meal of ceviche with friends in the sweltering humidity of a local restaurant, I attended [the newly elected ISBA Fellow!] Maria Vanucci’s talk on her deeply involved modelling of fMRI. The last talk before the airport shuttle was François Caron’s description of a joint work with Emily Fox on a sparser modelling of networks, along with an auxiliary variable approach that allowed for parallelisation of a Gibbs sampler. François mentioned an earlier alternative found in machine learning where all components of a vector are updated simultaneously conditional on the previous avatar of the other components, e.g. simulating (x’,y’) from π(x’|y) π(y’|x), which does not produce a convergent Markov chain. At least not one convergent to the right stationary distribution. However, running a quick [in-flight] check on a 2-d normal target did not show any divergent feature, when compared with the regular Gibbs sampler. I thus wonder what can be said about the resulting target or which conditions are needed for divergence. A few scribbles later, I realised that the 2-d case was the exception, namely that the stationary distribution of the chain is the product of the marginals. However, running a 3-d example with an auto-exponential distribution in the taxi back home, I still could not spot a difference in the outcome.
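
For the record, a quick sketch of that check on an assumed bivariate normal target with unit variances and correlation 0.8 (not the in-flight code): both samplers reproduce the marginals, but the simultaneous-update variant loses the correlation, consistent with the product-of-the-marginals conclusion.

```python
import numpy as np

rng = np.random.default_rng(0)
rho = 0.8                          # target: bivariate normal with unit marginal variances
sd = np.sqrt(1 - rho ** 2)         # conditional standard deviation
n = 100_000

def run_chain(simultaneous):
    x = y = 0.0
    draws = np.empty((n, 2))
    for i in range(n):
        x_new = rho * y + sd * rng.standard_normal()
        # regular Gibbs conditions y on the updated x; the variant uses the old x
        y_new = rho * (x if simultaneous else x_new) + sd * rng.standard_normal()
        x, y = x_new, y_new
        draws[i] = x, y
    return draws

for label, flag in (("Gibbs", False), ("simultaneous", True)):
    d = run_chain(flag)
    print(label, "corr:", round(np.corrcoef(d[:, 0], d[:, 1])[0, 1], 2),
          "marginal sd:", d.std(axis=0).round(2))
```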


Filed under: pictures, Statistics, Travel, University life Tagged: Cancún, ISBA, Langevin MCMC algorithm, MCMC algorithms, non-local priors, University of Warwick

by xi'an at July 22, 2014 10:14 PM

ZapperZ - Physics and Physicists

Big Mystery in the Perseus Cluster
The news about the x-ray emission line seen in the Perseus cluster that can't be explained (yet) by current physics.



The preprint that this video is based on can be found here.

Zz.

by ZapperZ (noreply@blogger.com) at July 22, 2014 10:08 PM

Quantum Diaries

Welcome to Thesisland

When I joined Quantum Diaries, I did so with trepidation: while it was an exciting opportunity, I was worried that all I could write about was the process of writing a thesis and looking for postdoc jobs. I ended up telling the site admin exactly that: I only had time to work on a thesis and job hunt. I thought I was turning down the offer. But the reply I got was along the lines of “It’s great to know what topics you’ll write about! When can we expect a post?”. So, despite the fact that this is a very different topic from any recent QD posts, I’m starting a series about the process of writing a physics PhD thesis. Welcome.

The main thesis editing desk: laptop, external monitor, keyboard, mouse; coffee, water; notes; and lots of encouragement.


There are as many approaches to writing a PhD thesis as there are PhDs, but they can be broadly described along a spectrum.

On one end is the “constant documentation” approach: spend some fixed fraction of your time on documenting every project you work on. In this approach, the writing phase is completely integrated with the research work, and it’s easy to remember the things you’re writing about. There is a big disadvantage: it’s really easy to write too much, to spend too much time writing and not enough doing, or otherwise un-balance your time. If you keep a constant fraction of your schedule dedicated to writing, and that fraction is (in retrospect) too big, you’ve lost a lot of time. But you have documented everything, which everyone who comes after will be grateful for. If they ever see your work.

The other end of the spectrum is the “write like hell” approach (that is, write as fast as you can), where all the research is completed and approved before writing starts. This has the advantage that if you (and your committee) decide you’ve written enough, you immediately get a PhD! The disadvantage is that if you have to write about old projects, you’ll probably have forgotten a lot. So this approach typically leads to shorter theses.

These two extremes were first described to me (see the effect of thesis writing? It’s making my blog voice go all weird and passive) by two professors who were in grad school together and still work together. Each took one approach, and they both did fine, but the “constant documentation” thesis was at least twice (or was it three times?) as long as the “write like hell” thesis.

Somewhere between those extremes is the funny phenomenon of the “staple thesis”: a thesis primarily composed of all the papers you wrote in grad school, stapled together. A few of my friends have done this, but it’s not common in my research group because our collaboration is so large. I’ll discuss that in more detail later.

I’m going for something in the middle: as soon as I saw a light at the end of the tunnel, I wanted to start writing, so I downloaded the UW latex template for PhD theses and started filling it in. It’s been about 14 months since then, with huge variations in the writing/research balance. To help balance between the two approaches, I’ve found it helpful to keep at least some notes about all the physics I do, but nothing too polished: it’s always easier to start from some notes, however minimal, than to start from nothing.

When I started writing, there were lots of topics available that needed some discussion: history and theory, my detector, all the calibration work I did for my master’s project–I could have gone full-time writing at that point and had plenty to do. But my main research project wasn’t done yet. So for me, it’s not just a matter of balancing “doing” with “documenting”; it’s also a question of balancing old documentation with current documentation. I’ve almost, *almost* finished writing the parts that don’t depend on my work from the last year or so. In the meantime, I’m still finishing the last bits of analysis work.

It’s all a very long process. How many readers are looking towards writing a thesis later on? How many have gone through this and found a method that served them well? If it was fast and relatively low-stress, would you tell me about it?

by Laura Gladstone at July 22, 2014 05:13 PM

The Great Beyond - Nature blog

São Paulo state joins mega-telescope

The Giant Magellan Telescope received a boost today when Brazil’s São Paulo Research Foundation (FAPESP) confirmed its plans to join the project. The $880-million facility, some components of which have already been built, is one of three competing mega-telescopes that will study the skies in the next decade.

Approving plans reported by Nature in February, the richest state in Brazil confirmed on 22 July that it would contribute US$40 million toward membership of the GMT, which is managed by a consortium of institutions in the United States, Australia and South Korea.

São Paulo researchers might not be the only ones to benefit. FAPESP scientific director Carlos Henrique de Brito Cruz told Nature’s news team that negotiations between the foundation and the Ministry of Science and Technology of Brazil were “well advanced to share these costs and allow astronomers from all states of Brazil to have access to the telescope”. If that plan goes ahead, the ministry will refund part of the costs to FAPESP.

Although a boon for Brazilian astronomers, the move could raise concerns for advocates of the Extremely Large Telescope (E-ELT), which is being built by the European Southern Observatory in Chile. ESO has begun blasting the top off the 3,000-metre peak of Cerro Armazones where the E-ELT will be based, but is reliant on funding from Brazil’s federal government to enter the main construction phase. In 2010 Brazil agreed to contribute €270 million (US$371 million) to ESO over a decade, but the deal has yet to be ratified and remains held up in legislative committees.

Some legislators may see the GMT agreement as a cheaper way for Brazil’s astronomers to access a future mega-telescope, even though the ESO deal also allows access to existing observatories in Chile. However, Beatrice Barbuy, head of the Astronomical Society of Brazil’s ESO committee, says the plans are still moving ahead. She adds that they had stalled in recent months due to the country hosting the FIFA World Cup and staff going on winter vacations, but discussions were likely to get underway again in August.

The 25-metre GMT, to be built at the Carnegie Institution for Science’s Las Campanas Observatory in Chile, is scheduled to begin operations in 2020. It is designed to have six times the collecting power of the largest existing observatories and 10 times the resolution of NASA’s Hubble Space Telescope. The agreement is expected to secure São Paulo a 4% stake in the GMT project, guaranteeing 4% of observation time for Brazilian astronomers each year, as well as representation on the consortium’s decision-making board.

The GMT, E-ELT and a third planned next-generation ground-based observatory, the Thirty Meter Telescope proposed to be built in Mauna Kea in Hawaii, are intended to address similar science questions. Astronomers hope to use the huge light-collecting capacity of the telescopes to explore planets outside our Solar System, study supermassive black holes and galaxy formation, and unravel the nature of dark matter and dark energy.

by Elizabeth Gibney at July 22, 2014 04:41 PM

Emily Lakdawalla - The Planetary Society Blog

Women Working on Mars: Curiosity Women's Day
Just after completing the primary mission of 669 sols on Mars, Curiosity's managers planned a special day -- June 26, 2014 -- in which mostly women were assigned to the more than 100 different operational roles.

July 22, 2014 03:57 PM

Symmetrybreaking - Fermilab/SLAC

Exploratorium exhibit reveals the invisible

A determined volunteer gives an old detector new life as the centerpiece of a cosmic ray exhibit.

Watch one of the exhibits in San Francisco’s Exploratorium science museum and count to 10, and you’ll have a very good chance of seeing a three-foot-long, glowing red spark.

The exhibit is a spark chamber, a piece of experimental equipment 5 feet wide and more than 6 feet tall, and the spark marks the path of a muon, a particle released when a cosmic ray hits the Earth’s atmosphere. The spark chamber came to the museum by way of the garage of physicist and computer scientist Dave Grossman.

“I always thought this would make a great science exhibit,” says Grossman, who spent more than eight years gathering funding and equipment from places like SLAC National Accelerator Laboratory and Fermi National Accelerator Laboratory, building the chamber, and trying to find it a home.

Grossman wrote the book—the PhD dissertation, actually—on this type of spark chamber during the mid-1960s when he was a graduate student at Harvard University. Grossman’s task was to help design and build a spark chamber that could reveal the precise paths of certain types of particles.

All spark chambers contain a mixture of inert gases—usually neon and helium—that glow when an electric current passes through them (think neon signs). When an energetic charged particle passes through the gas, it leaves a trail of ionized molecules. When voltage is applied to the gas, the current flows along the trail, illuminating the particle’s path.

The longer the path, the higher the necessary voltage. Typical spark chambers from before Grossman’s time at Harvard could light up only an inch or two of trail. Grossman labored to design a compact, dependable generator that could produce 30,000 volts for 100 nanoseconds, enough voltage to illuminate charged particle paths measured in feet instead of inches.

Grossman’s spark chamber design worked well but was quickly rendered obsolete by more sensitive, more compact digital technology. After graduation, Grossman shifted from particle physics to computer science and went on to a long, successful career with IBM.

But during the years he spent as an occasional volunteer at his sons’ schools, teaching kids about robotics or sharing his telescope at star parties, Grossman never forgot his pet project or the thesis advisor and friend that guided him through it, Karl Strauch.

“Karl taught me the most by his own example,” Grossman said. “He was willing to do anything necessary for the sake of the science. He would even sweep the floor if he thought it was too dirty.”

Finally, retirement provided time; the garage of his Palo Alto home gave him the space; and donors provided the means for him to rebuild his spark chamber. Nobel Laureates Steven Weinberg and Norman Ramsey (Harvard colleagues of Strauch’s), Strauch’s son, venture capitalist Roger Strauch, and his business partner Dan Miller all pitched in.

The Exploratorium was happy to reap the benefits.

“I went to Dave Grossman’s house twice to look at it and I was impressed,” says Exploratorium Senior Scientist Thomas Humphrey. “I’ve made spark chambers, and they’re finicky beasts.”

Humphrey gave the go-ahead, and the detector was installed in the museum’s Central Gallery, where it attracts visitors young and old.

“Visitors are really excited to see it,” Humphrey says. “Cosmic rays are so mysterious. But here you can walk right up to a device and see a spark in real time. It makes the unseen seen.”


Editor's note: Please call ahead before visiting the museum to determine whether the exhibit is available for viewing.

Like what you see? Sign up for a free subscription to symmetry!

by Lori Ann White at July 22, 2014 02:54 PM

CERN Bulletin

CERN Bulletin Issue No. 30-31/2014
Link to e-Bulletin Issue No. 30-31/2014. Link to all articles in this issue.

July 22, 2014 02:39 PM

Tommaso Dorigo - Scientificblogging

True And False Discoveries: How To Tell Them Apart
Many new particles and other new physics signals claimed in the last twenty years were later proven to be spurious effects, due to background fluctuations or unknown sources of systematic error. The list is long, unfortunately - and longer than the list of particles and effects that were confirmed to be true by subsequent more detailed or more statistically-rich analysis.

read more

by Tommaso Dorigo at July 22, 2014 02:25 PM

Clifford V. Johnson - Asymptotia

74 Questions
Hello from the Aspen Center for Physics. One of the things I wanted to point out to you last month was the 74 questions that Andy Strominger put on the slides of his talk in the last session of the Strings 2014 conference (which, you may recall from earlier posts, I attended). This was one of the "Vision Talks" that ended the sessions, where a number of speakers gave some overview thoughts about work in the field at large. Andy focused mostly on progress in quantum gravity matters in string theory, and was quite upbeat. He declined (wisely) to make predictions about where the field might be going, instead pointing out (not for the first time) that if you look at the things we've made progress on in the last N years, most (if not all) of those things would not have been on anyone's list of predictions N years ago. (He gave a specific value for N, I just can't recall what it is, but it does not matter.) He sent an email to everyone who was either speaking, organising, moderating a session or similarly involved in the conference, asking them to send, off the [...]

by Clifford at July 22, 2014 02:06 PM

Peter Coles - In the Dark

Time for a Factorial Moment…

Another very busy and very hot day so no time for a proper blog post. I suggest we all take a short break and enjoy a Factorial Moment:

Factorial Moment

I remember many moons ago spending ages calculating the factorial moments of the Poisson-Lognormal distribution, only to find that they were well known. If only I’d had Google then…
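
For anyone curious, the well-known result in question is easy to state: for a mixed Poisson count N with random rate Λ, the r-th factorial moment is E[Λ^r], which for a lognormal rate equals exp(rμ + r²σ²/2). A quick simulation sketch (with arbitrary parameters) confirms it:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n = 0.5, 0.7, 1_000_000        # arbitrary lognormal parameters, sample size

lam = rng.lognormal(mu, sigma, n)         # lognormal rates
counts = rng.poisson(lam)                 # Poisson-lognormal counts

for r in (1, 2, 3):
    falling = np.prod([counts - k for k in range(r)], axis=0)  # N(N-1)...(N-r+1)
    print(r, falling.mean().round(2), np.exp(r * mu + r ** 2 * sigma ** 2 / 2).round(2))
```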


by telescoper at July 22, 2014 12:21 PM

Lubos Motl - string vacua and pheno

CMS: a \(2.1\TeV\) right-handed \(W_R^\pm\)-boson
Since the beginning of this month, the ATLAS and CMS collaborations have reported several intriguing excesses such as the apparent enhancement of the \(W^+W^-\) cross section (which may be due to some large logarithms neglected by theorists, as a recent paper indicated), a flavor-violating Higgs decay, leptoquarks, and a higgsino excess, among others.

Bizarrely enough, all of us missed another, 2.8-sigma excess exactly one week ago:
CMS: Search for heavy neutrinos and \(W^\pm_R\) bosons with right-handed couplings in proton-proton collisions at \(\sqrt{s} = 8 \TeV\) (arXiv)
The ordinary \(W^\pm\)-bosons only interact with the left-handed component of the electron, muon, and tau, because only those transform nontrivially (as a doublet) under the relevant \(SU(2)_W\) part of the electroweak gauge group.




However, there exist models of new physics where this left-right asymmetry is fundamentally "repaired" at higher energies – and its apparent breakdown at accessible energies is due to some spontaneous symmetry breaking.




The CMS search assumed a new spontaneously broken non-Abelian gauge group with a gauge boson \(W^\pm_R\). Under this gauge group, the right-handed electron and muon may transform nontrivially and it doesn't create too much havoc at accessible energies as long as the gauge boson \(W^\pm_R\) is very heavy.

In the search, one assumes that the \(W^\pm_R\) boson is created by the proton-proton collisions and decays as
\[
pp\to W^\pm_R \to \ell_1^\pm N_\ell \to \dots
\]
to a charged lepton and a (new) right-handed neutrino. The latter hypothetical particle is also in the multi-\({\rm TeV}\) range and it decays to another charged lepton along with a new but virtual (therefore the asterisk) \(W^\pm_R\) boson, so the chain above continues as
\[
\dots \to \ell_1 \ell_2 W_R^* \to \ell_1\ell_2 q\bar q
\]
where the final step indicates the decay of the virtual \(W_R^*\) boson to a quark-antiquark pair. Great. They have to look for events with two charged leptons and two jets.

So the CMS folks have made a search and wrote that there is nothing interesting to be seen over there. They may obliterate the proposals of new physics (of right-handed couplings of new gauge bosons) more lethally than anyone before them, they boast, and the exclusion zone for the \(W_R^\pm\) goes as high as \(3\TeV\).



However, under this boasting about exclusion, there is a "detail" that isn't advertised too much. Look at the exclusion graph above. You must have seen many graphs of this kind. On the \(x\)-axis, you see a parameter labeling the hypothesis about new physics – in this case, it's the mass of the \(W^\pm_R\)-boson. The right-handed neutrino is assumed to have mass \(m_N=m_{W(R)}/2\).

On the \(y\)-axis, you see the number of \(\ell\ell q\bar q\) events that look as if they originated from the new particle decaying as indicated above. If there is no new physics, the expected or predicted number of events (the "background", i.e. boring events without new physics that imitate new physics) is captured by the dotted line plus or minus the green and yellow (1-sigma and 2-sigma) bands. The actual number of measured events is depicted by the full black line.

If there is no new physics, the wiggly black line is expected to fluctuate within the Brazil band 95% of the time. The red strip shows the prediction assuming that there is new physics – in this case, new \(W^\pm_R\)-bosons that are coupled as strongly as the known \(W_L^\pm\)-bosons.

The wiggly black curve (observation) never gets close to the red strip. However, you may see that the wiggly black curve violates the Brazil band. If the wiggly curve were black-red-yellow (German), it would tear the Brazil band apart by 7.1 sigma. (That was a stupid soccer joke.) But even the black wiggly curve deviates by 2.8 sigma, something like 99.5% confidence level.
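
For completeness, the quoted conversion is just the two-sided Gaussian tail (a one-line check, not part of the CMS analysis):

```python
from scipy.stats import norm

p = 2 * norm.sf(2.8)   # two-sided tail probability of a 2.8 sigma fluctuation
print(p, 1 - p)        # ~0.0051, i.e. roughly the 99.5% confidence level
```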

This may be interpreted as a "near-certainty" that there are new \(W^\pm_R\)-bosons whose mass is about \(2.1\TeV\) or perhaps between \(1.9\TeV\) and \(2.4\TeV\). Well, I am of course joking about the "near-certainty" but still, this "near-certainty" is 86 times stronger than the strongest available "proofs" that global warming exists.

The CMS collaboration dismisses the excess because it is nowhere near the red curve. So it must be a fluke. Well, it may also be a sign of new physics – but a different kind of physics than what the search was assuming. It's actually easy to adjust the theory so that it does predict a signal of this sort. Somewhat lower (\(g_R=0.6 g_L\)) couplings of the right-handed bosons are enough to weaken the predicted signal.

In a new hep-ph paper today,
A Signal of Right-Handed Charged Gauge Bosons at the LHC?,
Frank Deppisch and four co-authors argue that such new gauge bosons coupled to right-handed fermions may be predicted by \(SO(10)\) grand unified theories. The minimal \(SU(5)\) group is no good. Needless to say, I indeed love the \(SO(10)\) grand unification more than I love the \(SU(5)\) grand unification – especially because it's more (heterotic and) stringy and the fermions are produced in a single multiplet, not two.

The asymmetry in the left-handed and right-handed coupling (note that they need a suppression \(0.6\) when going from the left to the right) may be achieved in "LRSM scenarios": the scalars charged under \(SU(2)_L\) have a different mass than those under \(SU(2)_R\), and the implied modifications of the RG running are enough to make the left-handed and right-handed couplings significantly different at low energies.

All these possibilities sound rather natural and Deppisch et al. are clearly excited about their proposal and think that it's the most promising potential signal of new physics at the LHC yet. I think that the probability is above 95% that this particular "signal" will go away but people who are interested in HEP experiments and phenomenology simply cannot and shouldn't ignore such news.

by Luboš Motl (noreply@blogger.com) at July 22, 2014 06:23 AM

Emily Lakdawalla - The Planetary Society Blog

Chang'e 3 update: Both rover and lander still alive at the end of their eighth lunar day
Despite the fact that it hasn't moved for 6 months, the plucky Yutu rover on the Moon is still alive. Its signal is periodically detected by amateur radio astronomers, most recently on July 19. A story posted today by the Chinese state news agency offers a new hypothesis to explain the failure of the rover's mobility systems.

July 22, 2014 12:26 AM

July 21, 2014

The n-Category Cafe

Pullbacks That Preserve Weak Equivalences

The following concept seems to have been reinvented a bunch of times by a bunch of people, and every time they give it a different name.

Definition: Let \(C\) be a category with pullbacks and a class of weak equivalences. A morphism \(f:A\to B\) is a [insert name here] if the pullback functor \(f^\ast:C/B \to C/A\) preserves weak equivalences.

In a right proper model category, every fibration is one of these. But even in that case, there are usually more of these than just the fibrations. There is of course also a dual notion in which pullbacks are replaced by pushouts, and every cofibration in a left proper model category is one of those.

What should we call them?

The names that I’m aware of that have so far been given to these things are:

  1. sharp map, by Charles Rezk. This is a dualization of the terminology flat map used for the dual notion by Mike Hopkins (I don’t know a reference, does anyone?). I presume that Hopkins’ motivation was that a ring homomorphism is flat if tensoring with it (which is the pushout in the category of commutative rings) is exact, hence preserves weak equivalences of chain complexes.

    However, “flat” has the problem of being a rather overused word. For instance, we may want to talk about these objects in the canonical model structure on \(Cat\) (where in fact it turns out that every such functor is a cofibration), but flat functor has a very different meaning. David White has pointed out that “flat” would also make sense to use for the monoid axiom in monoidal model categories.

  2. right proper, by Andrei Radulescu-Banu. This is presumably motivated by the above-mentioned fact that fibrations in right proper model categories are such. Unfortunately, proper map also has another meaning.

  3. \(h\)-fibration, by Berger and Batanin. This is presumably motivated by the fact that “\(h\)-cofibration” has been used by May and Sigurdsson for an intrinsic notion of cofibration in topologically enriched categories, that specializes in compactly generated spaces to closed Hurewicz cofibrations, and pushouts along the latter preserve weak homotopy equivalences. However, it makes more sense to me to keep “\(h\)-cofibration” with May and Sigurdsson’s original meaning.

  4. Grothendieck \(W\)-fibration (where \(W\) is the class of weak equivalences on \(C\)), by Ara and Maltsiniotis. Apparently this comes from unpublished work of Grothendieck. Here I guess the motivation is that these maps are “like fibrations” and are determined by the class \(W\) of weak equivalences.

Does anyone know of other references for this notion, perhaps with other names? And any opinions on what the best name is? I’m currently inclined towards “\(W\)-fibration” mainly because it doesn’t clash with anything else, but I could be convinced otherwise.

by shulman (viritrilbia@gmail.com) at July 21, 2014 10:43 PM

Christian P. Robert - xi'an's og

Off from Cancun [los scientificos Maya]

The flight back from ISBA 2014 was not as smooth as the flight in: it took one hour for the shuttle to take us to the airport thanks to a driver posing as a touristic guide [who needs a guide when going home?!] and droning on and on about Cancún and the Maya heritage [as far as I could guess from his Spanish]. Learning at the airport that our flight to Mexico City was delayed, then too delayed for us to make the connection, with no hotel room available there, then suggesting to the desk personnel every possible European city to learn the flight had left or was about to leave, missing London by a hair, thanks to our droning friend on the scientific Mayas, and eventually being bused to the airport hotel, too far from the last poster session we could have attended!, and leaving early the next morning to Atlanta and then Paris. Which means we could have stayed for most of the remaining sessions and been back home at about the same time…


Filed under: pictures, Statistics, Travel, University life Tagged: Aero Mexico, Cancún, flight, ISBA 2014, Maya, Mexico, poster session

by xi'an at July 21, 2014 10:14 PM

Quantum Diaries

Prototype CT scanner could improve targeting accuracy in proton therapy treatment

This article appeared in Fermilab Today on July 21, 2014.

Members of the prototype proton CT scanner collaboration move the detector into the CDH Proton Center in Warrenville. Photo: Reidar Hahn


A prototype proton CT scanner developed by Fermilab and Northern Illinois University could someday reduce the amount of radiation delivered to healthy tissue in a patient undergoing cancer treatment.

The proton CT scanner would better target radiation doses to the cancerous tumors during proton therapy treatment. Physicists recently started testing with beam at the CDH Proton Center in Warrenville.

To create a custom treatment plan for each proton therapy patient, radiation oncologists currently use X-ray CT scanners to develop 3-D images of patient anatomy, including the tumor, to determine the size, shape and density of all organs and tissues in the body. To make sure all the tumor cells are irradiated to the prescribed dose, doctors often set the targeting volume to include a minimal amount of healthy tissue just outside the tumor.

Collaborators believe that the prototype proton CT, which is essentially a particle detector, will provide a more precise 3-D map of the patient anatomy. This allows doctors to more precisely target beam delivery, reducing the amount of radiation to healthy tissue during the CT process and treatment.

“The dose to the patient with this method would be lower than using X-ray CTs while getting better precision on the imaging,” said Fermilab’s Peter Wilson, PPD associate head for engineering and support.

Fermilab became involved in the project in 2011 at the request of NIU’s high-energy physics team because of the laboratory’s detector building expertise.

The project’s goal was a tall order, Wilson explained. The group wanted to build a prototype device, imaging software and computing system that could collect data from 1 billion protons in less than 10 minutes and then produce a 3-D reconstructed image of a human head, also in less than 10 minutes. To do that, they needed to create a device that could read data very quickly, since every second data from 2 million protons would be sent from the device — which detects only one proton at a time — to a computer.

NIU physicist Victor Rykalin recommended building a scintillating fiber tracker detector with silicon photomultipliers. A similar detector was used in the DZero experiment.

“The new prototype CT is a good example of the technical expertise of our staff in detector technology. Their expertise goes back 35 to 45 years and is really what makes it possible for us to do this,” Wilson said.

In the prototype CT, protons pass through two tracking stations, which track the particles’ trajectories in three dimensions. (See figure.) The protons then pass through the patient and finally through two more tracking stations before stopping in the energy detector, which is used to calculate the total energy loss through the patient. Devices called silicon photomultipliers pick up signals from the light resulting from these interactions and subsequently transmit electronic signals to a data acquisition system.


In the prototype proton CT scanner, protons enter from the left, passing through planes of fibers and the patient’s head. Data from the protons’ trajectories, including the energy deposited in the patient, is collected in a data acquisition system (right), which is then used to map the patient’s tissue. Image courtesy of George Coutrakon, NIU

Scientists use specialized software and a high-performance computer at NIU to accurately map the proton stopping powers in each cubic millimeter of the patient. From this map, visually displayed as conventional CT slices, the physician can outline the margins, dimensions and location of the tumor.

Elements of the prototype were developed at both NIU and Fermilab and then put together at Fermilab. NIU developed the software and computing systems. The teams at Fermilab worked on the design and construction of the tracker and the electronics to read the tracker and energy measurement. The scintillator plates, fibers and trackers were also prepared at Fermilab. A group of about eight NIU students, led by NIU’s Vishnu Zutshi, helped build the detector at Fermilab.

“A project like this requires collaboration across multiple areas of expertise,” said George Coutrakon, medical physicist and co-investigator for the project at NIU. “We’ve built on others’ previous work, and in that sense, the collaboration extends beyond NIU and Fermilab.”

Rhianna Wisniewski

by Fermilab at July 21, 2014 08:45 PM

The n-Category Cafe

The Place of Diversity in Pure Mathematics

Nope, this isn’t about gender or social balance in math departments, important as those are. On Friday, Glasgow’s interdisciplinary Boyd Orr Centre for Population and Ecosystem Health — named after the whirlwind of Nobel-Peace-Prize-winning scientific energy that was John Boyd Orr — held a one-day conference on diversity in multiple biological senses, from the large scale of rainforest ecosystems right down to the microscopic scale of pathogens in your blood.

Cartoon of John Boyd Orr

I used my talk (slides here) to argue that the concept of diversity is fundamentally a mathematical one, and that, moreover, it is closely related to core mathematical quantities that have been studied continuously since the time of Euclid.

In a sense, there’s nothing new here: I’ve probably written about all the mathematical content at least once before on this blog. But in another sense, it was a really new talk. I had to think very hard about how to present this material for a mixed group of ecologists, botanists, epidemiologists, mathematical modellers, and so on, all of whom are active professional scientists but some of whom haven’t studied mathematics since high school. That’s why I began the talk with an explanation of how pure mathematics looks these days.

I presented two pieces of evidence that diversity is intimately connected to ancient, fundamental mathematical concepts.

The first piece of evidence is a connection at one remove, and schematically looks like this:

maximum diversity \(\leftrightarrow\) magnitude \(\leftrightarrow\) intrinsic volumes

The left leg is a theorem asserting that when you have a collection of species and some notion of inter-species distance (e.g. genetic distance), the maximum diversity over all possible abundance distributions is closely related to the magnitude of the metric space that the species form.

The right leg is a conjecture by Simon Willerton and me. It states that for convex subsets of \(\mathbb{R}^n\), magnitude is closely related to perimeter, volume, surface area, and so on. When I mentioned “quantities that have been studied continuously since the time of Euclid”, that’s what I had in mind. The full-strength conjecture requires you to know about “intrinsic volumes”, which are the higher-dimensional versions of these quantities. But the 2-dimensional conjecture is very elementary, and described here.

The second piece of evidence was a very brief account of a theorem of Mark Meckes, concerning fractional dimension of subsets \(X\) of \(\mathbb{R}^n\) (slide 15, and Corollary 7.4 here). One of the standard notions of fractional dimension is Minkowski dimension (also known by other names such as Kolmogorov or box-counting dimension). On the other hand, the rate of growth of the magnitude function \(t \mapsto \left| t X \right|\) is also a decent notion of dimension. Mark showed that they are, in fact, the same. Thus, for any compact \(X \subseteq \mathbb{R}^n\) with a well-defined Minkowski dimension \(\dim X\), there are positive constants \(c\) and \(C\) such that

\[
c\, t^{\dim X} \leq \left| t X \right| \leq C\, t^{\dim X}
\]

for all \(t \gg 0\).
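
As a concrete sanity check (a numerical sketch, not part of the talk), both statements can be seen in the simplest case, the unit interval: its magnitude is known to be exactly \(1 + t/2\), so an intrinsic volume (the length) shows up in the leading term and the growth exponent equals the dimension, 1. Approximating the interval by evenly spaced points:

```python
import numpy as np

n = 500
x = np.linspace(0.0, 1.0, n)             # n-point approximation of the unit interval
D = np.abs(x[:, None] - x[None, :])      # pairwise distances

def magnitude(t):
    """Magnitude of the scaled finite space tX: sum of the weighting w solving Zw = 1."""
    Z = np.exp(-t * D)
    return np.linalg.solve(Z, np.ones(n)).sum()

for t in (1.0, 10.0, 100.0):
    print(t, round(magnitude(t), 2), 1 + t / 2)   # finite approximation vs the exact 1 + t/2
```

The approximation tightens as \(n\) grows, and on a log-log plot the growth exponent of \(\left| t X \right|\) is 1, which is the content of the theorem above in its most elementary case.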

One remarkable feature of the proof is that it makes essential use of the concept of maximum diversity, where diversity is measured in precisely the way that Christina Cobbold and I came up with for use in ecology.

So, work on diversity has already got to the stage where application-driven problems are enabling advances in pure mathematics. This is a familiar dynamic in older fields of application such as physics, but I think the fact that this is already happening in the relatively new field of diversity theory is a promising sign. It suggests that aside from all the applications, the mathematics of diversity has a lot to give pure mathematics itself.

Next April, John Baez and friends are running a three-day investigative workshop on Entropy and information in biological systems at the National Institute for Mathematical and Biological Synthesis in Knoxville, Tennessee. I hope this will provide a good opportunity for deepening our understanding of the interplay between mathematics and diversity (which is closely related to entropy and information). If you’re interested in coming, you can apply online.

by leinster (tom.leinster@ed.ac.uk) at July 21, 2014 08:36 PM

Marco Frasca - The Gauge Connection

Do quarks grant confinement?

ResearchBlogging.org

In 2010 I went to Ghent in Belgium for a very nice Conference on QCD. My contribution was accepted and I had the chance to describe my view about this matter. The result was this contribution to the proceedings. The content of this paper was really revolutionary at that time as my view about Yang-Mills theory, mass gap and the role of quarks was almost completely out of track with respect to the rest of the community. So, I am deeply grateful to the Organizers for this opportunity. The main ideas I put forward were

  • Yang-Mills theory has an infrared trivial fixed point. The theory is trivial exactly as the scalar field theory is.
  • Due to this, gluon propagator is well-represented by a sum of weighted Yukawa propagators.
  • The theory acquires a mass gap that is just the ground state of a tower of states with the spectrum of a harmonic oscillator.
  • The reason why Yang-Mills theory is trivial and QCD is not in the infrared limit is the presence of quarks. Their existence moves the theory from being trivial to asymptotic safety.

These results, which I have published in respectable journals, became the reason for rejection of most of my successive papers by several referees, notwithstanding that there were no serious reasons motivating it. But this is routine in our activity. Indeed, what annoyed me a lot was a referee’s report claiming that my work was incorrect because the last of my statements was incorrect: Quark existence is not a correct motivation to claim asymptotic safety, and so confinement, for QCD. Another offending point was the strong support my approach was giving to the idea of a decoupling solution as was emerging from lattice computations on extended volumes. There was a widespread idea that the gluon propagator should go to zero in a pure Yang-Mills theory to grant confinement and, if not so, an infrared non-trivial fixed point must exist.

Recently, my last point has been vindicated by a group that has been instrumental in the history of this corner of research in physics. I have seen a couple of papers on arxiv, this and this, strongly supporting my view. They are by Markus Höpfer, Christian Fischer and Reinhard Alkofer. These authors work in the conformal window, which means that, for them, the lightest quarks are massless and chiral symmetry is exact. Indeed, in their study quarks do not even get mass dynamically. But the question they answer is somewhat different: granted that the theory is infrared trivial (they do not state this explicitly, as this is not yet recognized, even if it is a “duck” indeed), how does the trivial infrared fixed point move as the number of quarks increases? The answer is in the following wonderful graph, with N_f the number of quarks (flavours):

QCD Running Coupling

From this picture it is evident that there exists a critical number of quarks for which the theory becomes asymptotically safe and confining. So, quarks are critical to grant confinement and Yang-Mills theory can happily be trivial. The authors took great care about all the involved approximations as they solved Dyson-Schwinger equations, as usual with a proper truncation (this has always been their main tool). From the picture it is seen that if the number of flavours is below a threshold the theory is generally trivial, including the case of zero quarks. Otherwise, a non-trivial infrared fixed point is reached, granting confinement. Then, the gluon propagator is seen to move from a Yukawa form to a scaling form.

This result is really exciting and moves us a significant step forward toward the understanding of confinement. By my side, I am happy that another one of my ideas gets such a substantial confirmation.

Marco Frasca (2010). Mapping theorem and Green functions in Yang-Mills theory PoS FacesQCD:039,2010 arXiv: 1011.3643v3

Markus Hopfer, Christian S. Fischer, & Reinhard Alkofer (2014). Running coupling in the conformal window of large-Nf QCD arXiv arXiv: 1405.7031v1

Markus Hopfer, Christian S. Fischer, & Reinhard Alkofer (2014). Infrared behaviour of propagators and running coupling in the conformal window of QCD arXiv arXiv: 1405.7340v1


Filed under: Particle Physics, Physics, QCD Tagged: Confinement, Mass Gap, Quantum chromodynamics, Running coupling, Triviality, Yang-Mills Propagators, Yang-Mills theory

by mfrasca at July 21, 2014 06:31 PM

CERN Bulletin

Marco Grippeling (1966-2014)

It was with great sadness that we learnt that our former colleague and friend Marco Grippeling was amongst the victims of the Malaysia Airlines crash.

 

Marco, a Melbourne-based cyber security specialist, boarded flight MH17 on his way back to Australia after spending his last days with friends and family in his home country of the Netherlands.

Marco joined CERN as a Technical Student in the PS Division in 1992.  In 1994 he moved to the LHC Division as a Staff Member, leaving for more exotic horizons in 2000.

Marco will always be remembered for his enthusiasm and joie de vivre.

Our deepest condolences go to his family and friends at this time.

His former colleagues and friends at CERN

July 21, 2014 06:07 PM

Symmetrybreaking - Fermilab/SLAC

Helping cancer treatment hit its mark

A prototype CT scanner could improve targeting accuracy in proton therapy treatment.

A prototype medical device developed by Fermilab and Northern Illinois University could someday reduce the amount of radiation delivered to healthy tissue in a patient undergoing cancer treatment.

The device, called a proton CT scanner, would better target radiation doses to cancerous tumors during proton therapy treatment. Physicists recently started testing with beam at the CDH Proton Center in Warrenville, Illinois.

To create a custom treatment plan for each proton therapy patient, radiation oncologists currently use X-ray CT scanners to develop 3-D images of patient anatomy, including the tumor, to determine the size, shape and density of all organs and tissues in the body. To make sure all the tumor cells are irradiated to the prescribed dose, doctors often set the targeting volume to include a minimal amount of healthy tissue just outside the tumor.

Collaborators believe that the prototype proton CT, which is essentially a particle detector, will provide a more precise 3-D map of the patient anatomy. This allows doctors to more precisely target beam delivery, reducing the amount of radiation to healthy tissue during the CT process and treatment.

“The dose to the patient with this method would be lower than using X-ray CTs while getting better precision on the imaging,” says Fermilab’s Peter Wilson, associate head for engineering and support in the particle physics division.

Fermilab became involved in the project in 2011 at the request of NIU’s high-energy physics team because of the laboratory’s detector building expertise.

The project’s goal was a tall order, Wilson explains. The group wanted to build a prototype device, imaging software and computing system that could collect data from 1 billion protons in less than 10 minutes and then produce a 3-D reconstructed image of a human head, also in less than 10 minutes. To do that, they needed to create a device that could read data very quickly, since every second data from 2 million protons would be sent from the device—which detects only one proton at a time—to a computer.

NIU physicist Victor Rykalin recommended building a scintillating fiber tracker detector with silicon photomultipliers. A similar detector was used in the DZero experiment at Fermilab.

“The new prototype CT is a good example of the technical expertise of our staff in detector technology. Their expertise goes back 35 to 45 years and is really what makes it possible for us to do this,” Wilson says.

In the prototype CT, protons pass through two tracking stations, which track the particles’ trajectories in three dimensions. The protons then pass through the patient and finally through two more tracking stations before stopping in the energy detector, which is used to calculate the total energy loss through the patient. Devices called silicon photomultipliers pick up signals from the light resulting from these interactions and subsequently transmit electronic signals to a data acquisition system.

Scientists use specialized software and a high-performance computer at NIU to accurately map the proton stopping powers in each cubic millimeter of the patient. From this map, visually displayed as conventional CT slices, the physician can outline the margins, dimensions and location of the tumor.

Elements of the prototype were developed at both NIU and Fermilab and then put together at Fermilab. NIU developed the software and computing systems. The teams at Fermilab worked on the design and construction of the tracker and the electronics to read the tracker and energy measurement. The scintillator plates, fibers and trackers were also prepared at Fermilab. A group of about eight NIU students, led by NIU’s Vishnu Zutshi, helped build the detector at Fermilab.

“A project like this requires collaboration across multiple areas of expertise,” says George Coutrakon, medical physicist and co-investigator for the project at NIU. “We’ve built on others’ previous work, and in that sense, the collaboration extends beyond NIU and Fermilab.”


A version of this article was published in Fermilab Today.

 


by Rhianna Wisniewski at July 21, 2014 03:52 PM

arXiv blog

Mathematicians Explain Why Social Epidemics Spread Faster in Some Countries Than Others

Psychologists have always puzzled over why people in Sweden were slower to start smoking and slower to stop. Now a group of mathematicians have worked out why.


In January 1964, the U.S. Surgeon General’s Advisory Committee on Smoking and Health published a landmark report warning of the serious health effects of tobacco. It was not the first such report but it is probably the most famous because it kick-started a global campaign to reduce the levels of smoking and the deaths it causes.

July 21, 2014 02:38 PM

Emily Lakdawalla - The Planetary Society Blog

One Day on Mars
A single day's observations take us from orbital overviews all the way down to ground truth.

July 21, 2014 01:03 PM

ZapperZ - Physics and Physicists

Angry Birds Realized In A Classroom Experiment
If you can't get kids/students to be interested in a lesson when you can tie in with a favorite game, then there's nothing more you can do.

This article (which you can get for free) shows the physics and what you will need to build a water balloon launcher to teach projectile motion. It includes the air drag factor, since this is done not in the world of Angry Birds, but in real life.

Abstract: A simple, collapsible design for a large water balloon slingshot launcher features a fully adjustable initial velocity vector and a balanced launch platform. The design facilitates quantitative explorations of the dependence of the balloon range and time of flight on the initial speed, launch angle, and projectile mass, in an environment where quadratic air drag is important. Presented are theory and experiments that characterize this drag, and theory and experiments that characterize the nonlinear elastic energy and hysteresis of the latex tubing used in the slingshot. The experiments can be carried out with inexpensive and readily available tools and materials. The launcher provides an engaging way to teach projectile motion and elastic energy to students of a wide variety of ages.

There ya go!

What I like about this one, compared to the common projectile motion demos found in many high schools, is that quite a bit of careful thought has been given to the physics. One can do this as simply as one wants, or ramp up the complexity by including factors that are not normally considered in such situations.
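For anyone who wants to play with the "ramped up" version, here is a minimal sketch, not taken from the paper, that integrates 2-D projectile motion with quadratic air drag; all the parameter values are illustrative guesses:

```python
import math

# Toy 2-D projectile with quadratic drag: m dv/dt = -m g yhat - 0.5 rho Cd A |v| v
# All parameter values below are illustrative guesses, not from the paper.
m, g = 0.5, 9.81                 # balloon mass (kg), gravity (m/s^2)
rho, Cd, radius = 1.2, 0.5, 0.05 # air density, drag coefficient, balloon radius (m)
A = math.pi * radius**2          # cross-sectional area
k = 0.5 * rho * Cd * A           # drag prefactor

v0, angle = 25.0, math.radians(45)        # launch speed (m/s) and launch angle
x, y = 0.0, 0.0
vx, vy = v0 * math.cos(angle), v0 * math.sin(angle)

dt, t = 1e-3, 0.0
while y >= 0.0:
    speed = math.hypot(vx, vy)
    ax = -(k / m) * speed * vx            # drag opposes the velocity
    ay = -g - (k / m) * speed * vy
    vx += ax * dt
    vy += ay * dt
    x += vx * dt
    y += vy * dt
    t += dt

print(f"range ~ {x:.1f} m, flight time ~ {t:.2f} s (no-drag range would be "
      f"{v0**2 * math.sin(2 * angle) / g:.1f} m)")
```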

Zz.

by ZapperZ (noreply@blogger.com) at July 21, 2014 12:39 PM

Peter Coles - In the Dark

The Origin of Mass

Back in Cardiff for the weekend I was looking for some documents and stumbled across this, my National Health Service Baby Weight Card (vintage 1963). I’m told that I even lost a bit of weight between my birth and the first entry on the card:

Birth_weight

Aside from my considerable mass two further facts about my birth are worth mentioning. One is that I emerged in the incorrect polarization state, with shoulders East-West instead of North-South; the result of this was that my left collarbone was broken during the delivery. I imagine this wasn’t exactly a comfortable experience for my mother either! I subsequently broke the same collarbone falling off a wall when I was a toddler and it never healed properly, hence I can’t rotate my left arm. If I try to do the front crawl when swimming I go around in circles! The other noteworthy fact of my birth was that when I was finally extricated I was found to be completely covered in hair, like a monkey…


by telescoper at July 21, 2014 12:30 PM

July 20, 2014

The n-Category Cafe

The Ten-Fold Way

There are 10 of each of these things:

  • Associative real super-division algebras.

  • Classical families of compact symmetric spaces.

  • Ways that Hamiltonians can get along with time reversal ($T$) and charge conjugation ($C$) symmetry.

  • Dimensions of spacetime in string theory.

It’s too bad nobody took up writing This Week’s Finds in Mathematical Physics when I quit. Someone should have explained this stuff in a nice simple way, so I could read their summary instead of fighting my way through the original papers. I don’t have much time for this sort of stuff anymore!

Luckily there are some good places to read about this stuff:

Let me start by explaining the basic idea, and then move on to more fancy aspects.

Ten kinds of matter

The idea of the ten-fold way goes back at least to 1996, when Altland and Zirnbauer discovered that substances can be divided into 10 kinds.

The basic idea is pretty simple. Some substances have time-reversal symmetry: they would look the same, even on the atomic level, if you made a movie of them and ran it backwards. Some don’t — these are more rare, like certain superconductors made of yttrium barium copper oxide! Time reversal symmetry is described by an antiunitary operator $T$ that squares to 1 or to -1: please take my word for this, it’s a quantum thing. So, we get 3 choices, which are listed in the chart under $T$ as 1, -1, or 0 (no time reversal symmetry).

Similarly, some substances have charge conjugation symmetry, meaning a symmetry where we switch particles and holes: places where a particle is missing. The ‘particles’ here can be rather abstract things, like phonons - little vibrations of sound in a substance, which act like particles — or spinons — little vibrations in the lined-up spins of electrons. Basically any way that something can wave can, thanks to quantum mechanics, act like a particle. And sometimes we can switch particles and holes, and a substance will act the same way!

Like time reversal symmetry, charge conjugation symmetry is described by an antiunitary operator $C$ that can square to 1 or to -1. So again we get 3 choices, listed in the chart under $C$ as 1, -1, or 0 (no charge conjugation symmetry).

So far we have 3 × 3 = 9 kinds of matter. What is the tenth kind?

Some kinds of matter don’t have time reversal or charge conjugation symmetry, but they’re symmetrical under the combination of time reversal and charge conjugation! You switch particles and holes and run the movie backwards, and things look the same!

In the chart they write 1 under the $S$ when your matter has this combined symmetry, and 0 when it doesn’t. So, “0 0 1” is the tenth kind of matter (the second row in the chart).
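Here is the counting spelled out as a tiny sketch (just the bookkeeping from the last few paragraphs; when both $T$ and $C$ are present their product plays the role of $S$, so $S$ only adds a new option when both are absent):

```python
from itertools import product

# Enumerate the ten symmetry classes described above.
# T, C take values +1 or -1 (antiunitary symmetry squaring to +/-1) or 0 (absent).
# When both are absent, the combined symmetry S may still be present (1) or
# absent (0); otherwise S is fixed by T and C.

classes = []
for T, C in product((0, 1, -1), repeat=2):
    if T == 0 and C == 0:
        classes.append((T, C, 0))   # no symmetry at all
        classes.append((T, C, 1))   # only the combined symmetry S survives
    else:
        S = 1 if (T != 0 and C != 0) else 0
        classes.append((T, C, S))

for T, C, S in classes:
    print(f"T={T:2d}  C={C:2d}  S={S}")
print(len(classes), "classes")      # prints 10
```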

This is just the beginning of an amazing story. Since then people have found substances called topological insulators that act like insulators in their interior but conduct electricity on their surface. We can make 3-dimensional topological insulators, but also 2-dimensional ones (that is, thin films) and even 1-dimensional ones (wires). And we can theorize about higher-dimensional ones, though this is mainly a mathematical game.

So we can ask which of the 10 kinds of substance can arise as topological insulators in various dimensions. And the answer is: in any particular dimension, only 5 kinds can show up. But it’s a different 5 in different dimensions! This chart shows how it works for dimensions 1 through 8. The kinds that can’t show up are labelled 0.

If you look at the chart, you’ll see it has some nice patterns. And it repeats after dimension 8. In other words, dimension 9 works just like dimension 1, and so on.

If you read some of the papers I listed, you’ll see that the $\mathbb{Z}$’s and $\mathbb{Z}_2$’s in the chart are the homotopy groups of the ten classical series of compact symmetric spaces. The fact that dimension $n+8$ works like dimension $n$ is called Bott periodicity.
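For concreteness, here are the stable homotopy groups behind those two periodicities, quoted from memory as a quick reference (so worth double-checking against the papers):

```python
# Stable homotopy groups behind Bott periodicity (quoted from memory; they
# repeat with period 8 in the real case and period 2 in the complex case).
pi_O = ["Z2", "Z2", "0", "Z", "0", "0", "0", "Z"]   # pi_n(O), n = 0..7
pi_U = ["0", "Z"]                                   # pi_n(U), n = 0..1

def pi(groups, n):
    """Periodicity: dimension n behaves like n mod the period."""
    return groups[n % len(groups)]

for n in range(12):
    print(n, pi(pi_O, n), pi(pi_U, n))
```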

Furthermore, the stuff about operators $T$, $C$ and $S$ that square to 1, -1 or don’t exist at all is closely connected to the classification of associative real super division algebras. It all fits together.

Super division algebras

In 2005, Todd Trimble wrote a short paper called The super Brauer group and super division algebras.

In it, he gave a quick way to classify the associative real super division algebras: that is, finite-dimensional associative real $\mathbb{Z}_2$-graded algebras having the property that every nonzero homogeneous element is invertible. The result was known, but I really enjoyed Todd’s effortless proof.

However, I didn’t notice that there are exactly 10 of these guys. Now this turns out to be a big deal.

3 of them are purely even, with no odd part: the usual division algebras $\mathbb{R}$, $\mathbb{C}$ and $\mathbb{H}$.

7 of them are not purely even. Of these, 6 are Morita equivalent to the real Clifford algebras $Cl_1, Cl_2, Cl_3, Cl_5, Cl_6$ and $Cl_7$. These are the super algebras generated by 1, 2, 3, 5, 6, or 7 odd square roots of -1.

Now you should have at least two questions:

  • What’s ‘Morita equivalence’? — and even if you know, why should it matter here? Two algebras are Morita equivalent if they have equivalent categories of representations. The same definition works for super algebras, though now we look at their representations on super vector spaces ($\mathbb{Z}_2$-graded vector spaces). In physics, we’re especially interested in representations of algebras, so it often makes sense to count two as ‘the same’ if they’re Morita equivalent.

  • 1, 2, 3, 5, 6, and 7? That’s weird — why not 4? Well, Todd showed that $Cl_4$ is Morita equivalent to the purely even super division algebra $\mathbb{H}$. So we already had that one on our list. Similarly, why not 0? $Cl_0$ is just $\mathbb{R}$. So we had that one too.

Representations of Clifford algebras are used to describe spin-1/2 particles, so it’s exciting that 8 of the 10 associative real super division algebras are Morita equivalent to real Clifford algebras.

But I’ve already mentioned one that’s not: the complex numbers, $\mathbb{C}$, regarded as a purely even algebra. And there’s one more! It’s the complex Clifford algebra $\mathbb{C}\mathrm{l}_1$. This is the super algebra you get by taking the purely even algebra $\mathbb{C}$ and throwing in one odd square root of -1.

As soon as you hear that, you notice that the purely even algebra $\mathbb{C}$ is the complex Clifford algebra $\mathbb{C}\mathrm{l}_0$. In other words, it’s the super algebra you get by taking the purely even algebra $\mathbb{C}$ and throwing in no odd square roots of -1.

More connections

At this point things start fitting together:

  • You can multiply Morita equivalence classes of algebras using the tensor product of algebras: $[A] \otimes [B] = [A \otimes B]$. Some equivalence classes have multiplicative inverses, and these form the Brauer group. We can do the same thing for super algebras, and get the super Brauer group. The super division algebras Morita equivalent to $Cl_0, \dots, Cl_7$ serve as representatives of the super Brauer group of the real numbers, which is $\mathbb{Z}_8$. I explained this in week211 and further in week212. It’s a nice purely algebraic way to think about real Bott periodicity! (A tiny sketch of this bookkeeping appears just after this list.)

  • As we’ve seen, the super division algebras Morita equivalent to $Cl_0$ and $Cl_4$ are a bit funny. They’re purely even. So they serve as representatives of the plain old Brauer group of the real numbers, which is $\mathbb{Z}_2$.

  • On the other hand, the complex Clifford algebras $\mathbb{C}\mathrm{l}_0 = \mathbb{C}$ and $\mathbb{C}\mathrm{l}_1$ serve as representatives of the super Brauer group of the complex numbers, which is also $\mathbb{Z}_2$. This is a purely algebraic way to think about complex Bott periodicity, which has period 2 instead of period 8.
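Here is a minimal sketch of the bookkeeping in the first bullet above: if we label the Morita class of $Cl_n$ simply by $n$, the graded tensor product just adds labels mod 8. This models only the $\mathbb{Z}_8$ group structure described there, nothing more:

```python
# The super Brauer group of the real numbers, modelled as Z_8:
# the Morita class of Cl_m (graded) tensor Cl_n is the class of Cl_(m+n mod 8).
class CliffordClass:
    def __init__(self, n):
        self.n = n % 8
    def __mul__(self, other):               # tensor product of Morita classes
        return CliffordClass(self.n + other.n)
    def inverse(self):                      # every class is invertible here
        return CliffordClass(-self.n)
    def __repr__(self):
        return f"[Cl_{self.n}]"

a, b = CliffordClass(3), CliffordClass(7)
print(a * b)                 # [Cl_2], since 3 + 7 = 10 = 2 mod 8
print(a * a.inverse())       # [Cl_0], the identity (purely even R)
```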

Meanwhile, the purely even $\mathbb{R}$, $\mathbb{C}$ and $\mathbb{H}$ underlie Dyson’s ‘three-fold way’, which I explained in detail here:

Briefly, if you have an irreducible unitary representation of a group on a complex Hilbert space $H$, there are three possibilities:

  • The representation is isomorphic to its dual via an invariant symmetric bilinear pairing $g : H \times H \to \mathbb{C}$. In this case it has an invariant antiunitary operator $J : H \to H$ with $J^2 = 1$. This lets us write our representation as the complexification of a real one.

  • The representation is isomorphic to its dual via an invariant antisymmetric bilinear pairing $\omega : H \times H \to \mathbb{C}$. In this case it has an invariant antiunitary operator $J : H \to H$ with $J^2 = -1$. This lets us promote our representation to a quaternionic one.

  • The representation is not isomorphic to its dual. In this case we say it’s truly complex.

In physics applications, we can take $J$ to be either time reversal symmetry, $T$, or charge conjugation symmetry, $C$. Studying either symmetry separately leads us to Dyson’s three-fold way. Studying them both together leads to the ten-fold way!

So the ten-fold way seems to combine in one nice package:

  • real Bott periodicity,
  • complex Bott periodicity,
  • the real Brauer group,
  • the real super Brauer group,
  • the complex super Brauer group, and
  • the three-fold way.

I could throw ‘the complex Brauer group’ into this list, because that’s lurking here too, but it’s the trivial group, with $\mathbb{C}$ as its representative.

There really should be a better way to understand this. Here’s my best attempt right now.

The set of Morita equivalence classes of finite-dimensional real super algebras gets a commutative monoid structure thanks to direct sum. This commutative monoid then gets a commutative rig structure thanks to tensor product. This commutative rig — let’s call it $\mathfrak{R}$ — is apparently too complicated to understand in detail, though I’d love to be corrected about that. But we can peek at pieces:

  • We can look at the group of invertible elements in $\mathfrak{R}$ — more precisely, elements with multiplicative inverses. This is the real super Brauer group $\mathbb{Z}_8$.

  • We can look at the sub-rig of $\mathfrak{R}$ coming from semisimple purely even algebras. As a commutative monoid under addition, this is $\mathbb{N}^3$, since it’s generated by $\mathbb{R}$, $\mathbb{C}$ and $\mathbb{H}$. This commutative monoid becomes a rig with a funny multiplication table, e.g. $\mathbb{C} \otimes \mathbb{C} = \mathbb{C} \oplus \mathbb{C}$. This captures some aspects of the three-fold way (a toy version of this sub-rig appears just after this list).
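And here is that toy version: a Morita class of a semisimple purely even real algebra is a formal combination $a\,\mathbb{R} + b\,\mathbb{C} + c\,\mathbb{H}$, stored as a triple, with generator products encoding the standard facts $\mathbb{C} \otimes \mathbb{C} \cong \mathbb{C} \oplus \mathbb{C}$, $\mathbb{C} \otimes \mathbb{H} \cong \mathbb{C}[2]$ (Morita equivalent to $\mathbb{C}$) and $\mathbb{H} \otimes \mathbb{H} \cong \mathbb{R}[4]$ (Morita equivalent to $\mathbb{R}$). This is bookkeeping of well-known isomorphisms, not anything taken from the chart:

```python
from itertools import product

# Morita classes of semisimple purely even real algebras: formal sums
# a*[R] + b*[C] + c*[H], stored as triples (a, b, c).  Generator products:
# R(x)X = X, C(x)C = C+C, C(x)H = C (Morita), H(x)H = R (Morita).
R, C, H = (1, 0, 0), (0, 1, 0), (0, 0, 1)

GEN_TABLE = {
    (0, 0): R, (0, 1): C, (0, 2): H,            # R times R, C, H
    (1, 0): C, (1, 1): (0, 2, 0), (1, 2): C,    # C times R, C, H
    (2, 0): H, (2, 1): C, (2, 2): R,            # H times R, C, H
}

def add(x, y):
    return tuple(a + b for a, b in zip(x, y))

def mul(x, y):
    """Bilinear extension of the generator multiplication table."""
    result = (0, 0, 0)
    for i, j in product(range(3), repeat=2):
        term = GEN_TABLE[(i, j)]
        result = add(result, tuple(x[i] * y[j] * t for t in term))
    return result

print(mul(C, C))           # (0, 2, 0): C tensor C = C + C
print(mul(H, H))           # (1, 0, 0): H tensor H is Morita equivalent to R
print(mul(add(R, C), H))   # distributes over direct sum: (0, 1, 1)
```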

We should really look at a larger chunk of the rig $\mathfrak{R}$, one that includes both of these chunks. How about the sub-rig coming from all semisimple super algebras? What’s that?

And here’s another question: what’s the relation to the 10 classical families of compact symmetric spaces? For the full answer to that, I suggest reading Gregory Moore’s Quantum symmetries and compatible Hamiltonians. But if you look at this chart by Ryu et al, you’ll see these families involve a nice interplay between $\mathbb{R}$, $\mathbb{C}$ and $\mathbb{H}$, which is what this story is all about:

The families of symmetric spaces are listed in the column “Hamiltonian”.

All this stuff is fitting together more and more nicely! And if you look at the paper by Freed and Moore, you’ll see there’s a lot more involved when you take the symmetries of crystals into account. People are beginning to understand the algebraic and topological aspects of condensed matter much more deeply these days.

The list

Just for the record, here are all 10 associative real super division algebras. 8 are Morita equivalent to real Clifford algebras:

  • $Cl_0$ is the purely even division algebra $\mathbb{R}$.

  • $Cl_1$ is the super division algebra $\mathbb{R} \oplus \mathbb{R}e$, where $e$ is an odd element with $e^2 = -1$.

  • $Cl_2$ is the super division algebra $\mathbb{C} \oplus \mathbb{C}e$, where $e$ is an odd element with $e^2 = -1$ and $ei = -ie$.

  • $Cl_3$ is the super division algebra $\mathbb{H} \oplus \mathbb{H}e$, where $e$ is an odd element with $e^2 = 1$ and $ei = ie$, $ej = je$, $ek = ke$.

  • $Cl_4$ is $\mathbb{H}[2]$, the algebra of $2 \times 2$ quaternionic matrices, given a certain $\mathbb{Z}_2$-grading. This is Morita equivalent to the purely even division algebra $\mathbb{H}$.

  • $Cl_5$ is $\mathbb{H}[2] \oplus \mathbb{H}[2]$ given a certain $\mathbb{Z}_2$-grading. This is Morita equivalent to the super division algebra $\mathbb{H} \oplus \mathbb{H}e$ where $e$ is an odd element with $e^2 = -1$ and $ei = ie$, $ej = je$, $ek = ke$.

  • $Cl_6$ is $\mathbb{C}[4] \oplus \mathbb{C}[4]$ given a certain $\mathbb{Z}_2$-grading. This is Morita equivalent to the super division algebra $\mathbb{C} \oplus \mathbb{C}e$ where $e$ is an odd element with $e^2 = 1$ and $ei = -ie$.

  • $Cl_7$ is $\mathbb{R}[8] \oplus \mathbb{R}[8]$ given a certain $\mathbb{Z}_2$-grading. This is Morita equivalent to the super division algebra $\mathbb{R} \oplus \mathbb{R}e$ where $e$ is an odd element with $e^2 = 1$.

$Cl_{n+8}$ is Morita equivalent to $Cl_n$, so we can stop here if we’re just looking for Morita equivalence classes, and there also happen to be no more super division algebras down this road. It is nice to compare $Cl_n$ and $Cl_{8-n}$: there’s a nice pattern here.

The remaining 2 real super division algebras are complex Clifford algebras:

  • $\mathbb{C}\mathrm{l}_0$ is the purely even division algebra $\mathbb{C}$.

  • $\mathbb{C}\mathrm{l}_1$ is the super division algebra $\mathbb{C} \oplus \mathbb{C}e$, where $e$ is an odd element with $e^2 = -1$ and $ei = ie$.

In the last one we could also say “with $e^2 = 1$” — we’d get something isomorphic, not a new possibility.

Ten dimensions of string theory

Oh yeah — what about the 10 dimensions in string theory? Are they really related to the ten-fold way?

It seems weird, but I think the answer is “yes, at least slightly”.

Remember, 2 of the dimensions in 10d string theory are those of the string worldsheet, which is a complex manifold. The other 8 are connected to the octonions, which in turn are connected to the 8-fold periodicity of real Clifford algebra. So the 8+2 split in string theory is at least slightly connected to the 8+2 split in the list of associative real super division algebras.

This may be more of a joke than a deep observation. After all, the 8 dimensions of the octonions are not individual things with distinct identities, as the 8 super division algebras coming from real Clifford algebras are. So there’s no one-to-one correspondence going on here, just an equation between numbers.

Still, there are certain observations that would be silly to resist mentioning.

by john (baez@math.ucr.edu) at July 20, 2014 10:10 AM

July 19, 2014

Christian P. Robert - xi'an's og

do and [mostly] don’t…

Rather than staying in one of the conference hotels, I followed my habit of renting a flat by finding a nice studio in Cancún via airbnb. Fine except for having no internet connection. (The rental description mentioned “Wifi in the lobby”, which I stupidly interpreted as “lobby of the apartment”, but actually meant “lobby of the condominium building”… Could as well have been “lobby of the airport”.) The condo owner sent us a list of “don’ts” a few days ago, some of which are just plain funny (or tell of past disasters!):

- don’t drink heavily
- don’t party or make noise
- don’t host visitors, day or night
- don’t bang the front door or leave the balcony door open when opening the front door
- don’t put cans or bottles on top of the glass cooktop
- don’t cook elaborate meals
- don’t try to fit an entire chicken in the oven
- don’t spill oil or wine on the kitchentop
- don’t cut food directly on the kitchentop
- don’t eat or drink while in bed
- avoid frying, curry, and bacon
- shop for groceries only one day at a time
- hot water may or may not be available
- elevator may or may not be available
- don’t bring sand back in the condo


Filed under: pictures, Travel Tagged: beach, Cancún, condo, flat, Mexico, rental

by xi'an at July 19, 2014 10:14 PM

Emily Lakdawalla - The Planetary Society Blog

Mars and Europa: Contrasts in Mission Planning
Several announcements about proposed missions to Mars, and about the planning for a NASA return to Europa, highlight the contrasts in planning missions for these two high-priority destinations.

July 19, 2014 07:08 PM

Peter Coles - In the Dark

The Thunder Shower

A blink of lightning, then
a rumor, a grumble of white rain
growing in volume, rustling over the ground,
drenching the gravel in a wash of sound.
Drops tap like timpani or shine
like quavers on a line.

It rings on exposed tin,
a suite for water, wind and bin,
plinky Poulenc or strongly groaning Brahms’
rain-strings, a whole string section that describes
the very shapes of thought in warm
self-referential vibes

and spreading ripples. Soon
the whispering roar is a recital.
Jostling rain-crowds, clamorous and vital,
struggle in runnels through the afternoon.
The rhythm becomes a regular beat;
steam rises, body heat—

and now there’s city noise,
bits of recorded pop and rock,
the drums, the strident electronic shock,
a vast polyphony, the dense refrain
of wailing siren, truck and train
and incoherent cries.

All human life is there
in the unconfined, continuous crash
whose slow, diffused implosions gather up
car radios and alarms, the honk and beep,
and tiny voices in a crèche
piercing the muggy air.

Squalor and decadence,
the rackety global-franchise rush,
oil wars and water wars, the diatonic
crescendo of a cascading world economy
are audible in the hectic thrash
of this luxurious cadence.

The voice of Baal explodes,
raging and rumbling round the clouds,
frantic to crush the self-sufficient spaces
and re-impose his failed hegemony
in Canaan before moving on
to other simpler places.

At length the twining chords
run thin, a watery sun shines out,
the deluge slowly ceases, the guttural chant
subsides; a thrush sings, and discordant thirds
diminish like an exhausted concert
on the subdominant.

The angry downpour swarms
growling to far-flung fields and farms.
The drains are still alive with trickling water,
a few last drops drip from a broken gutter;
but the storm that created so much fuss
has lost interest in us.

by Derek Mahon (b 1941)


by telescoper at July 19, 2014 01:08 PM

Jester - Resonaances

Weekend Plot: all of dark matter
To put my recent posts into a bigger perspective, here's a graph summarizing all of the dark matter particles discovered so far via direct or indirect detection:

The graph shows the number of years the signal has survived vs. the inferred mass of the dark matter particle. The particle names follow the usual Particle Data Group conventions. Each label's size is related to the statistical significance of the signal. The colors correspond to the Bayesian likelihood that the signal originates from dark matter, from uncertain (red) to very unlikely (blue). The masses of the discovered particles span an impressive 11 orders of magnitude, although the largest concentration is near the weak scale (this is called the WIMP miracle). If I forgot any particle for which compelling evidence exists, let me know, and I will add it to the graph.

Here are the original references for the Bulbulon, Boehmot, Collaron, CDMeson, Daemon, Cresston, Hooperon, Wenigon, Pamelon, and the mother of Bert and Ernie.

by Jester (noreply@blogger.com) at July 19, 2014 12:17 PM

Jester - Resonaances

Follow up on BICEP
The BICEP2 collaboration claims the discovery of the primordial B-mode in the CMB at a very high confidence level. Résonaances recently reported on the Chinese whispers casting doubt on the statistical significance of that result. They were based in part on the work of Raphael Flauger and Colin Hill, rumors of which were spreading through email and coffee-time discussions. Today Raphael gave a public seminar describing this analysis; see the slides and the video.

The familiar number r=0.2 for the CMB tensor-to-scalar ratio is based on the assumption of zero foreground contribution in the region of the sky observed by BICEP. To argue that foregrounds should not be a big effect, the BICEP paper studied several models to estimate the galactic dust emission. Of those, only the data-driven models DDM1 and DDM2 were based on actual polarization data inadvertently shared by Planck. However, even these models suggest that foregrounds are not completely negligible. For example, subtracting the foregrounds estimated via DDM2 brings the central value of r down to 0.16 or 0.12, depending on how the model is used (cross-correlation vs. auto-correlation). If, instead, the cross-correlated BICEP2 and Keck Array data are used as an input, the tensor-to-scalar ratio can easily be below 0.1, in agreement with the existing bounds from Planck and WMAP.

Raphael's message is that, according to his analysis, the foreground emissions are larger than estimated by BICEP, and that systematic uncertainties on that estimate (due to incomplete information, modeling uncertainties, and scraping numbers from pdf slides) are also large. If that is true, the statistical significance of the primordial B-mode  detection is much weaker than what is being claimed by BICEP.

In his talk, Raphael described an independent attempt, the most complete to date, to extract the foregrounds from existing data. Apart from using the same Planck polarization fraction map as BICEP, he also included the Q and U all-sky maps (the letters refer to how polarization is parameterized), and models of polarized dust emission based on HI maps (21cm hydrogen line emission is supposed to track the galactic dust). One reason for the discrepancy with the BICEP estimates could be that the effect of the Cosmic Infrared Background - mostly unpolarized emission from faraway galaxies - is non-negligible. The green band in the plot shows the polarized dust emission obtained from the CIB-corrected DDM2 model, and compares it to the original BICEP estimate (blue dashed line).

The analysis then goes on to extract the foregrounds starting from several different premises. All available datasets (polarization reconstructed via HI maps, the information scraped from existing Planck polarization maps) seem to tell a similar story: galactic foregrounds can be large in the region of interest and the uncertainties are large. The money plot is this one:

Recall that the primordial B-mode signal should show up at moderate angular scales with l∼100 (the high-l end is dominated by non-primordial B-modes from gravitational lensing). Given the current uncertainties, the foreground emission may easily account for the entire BICEP2 signal in that region. Again, this does not prove that a tensor mode cannot be there. The story may still reach a happy ending, much like that of the discovery of accelerated expansion (where serious doubts about systematic uncertainties were also raised after the initial announcement). But the ball is now in BICEP's court: they need to convincingly demonstrate that foregrounds are under control.

Until that happens, I think their result does not stand.

by Jester (noreply@blogger.com) at July 19, 2014 12:15 PM

Jester - Resonaances

Another one bites the dust...
...though it's not BICEP2 this time :) This is a long overdue update on the forward-backward asymmetry of the top quark production.
Recall that, in a collision of a quark and an anti-quark producing a top quark together with its antiparticle, the top quark is more often ejected in the direction of the incoming quark (as opposed to the anti-quark). This effect can be most easily studied at the Tevatron, which collided protons with antiprotons, so that the directions of the quark and of the anti-quark could be easily inferred. Indeed, the Tevatron experiments observed the asymmetry at a high confidence level. In the leading-order approximation, the Standard Model predicts zero asymmetry, which boils down to the fact that gluons mediating the production process couple with the same strength to left- and right-handed quark polarizations. Taking into account quantum corrections at 1 loop leads to a small but non-zero asymmetry.
Intriguingly, the asymmetry measured at the Tevatron appeared to be large, of order 20%, significantly more than the value  predicted by the Standard Model loop effects. On top of this, the distribution of the asymmetry as a function of the top-pair invariant mass, and the angular distribution of leptons from top quark decay were strongly deviating from the Standard Model expectation. All in all, the ttbar forward-backward anomaly has been considered, for many years, one of our best hints for physics beyond the Standard Model. The asymmetry could be interpreted, for example, as  being due to new heavy resonances with the quantum numbers of the gluon, which are predicted by models where quarks are composite objects. However, the story has been getting less and less  exciting lately. First of all, no other top quark observables  (like e.g. the total production cross section) were showing any deviations, neither at the Tevatron nor at the LHC. Another worry was that the related top asymmetry was not observed at the LHC. At the same time, the Tevatron numbers have been evolving in a worrisome direction: as the Standard Model computation was being refined the prediction was going up; on the other hand, the experimental value was steadily going down as more data were being added. Today we are close to the point where the Standard Model and experiment finally meet...

The final straw is two recent updates from the Tevatron's D0 experiment. Earlier this year, D0 published the measurement of the forward-backward asymmetry of the direction of the leptons from top quark decays. The top quark sometimes decays leptonically, to a b-quark, a neutrino, and a charged lepton (e+, μ+). In this case, the momentum of the lepton is to some extent correlated with that of the parent top, thus the top quark asymmetry may come together with a lepton asymmetry (although some new physics models affect the top and lepton asymmetries in completely different ways). The previous D0 measurement showed a large, more than 3 sigma, excess in that observable. The new refined analysis using the full dataset reaches a different conclusion: the asymmetry is Al=(4.2 ± 2.4)%, in good agreement with the Standard Model. As can be seen in the picture, none of the CDF and D0 measurements of the lepton asymmetry in several final states shows any anomaly at this point. Then came the D0 update of the regular ttbar forward-backward asymmetry in the semi-leptonic channel. Same story here: the number went down from 20% to Att=(10.6 ± 3.0)%, compared to the Standard Model prediction of 9%. CDF got a slightly larger number, Att=(16.4 ± 4.5)%, but taken together the results are not significantly above the Standard Model prediction of Att=9%.
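To see why the combined results are "not significantly above" the prediction, here is a naive inverse-variance combination of the two numbers quoted above; it ignores correlations and asymmetric errors, so it is only an illustration, not a substitute for the experiments' own combination:

```python
# Naive combination of the quoted D0 and CDF semi-leptonic asymmetries.
# Ignores correlated systematics, so treat it as a rough illustration only.
measurements = [(10.6, 3.0), (16.4, 4.5)]    # (central value %, error %)
sm_prediction = 9.0                          # Standard Model prediction, in %

weights = [1 / err**2 for _, err in measurements]
combined = sum(w * val for (val, _), w in zip(measurements, weights)) / sum(weights)
combined_err = (1 / sum(weights)) ** 0.5

pull = (combined - sm_prediction) / combined_err
print(f"combined Att = ({combined:.1f} +- {combined_err:.1f})%, "
      f"{pull:.1f} sigma above the SM")      # roughly 1.4 sigma
```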

So, all the current data on the top quark, both from the LHC and from the Tevatron,  are perfectly consistent with the Standard Model predictions. There may be new physics somewhere at the weak scale, but we're not gonna pin it down by measuring the top asymmetry. This one is a dead parrot:



Graphics borrowed from this talk

by Jester (noreply@blogger.com) at July 19, 2014 12:14 PM

Jester - Resonaances

Weekend Plot: dream on
To force myself into a more regular blogging lifestyle, I thought it would be good to have a semi-regular column. So I'm kicking off with the Weekend Plot series (any resemblance to Tommaso's Plot of the Week is purely coincidental). You understand the idea: it's the weekend, people relax, drink, enjoy... and for all the nerds there's at least a plot.

For a starter, a plot from the LHC Higgs Cross Section Working Group:

It shows the Higgs boson production cross section in proton-proton collisions as a function of the center-of-mass energy. Notably, the plot extends as far as our imagination can stretch, that is up to a 100 TeV collider. At 100 TeV the cross section is 40 times larger than at the 8 TeV LHC. So far we have produced about 1 million Higgs bosons at the LHC, and we'll probably make 20 times more in this decade. With a 100 TeV collider, 3 inverse attobarns of luminosity, and 4 detectors (dream on) we could produce 10 billion Higgs bosons and really squeeze the shit out of it. For Higgs production in association with a top-antitop quark pair the increase is even more dramatic: between 8 and 100 TeV the rate increases by a factor of 300, and ttH is upgraded to the 3rd largest production mode. Double Higgs production increases by a similar factor and becomes fairly common. So these theoretically interesting production processes will be a piece of cake in the asymptotic future.
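Here is the arithmetic behind the "10 billion" figure, as a rough sketch. The factor of 40, the 3 inverse attobarns, and the 4 detectors come from the text above; the 8 TeV total cross section of about 22 pb is my own assumed round number, so treat the output as order-of-magnitude only:

```python
# Back-of-the-envelope Higgs yield at a 100 TeV collider.
# sigma_8tev_pb ~ 22 pb is an assumed round number for the total production
# cross section at 8 TeV; the factor 40 and 3/ab come from the text above.
sigma_8tev_pb = 22.0
sigma_100tev_pb = 40 * sigma_8tev_pb          # ~ 880 pb

luminosity_per_detector = 3e6                 # 3 inverse attobarns = 3e6 pb^-1
n_detectors = 4

n_higgs = sigma_100tev_pb * luminosity_per_detector * n_detectors
print(f"~{n_higgs:.1e} Higgs bosons")         # ~ 1e10, i.e. about 10 billion
```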

Wouldn't it be good?

by Jester (noreply@blogger.com) at July 19, 2014 12:13 PM

Jester - Resonaances

Weekend Plot: BaBar vs Dark Force
BaBar was an experiment studying 10 GeV electron-positron collisions. The collider is long gone, but interesting results keep appearing from time to time.  Obviously, this is not a place to discover new heavy particles. However, due to the large luminosity and clean experimental environment,  BaBar is well equipped to look for light and very weakly coupled particles that can easily escape detection in bigger but dirtier machines like the LHC. Today's weekend plot is the new BaBar limits on dark photons:

The dark photon is a hypothetical spin-1 boson that couples to other particles with a strength proportional to their electric charges. Compared to the ordinary photon, the dark one is assumed to have a non-zero mass mA' and a coupling strength suppressed by the factor ε. If ε is small enough the dark photon can escape detection even if mA' is very small, in the MeV or GeV range. The model was conceived long ago, but in the previous decade it gained wider popularity as the leading explanation of the PAMELA anomaly. Now, as PAMELA is getting older, she is no longer considered convincing evidence of new physics. But the dark photon model remains an important benchmark - a sort of spherical-cow model for light hidden sectors. Indeed, in the simplest realization, the model is fully described by just two parameters: mA' and ε, which makes it easy to present and compare the results of different searches.

In electron-positron collisions one can produce a dark photon in association with an ordinary photon, in analogy to the familiar process of e+e- annihilation into 2 photons. The dark photon then decays to a pair of electrons or muons (or heavier charged particles, if they are kinematically available). Thus, the signature is a spike in the e+e- or μ+μ- invariant mass spectrum of γl+l- events. BaBar performed this search to obtain the world's best limits on dark photons in the mass range 30 MeV - 10 GeV, with the upper limit on ε in the 0.001 ballpark. This does not have direct consequences for the explanation of the PAMELA anomaly, as the model works with a smaller ε too. On the other hand, the new results close in on the parameter space where the minimal dark photon model can explain the muon magnetic moment anomaly (although one should be aware that one can reduce the tension with a trivial modification of the model, by allowing the dark photon to decay into the hidden sector).
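As a reminder of what "a spike in the invariant mass spectrum" means in practice, here is the quantity one histograms for each γl+l- event, in a minimal sketch with made-up four-momenta (nothing here comes from the BaBar analysis):

```python
import math

def invariant_mass(p1, p2):
    """Invariant mass of a lepton pair; each p = (E, px, py, pz) in GeV."""
    E = p1[0] + p2[0]
    px, py, pz = (p1[i] + p2[i] for i in range(1, 4))
    return math.sqrt(max(E**2 - px**2 - py**2 - pz**2, 0.0))

# One toy mu+ mu- pair (invented momenta): a dark photon would show up as an
# excess of events piling up at a single value of this mass.
mu_plus  = (1.2, 0.3, 0.4, 1.05)
mu_minus = (1.0, -0.2, -0.3, 0.90)
print(f"m(l+l-) = {invariant_mass(mu_plus, mu_minus):.3f} GeV")
```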

So, no luck so far, we need to search further. What one should retain is that finding new heavy particles and finding new light weakly interacting particles seem equally probable at this point :)

by Jester (noreply@blogger.com) at July 19, 2014 08:59 AM

July 18, 2014

Christian P. Robert - xi'an's og

Cancun, ISBA 2014 [½ day #2]


Half-day #2 indeed at ISBA 2014, as the Wednesday afternoon kept to the Valencia tradition of free time, and potential cultural excursions, so there were only talks in the morning. And still the core poster session at (late) night. In which my student Kaniav Kamari presented a poster on a current project we are running with Kerrie Mengersen and Judith Rousseau on the replacement of the standard Bayesian testing setting with a mixture representation. Being half-asleep by the time the session started, I did not stay long enough to collect data on the reactions to this proposal, but the paper should be arXived pretty soon. And Kate Lee gave a poster on our importance sampler for evidence approximation in mixtures (soon to be revised!). There was also an interesting poster about reparameterisation towards higher efficiency of MCMC algorithms, intersecting with my long-standing interest in the matter, although I cannot find a mention of it in the abstracts. And I had a nice talk with Eduardo Gutierrez-Pena about inferring on credible intervals through loss functions. There were also a couple of appealing posters on g-priors. Except I was sleepwalking by the time I spotted them… (My conference sleeping pattern does not work that well for ISBA meetings! Thankfully, both next editions will be in Europe.)

Great talk by Steve McEachern that linked to our ABC work on Bayesian model choice with insufficient statistics, arguing towards robustification of Bayesian inference by only using summary statistics. Despite this being “against the hubris of Bayes”… Obviously, the talk just gave a flavour of Steve’s perspective on that topic and I hope I can read more to see how we agree (or not!) on this notion of using insufficient summaries to conduct inference rather than trying to model “the whole world”, given the mistrust we must preserve about models and likelihoods. And another great talk by Ioanna Manolopoulou on another of my pet topics, capture-recapture, although she phrased it as a partly identified model (as in Kline’s talk yesterday). This related with capture-recapture in that when estimating a capture-recapture model with covariates, sampling and inference are biased as well. I appreciated particularly the use of BART to analyse the bias in the modelling. And the talk provided a nice counterpoint to the rather pessimistic approach of Kline’s.

Terrific plenary sessions as well, from Wilke’s spatio-temporal models (in the spirit of his superb book with Noel Cressie) to Igor Prunster’s great entry on Gibbs process priors. With the highly significant conclusion that those processes are best suited for (in the sense that they are only consistent for) discrete support distributions. Alternatives are to be used for continuous support distributions, the special case of a Dirichlet prior constituting a sort of unique counter-example. Quite an inspiring talk (even though I had a few micro-naps throughout it!).

I shared my afternoon free time between discussing the next O’Bayes meeting (2015 is getting very close!) with friends from the Objective Bayes section, getting a quick look at the Museo Maya de Cancún (terrific building!), and getting some work done (thanks to the lack of wireless…)


Filed under: pictures, Running, Statistics, Travel, University life Tagged: ABC, Bayesian tests, beach, Cancún, g-priors, ISBA 2014, Maya, Mexico, mixture estimation, O-Bayes 2015, posters, sunrise, Valencia conferences

by xi'an at July 18, 2014 10:14 PM

Emily Lakdawalla - The Planetary Society Blog

New Horizons to take new photos of Pluto and Charon, beginning optical navigation campaign
Technically, Pluto science observations don't begin for New Horizons until 2015, but the spacecraft will take a series of photos of Pluto and Charon from July 20 to 27 as it begins the first of four optical navigation campaigns.

July 18, 2014 10:06 PM

Geraint Lewis - Cosmic Horizons

Resolving the mass--anisotropy degeneracy of the spherically symmetric Jeans equation
I am exhausted after a month of travel, but am now back in a sunny, but cool, Sydney. It feels especially chilly as part of my trip included Death Valley, where the temperatures were pushing 50 degrees C.

I face a couple of weeks of catch-up, especially with regards to some blog posts on my recent papers. Here, I am going to cheat and present two papers at once. Both papers are by soon-to-be-newly-minted Doctor, Foivos Diakogiannis. I hope you won't mind, as these papers are Part I and II of the same piece of work.

The fact that this work is spread over two papers tells you that it's a long and winding saga, but it's cool stuff as it does something that can really advance science - take an idea from one area and use it somewhere else.

The question the paper looks at sounds, on the face of it, rather simple. Imagine you have a ball of stars, something like this, a globular cluster:
You can see where the stars are. Imagine that you can also measure the speeds of the stars. So, the question is: what is the distribution of mass in this ball of stars? It might sound obvious, because isn't the mass just the stars? Well, you have to be careful, as we mostly see the brightest stars, and the fainter stars are harder to see. Also, there may be dark matter in there.

So, we are faced with a dynamics problem, which means we want to find the forces; the force acting here is, of course, gravity, and so mapping the forces gives you the mass. And forces produce accelerations, so all we need is to measure these and... oh, hang on. The Doppler shift gives us the velocity, not the acceleration, and so we would have to wait (a long time) to measure accelerations (i.e. see the change of velocity over time). As they say in the old country, "Bottom".

And this has dogged astronomy for more than one hundred years. But there are some equations (which I think are lovely, but if you are not a maths fan, they may give you a minor nightmare) called the Jeans Equations. I won't pop them here, as there are lots of bits to them and it would take a blog post to explain them in detail.
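That said, for readers who want to match the notation used in the abstracts below, one standard textbook form of the spherically symmetric Jeans equation looks like this (quoted from memory; the exact conventions in the papers may differ):

```latex
\frac{\mathrm{d}\!\left(\nu\,\sigma_{rr}^{2}\right)}{\mathrm{d}r}
  + \frac{2\,\beta(r)}{r}\,\nu\,\sigma_{rr}^{2}
  = -\,\nu\,\frac{\mathrm{d}\Phi}{\mathrm{d}r},
\qquad
\beta(r) \equiv 1 - \frac{\sigma_{tt}^{2}(r)}{\sigma_{rr}^{2}(r)},
```

where ν(r) is the tracer density, Φ(r) the gravitational potential, and β(r) the anisotropy profile (conventions for how the tangential term is counted vary between authors).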

But there are problems (aren't there always) and that's the assumptions that are made, and the key problem is degeneracies.

Degeneracies are a serious pain in science. Imagine you have measured a value in an experiment, let's say it's the speed of a planet (there will be an error associated with that measurement). Now, you have your mathematical laws that make a prediction for the speed of the planet, but you find that your maths does not give you a single answer, but multiple answers that equally well explain the measurements. What's the right answer? You need some new (or better) observations to "break the degeneracies".

And degeneracies dog dynamical work. There is a traditional approach to modelling the mass distribution through the Jeans equations, where certain assumptions are made, but you are often worried about how justified your assumptions are. While we cannot remove all the degeneracies, we can try and reduce their impact. How? By letting the data point the way.

By this point, you may look a little like this

OK. So, there are parts to the Jeans equations where people traditionally put in functions to describe what something is doing. As an example, we might choose a density that has a mathematical form like
$$\rho(r) = \frac{\rho_0}{(r/r_s)\,(1 + r/r_s)^2}$$
that tells us how the density changes with radius (those in the know will recognise this as the well-known Navarro-Frenk-White profile). Now, what if your density doesn't look like this? Then you are going to get the wrong answers because you assumed it.

So, what you want to do is let the data choose the function for you. But how is this possible? How do you get "data" to pick the mathematical form for something like density? This is where Foivos had incredible insight and called on a completely different topic altogether, namely Computer-Aided Design.

For designing things on a computer, you need curves, curves that you can bend and stretch into a range of arbitrary shapes, and it would be painful to work out the mathematical form of all of the potential curves you need. So, you don't bother. You use extremely flexible curves known as splines. I've always loved splines. They are so simple, but so versatile. You specify some points, and you get a nice smooth curve. I urge you to have a look at them.
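If you want to see how little machinery is involved, here is a minimal sketch using SciPy's B-spline implementation. This is not the authors' code; the knots and coefficients are arbitrary numbers chosen just to produce a smooth curve:

```python
import numpy as np
from scipy.interpolate import BSpline

# A cubic B-spline: pick knots and a handful of coefficients (the "control
# points"), and you get a smooth, very flexible curve.
degree = 3
knots = np.array([0, 0, 0, 0, 1, 2, 3, 4, 4, 4, 4], dtype=float)
coeffs = np.array([1.0, 0.8, 0.5, 0.7, 0.3, 0.2, 0.1])   # arbitrary shape

profile = BSpline(knots, coeffs, degree)

r = np.linspace(0, 4, 9)
print(np.round(profile(r), 3))   # evaluate the curve at a few radii
```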

For this work, we use b-splines and construct the profiles we want from some basic curves. Here's an example from the paper:
We then plug this flexible curve into the mathematics of dynamics. For this work, we test the approach by creating fake data from a model, and then try and recover the model from the data. And it works!
Although it is not that simple. A lot of care and thought has to be taken on just how you construct the spline (this is the focus of the second paper), but that's now been done. We now have the mathematics we need to really crack the dynamics of globular clusters, dwarf galaxies and even our Milky Way.

There's a lot more to write on this, but we'll wait for the results to start flowing. Watch this space!

Well done Foivos! - not only on the papers, but also for finishing his PhD, getting a postdoctoral position at ICRAR, and getting married :)

Resolving the mass--anisotropy degeneracy of the spherically symmetric Jeans equation I: theoretical foundation

A widely employed method for estimating the mass of stellar systems with apparent spherical symmetry is dynamical modelling using the spherically symmetric Jeans equation. Unfortunately this approach suffers from a degeneracy between the assumed mass density and the second order velocity moments. This degeneracy can lead to significantly different predictions for the mass content of the system under investigation, and thus poses a barrier for accurate estimates of the dark matter content of astrophysical systems. In a series of papers we describe an algorithm that removes this degeneracy and allows for unbiased mass estimates of systems of constant or variable mass-to-light ratio. The present contribution sets the theoretical foundation of the method that reconstructs a unique kinematic profile for some assumed free functional form of the mass density. The essence of our method lies in using flexible B-spline functions for the representation of the radial velocity dispersion in the spherically symmetric Jeans equation. We demonstrate our algorithm through an application to synthetic data for the case of an isotropic King model with fixed mass-to-light ratio, recovering excellent fits of theoretical functions to observables and a unique solution. The mass-anisotropy degeneracy is removed to the extent that, for an assumed functional form of the potential and mass density pair (\Phi,\rho), and a given set of line-of-sight velocity dispersion \sigma_{los}^2 observables, we recover a unique profile for \sigma_{rr}^2 and \sigma_{tt}^2. Our algorithm is simple, easy to apply and provides an efficient means to reconstruct the kinematic profile.

and


Resolving the mass--anisotropy degeneracy of the spherically symmetric Jeans equation II: optimum smoothing and model validation

The spherical Jeans equation is widely used to estimate the mass content of stellar systems with apparent spherical symmetry. However, this method suffers from a degeneracy between the assumed mass density and the kinematic anisotropy profile, β(r). In a previous work, we laid the theoretical foundations for an algorithm that combines smoothing B-splines with equations from dynamics to remove this degeneracy. Specifically, our method reconstructs a unique kinematic profile of σ²_rr and σ²_tt for an assumed free functional form of the potential and mass density (Φ,ρ) and given a set of observed line-of-sight velocity dispersion measurements, σ²_los. In Paper I (submitted to MNRAS: MN-14-0101-MJ) we demonstrated the efficiency of our algorithm with a very simple example and we commented on the need for optimum smoothing of the B-spline representation; this is in order to avoid unphysical variational behaviour when we have large uncertainty in our data. In the current contribution we present a process of finding the optimum smoothing for a given data set by using information of the behaviour from known ideal theoretical models. Markov Chain Monte Carlo methods are used to explore the degeneracy in the dynamical modelling process. We validate our model through applications to synthetic data for systems with constant or variable mass-to-light ratio Υ. In all cases we recover excellent fits of theoretical functions to observables and unique solutions. Our algorithm is a robust method for the removal of the mass-anisotropy degeneracy of the spherically symmetric Jeans equation for an assumed functional form of the mass density.

by Cusp (noreply@blogger.com) at July 18, 2014 10:03 PM

Sean Carroll - Preposterous Universe

Galaxies That Are Too Big To Fail, But Fail Anyway

Dark matter exists, but there is still a lot we don’t know about it. Presumably it’s some kind of particle, but we don’t know how massive it is, what forces it interacts with, or how it was produced. On the other hand, there’s actually a lot we do know about the dark matter. We know how much of it there is; we know roughly where it is; we know that it’s “cold,” meaning that the average particle’s velocity is much less than the speed of light; and we know that dark matter particles don’t interact very strongly with each other. Which is quite a bit of knowledge, when you think about it.

Fortunately, astronomers are pushing forward to study how dark matter behaves as it’s scattered through the universe, and the results are interesting. We start with a very basic idea: that dark matter is cold and completely non-interacting, or at least has interactions (the strength with which dark matter particles scatter off of each other) that are too small to make any noticeable difference. This is a well-defined and predictive model: ΛCDM, which includes the cosmological constant (Λ) as well as the cold dark matter (CDM). We can compare astronomical observations to ΛCDM predictions to see if we’re on the right track.

At first blush, we are very much on the right track. Over and over again, new observations come in that match the predictions of ΛCDM. But there are still a few anomalies that bug us, especially on relatively small (galaxy-sized) scales.

One such anomaly is the “too big to fail” problem. The idea here is that we can use ΛCDM to make quantitative predictions concerning how many galaxies there should be with different masses. For example, the Milky Way is quite a big galaxy, and it has smaller satellites like the Magellanic Clouds. In ΛCDM we can predict how many such satellites there should be, and how massive they should be. For a long time we’ve known that the actual number of satellites we observe is quite a bit smaller than the number predicted — that’s the “missing satellites” problem. But this has a possible solution: we only observe satellite galaxies by seeing stars and gas in them, and maybe the halos of dark matter that would ordinarily support such galaxies get stripped of their stars and gas by interacting with the host galaxy. The too big to fail problem tries to sharpen the issue, by pointing out that some of the predicted galaxies are just so massive that there’s no way they could not have visible stars. Or, put another way: the Milky Way does have some satellites, as do other galaxies; but when we examine these smaller galaxies, they seem to have a lot less dark matter than the simulations would predict.

Still, any time you are concentrating on galaxies that are satellites of other galaxies, you rightly worry that complicated interactions between messy atoms and photons are getting in the way of the pristine elegance of the non-interacting dark matter. So we’d like to check that this purported problem exists even out “in the field,” with lonely galaxies far away from big monsters like the Milky Way.

A new paper claims that yes, there is a too-big-to-fail problem even for galaxies in the field.

Is there a “too big to fail” problem in the field?
Emmanouil Papastergis, Riccardo Giovanelli, Martha P. Haynes, Francesco Shankar

We use the Arecibo Legacy Fast ALFA (ALFALFA) 21cm survey to measure the number density of galaxies as a function of their rotational velocity, V_rot,HI (as inferred from the width of their 21cm emission line). Based on the measured velocity function we statistically connect galaxies with their host halos, via abundance matching. In a LCDM cosmology, low-velocity galaxies are expected to be hosted by halos that are significantly more massive than indicated by the measured galactic velocity; allowing lower mass halos to host ALFALFA galaxies would result in a vast overestimate of their number counts. We then seek observational verification of this predicted trend, by analyzing the kinematics of a literature sample of field dwarf galaxies. We find that galaxies with V_rot,HI<25 km/s are kinematically incompatible with their predicted LCDM host halos, in the sense that hosts are too massive to be accommodated within the measured galactic rotation curves. This issue is analogous to the "too big to fail" problem faced by the bright satellites of the Milky Way, but here it concerns extreme dwarf galaxies in the field. Consequently, solutions based on satellite-specific processes are not applicable in this context. Our result confirms the findings of previous studies based on optical survey data, and addresses a number of observational systematics present in these works. Furthermore, we point out the assumptions and uncertainties that could strongly affect our conclusions. We show that the two most important among them, namely baryonic effects on the abundances and rotation curves of halos, do not seem capable of resolving the reported discrepancy.
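
The abundance-matching step mentioned in the abstract can be illustrated with a toy calculation: pair up galaxies and halos so that their cumulative number densities agree. The two "velocity functions" in the sketch below are invented power laws, not the measured ALFALFA function or the ΛCDM prediction, so only the qualitative behaviour matters.

# Toy abundance matching: assign to each galaxy velocity the halo Vmax at which
# the cumulative halo abundance equals the cumulative galaxy abundance.
# Both "velocity functions" are made-up power laws, NOT the paper's data.
import numpy as np

v = np.logspace(1.0, 2.5, 200)               # velocity grid, km/s
n_gal_cum = 0.1 * (v / 100.0) ** -2.0        # fake cumulative galaxy density [Mpc^-3]
n_halo_cum = 0.3 * (v / 100.0) ** -2.7       # fake cumulative halo density   [Mpc^-3]

def matched_vmax(v_rot):
    """Halo Vmax whose abundance matches that of galaxies rotating faster than v_rot."""
    n_target = np.interp(v_rot, v, n_gal_cum)
    # invert the (decreasing) halo function: np.interp needs an increasing x array
    return np.interp(n_target, n_halo_cum[::-1], v[::-1])

for v_rot in (25.0, 50.0, 100.0):
    print(f"Vrot = {v_rot:5.1f} km/s  ->  matched halo Vmax ~ {matched_vmax(v_rot):5.1f} km/s")

With these made-up curves the matched halos always have a higher Vmax than the galaxies' measured rotation velocity, which is the qualitative sense in which "hosts are too massive" in the abstract.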

Here is the money plot from the paper:


The horizontal axis is the maximum circular velocity, basically telling us the mass of the halo; the vertical axis is the observed velocity of hydrogen in the galaxy. The blue line is the prediction from ΛCDM, while the dots are observed galaxies. Now, you might think that the blue line is just a very crappy fit to the data overall. But that’s okay; the points represent upper limits in the horizontal direction, so points that lie below/to the right of the curve are fine. It’s a statistical prediction: ΛCDM is predicting how many galaxies we have at each mass, even if we don’t think we can confidently measure the mass of each individual galaxy. What we see, however, is that there are a bunch of points in the bottom left corner that are above the line. ΛCDM predicts that even the smallest galaxies in this sample should still be relatively massive (have a lot of dark matter), but that’s not what we see.

If it holds up, this result is really intriguing. ΛCDM is a nice, simple starting point for a theory of dark matter, but it’s also kind of boring. From a physicist’s point of view, it would be much more fun if dark matter particles interacted noticeably with each other. We have plenty of ideas, including some of my favorites like dark photons and dark atoms. It is very tempting to think that observed deviations from the predictions of ΛCDM are due to some interesting new physics in the dark sector.

Which is why, of course, we should be especially skeptical. Always train your doubt most strongly on those ideas that you really want to be true. Fortunately there is plenty more to be done in terms of understanding the distribution of galaxies and dark matter, so this is a very solvable problem — and a great opportunity for learning something profound about most of the matter in the universe.

by Sean Carroll at July 18, 2014 06:23 PM

Symmetrybreaking - Fermilab/SLAC

Scientists set aside rivalry to preserve knowledge

Scientists from two experiments have banded together to create a single comprehensive record of their work for scientific posterity.

Imagine Argentina and Germany, the 2014 World Cup finalists, meeting after the final match to write down all of their strategies, secrets and training techniques to give to the world of soccer.

This will never happen in the world of sports, but it just happened in the world of particle physics, where the goal of solving the puzzles of the universe belongs to all.

Two independent research teams from opposite sides of the Pacific Ocean that have been in friendly competition to discover why there is more matter than antimatter in the universe have just released a joint scientific memoir, The Physics of the B Factories.

The 900-page, three-inch-thick tome documents the experiments—BaBar, at the Department of Energy’s SLAC National Accelerator Laboratory in California, and Belle, at KEK in Tsukuba, Japan—as though they were the subject of a paper for a journal.

The effort took six years and involved thousands of scientists from all over the world.

“Producing something like this is a massive undertaking but brings a lot of value to the community,” says Tim Nelson, a physicist at SLAC who was not involved in either experiment. “It’s a thorough summary of the B-factory projects, their history and their physics results. But more than that, it is an encyclopedia of elegant techniques in reconstruction and data analysis that are broadly applicable in high energy physics. It makes an excellent reference from which nearly any student can learn something valuable.”

BaBar and Belle were built to find the same thing: CP violation, a difference in the way matter and antimatter behave that contributes to the preponderance of matter in the universe. And they went about their task in essentially the same way: They collided electrons and their antimatter opposites, positrons, to create pairs of bottom and anti-bottom quarks. So many pairs, in fact, that the experiments became known as B factories—thus, the book title.

Both experiments were highly successful in their search, though what they found can’t account for the entire discrepancy. The experiments also discovered several new particles and studied rare decays.

In the process of finding CP violation they verified a theoretical model, called the CKM matrix, which describes certain types of particle decays. In 2008, Japanese theorists Makoto Kobayashi and Toshihide Maskawa—the “K” and the “M” of CKM—shared the Nobel Prize for their thus-verified model. The two physicists sent BaBar and Belle a thank-you note.

Meanwhile, Francois Le Diberder, the BaBar spokesperson at the time, had an idea.

“It’s Francois’ fault, really,” says Adrian Bevan, a physicist at Queen Mary University of London and long-time member of the BaBar collaboration. “In 2008 he said, ‘We should document the great work in the collaboration.’ The idea just resonated with a few of us. And then Francois said, ‘Let’s invite KEK, as it would be much better to document both experiments.’“

Bevan and a few like-minded BaBar members, such as Soeren Prell from Iowa State University, contacted their Belle counterparts and found them receptive to the idea. They recruited more than 170 physicists to help and spent six years planning, writing, editing and revising. Almost 2000 names appear in the list of contributors; five people, including Bevan, served as editors. Nobel laureates Kobayashi and Maskawa provided the foreword.

The book has many uses, according to Bevan: It’s a guide to analyzing Belle and BaBar data; a reference for other experiments; a teaching tool. Above all, it’s a way to keep the data relevant. Instead of becoming like obsolete alphabets for dead languages, as has happened with many old experiments, BaBar and Belle data can continue to be used for new discoveries. “This, along with long term data access projects, changes the game for archiving data,” Bevan says.

In what may or may not have been a coincidence, the completion of the manuscript coincided with the 50th anniversary of the discovery of CP violation. At a workshop organized to commemorate the anniversary, Bevan and his co-editors presented three specially bound copies of the book to three giants of the field: Nobel laureate James Cronin (pictured above, accepting his copy), one of the physicists who made that first discovery 50 years before, and old friends Kobayashi, who accepted in person, and Maskawa, who sent a representative.

Bevan jokes that Le Diberder cost them six years of hard labor, but the instigator of the project is unrepentant.

“Indeed, the idea is my fault,” Le Diberder, who is now at France’s Linear Accelerator Laboratory, says. “But the project itself got started thanks to Adrian and Soeren, who stepped forward to steward the ship. Once they gathered their impressive team they no longer needed my help except for behind-the-scenes tasks. They had the project well in hand.”

Bevan isn’t sure about the “well in hand” characterization. “It took a few years longer than we thought it would because we didn’t realize the scope of the thing,” Bevan says. “But the end result is spectacular.

“It’s War and Peace for physicists.”

 


by Lori Ann White at July 18, 2014 03:53 PM

CERN Bulletin

New procedure for declaring changes in family and personal situation

On taking up their appointment, Members of the Personnel (employed and associated) are required to provide official documents as evidence of their family situation. Any subsequent change in their personal situation, or that of their family members, must be declared in writing to the Organization within 30 calendar days.

 

As part of their efforts to simplify procedures, the Administrative Processes Section (DG-RPC-PA) and the HR and GS Departments have produced a new EDH form entitled “Change of family and personal situation", which must be used to declare the following changes:

  • birth or adoption of a child;
  • marriage;
  • divorce;
  • entry into a civil partnership officially registered in a Member State;
  • dissolution of such a partnership;
  • change of name;
  • change of nationality or new nationality.
     

Members of the Personnel must create the form themselves and provide the information required for the type of declaration concerned, indicating, if applicable, any benefit from an external source that they or their family members are entitled to claim that is of the same nature as a benefit provided for in the Organization’s Staff Regulations. They must also attach a scan of the original certificate corresponding to their declaration.

The form is sent automatically to the relevant Departmental Secretariat, or to the Users Office in the case of Users, Cooperation Associates and Scientific Associates, and is then handled by the services within the HR Department. The Member of the Personnel receives an EDH notification when the change in personal status has been recorded.

The information recorded remains confidential and can be accessed only by the authorised administrative services.

N.B.: If allowances and indemnities paid regularly are affected, the next payslip constitutes a contract amendment. In accordance with Article R II 1.15 of the Staff Regulations, Members of the Personnel are deemed to have accepted a contract amendment if they have not informed the Organization to the contrary within 60 calendar days of receiving it.

Further information can be found on the “Change of family situation" page of the Admin e-guide: https://admin-eguide.web.cern.ch/admin-eguide/famille/proc_change_famille.asp

Any questions about the procedure should be addressed to your Departmental Secretariat or the Users Office.

If you encounter technical difficulties with this new EDH document, please e-mail service-desk@cern.ch, explaining the problem.

The Administrative Processes Section (DG-RPC-PA)

July 18, 2014 03:07 PM

arXiv blog

The Growing Threat Of Network-Based Steganography

Hiding covert messages in plain sight is becoming an increasingly popular form of cyber attack. And security researchers are struggling to catch up.

Back in 2011, researchers at the Laboratory of Cryptography and System Security in Budapest, Hungary, discovered an unusual form of malicious software. This malware embeds itself in Microsoft Windows machines, gathers information particularly about industrial control systems and then sends it over the Internet to its command and control centre. After 36 days, the malware automatically removes itself, making it particularly hard to find.

July 18, 2014 02:36 PM

ZapperZ - Physics and Physicists

The Physics Of A Jumping Articulated Toy
Sometimes, it is just a pleasure to read about something that isn't too deep, and it is just fun!

This paper on EJP (which is available for free) describes the physics of a jumping kangaroo. The toy makes a complete somersault as shown in the photo and in the video.

Abstract: We describe the physics of an articulated toy with an internal source of energy provided by a spiral spring. The toy is a funny low cost kangaroo which jumps and rotates. The study consists of mechanical and thermodynamical analyses that make use of the Newton and centre of mass equations, the rotational equations and the first law of thermodynamics. This amazing toy provides a nice demonstrative example of how new physics insights can be brought about when links with thermodynamics are established in the study of mechanical systems.

The authors may want to impart some deeper physical insight into understanding this, which may be true. But I like to take this just on face value. It is just a fun toy and a fun look at how it does what it does.

Zz.

by ZapperZ (noreply@blogger.com) at July 18, 2014 02:01 PM

astrobites - astro-ph reader's digest

Star Formation on a String

Title: A Thirty Kiloparsec Chain of “Beads on a String” Star Formation Between Two Merging Early Type Galaxies in the Core of a Strong-Lensing Galaxy Cluster
Authors: Grant R. Tremblay, Michael D. Gladders, Stefi A. Baum, Christopher P. O’Dea, Matthew B. Bayliss, Kevin C. Cooke, Håkon Dahle, Timothy A. Davis, Michael Florian, Jane R. Rigby, Keren Sharon, Emmaris Soto, Eva Wuyts
First Author’s Institution: European Southern Observatory, Germany
Paper Status: Accepted for Publication in ApJ Letters


Figure 1. Left: WFC3 image of a galaxy cluster lensing background galaxies. Right: A close up of the cluster, revealed to be two interacting galaxies and a chain of NUV emission indicating star formation.

Take a look at all that gorgeous science in Figure 1! No really, look: that’s a lot of science in one image. Okay, what is it you’re looking at? First, those arcs labeled in the image on the left are galaxies at high redshift being gravitationally lensed by the cluster in the middle (which has the wonderful name SDSS J1531+3414). Very briefly, gravitational lensing is when a massive object (like a galaxy cluster) bends the light of a background object (like these high redshift galaxies), fortuitously focusing the light towards the observer. It’s a chance geometric alignment that lets us learn about distant, high-redshift objects. The lensing was the impetus for these observations, taken by Hubble’s Wide Field Camera 3 (WFC3) in four different filters across the near ultraviolet (NUV, shown in blue), optical (two filters, shown in green and orange), and near infrared (yellow). But what fascinated the authors of this paper is something entirely different happening around that central cluster. The image on the right is a close-up of the cluster with no lensing involved at all. The cluster is actually two elliptical galaxies in the process of merging together, accompanied by a chain of bright NUV emission. NUV emission is associated with ongoing star formation, which is rarely seen in elliptical galaxies (ellipticals are old, well evolved galaxies, which means they’re made mostly of older stellar populations and lack significant star formation; they’re often called “red and dead” for this reason). Star formation is however expected around merging galaxies (even ellipticals) as gas gets stirred up, and the striking “beads on a string” morphology is often seen in spiral galaxy arms and arms stretching between interacting galaxies. But the “beads” shape is hard to explain here, mostly because of the orientation (look how it’s not actually between the galaxies, but off to one side) and the fact that this is possibly the first time it has been observed around giant elliptical galaxies.


Figure 2. Left: SDSS spectrum of the central galaxies, where all spectral features appear at uniform positions–no differential redshift is evident. Right: Follow-up observations of the central galaxies (one in red and one in green) with NOT. Here a small offset is seen, on the order of ~280 km/sec, which is small given the overall redshift of z=0.335.

So what’s going on in this cluster? First, the authors made sure the central two galaxies are actually interacting, and that the star formation is also related. It’s always important to remember that just because two objects appear close together in an image doesn’t necessarily mean they’re close enough to interact. Space is three dimensional, while images show us only 2D representations. Luckily, these targets all have spectroscopy from the Sloan Digital Sky Survey (SDSS), which measures a few different absorption lines and gives the same redshift for all of the components: the two interacting galaxies, and the star formation regions (see Figure 2). Furthermore the authors have follow-up spectroscopy from the Nordic Optical Telescope (NOT), which confirms the SDSS results. So they’re definitely all part of one big, interacting system.
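
For scale, the ~280 km/s offset mentioned in the Figure 2 caption corresponds to only a tiny difference in redshift. Using the standard relation Δv ≈ c Δz / (1 + z) (my own back-of-the-envelope check, with the numbers quoted above):

# Redshift difference implied by a ~280 km/s line-of-sight velocity offset at z = 0.335.
c = 299792.458   # speed of light in km/s
z = 0.335        # systemic redshift of the system
dv = 280.0       # velocity offset between the two nuclei, km/s
dz = dv * (1.0 + z) / c
print(f"A {dv:.0f} km/s offset corresponds to dz ~ {dz:.1e}")   # about 1.2e-3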

Hα (the 3-2 transition of hydrogen) indicates ongoing star formation, so the authors measure the Hα luminosity of the NUV-bright regions to calculate a star formation rate (SFR). Extinction due to dust and various assumptions underlying the calculation mean the exact SFR is difficult to pin down, but should be between ~5-10 solar masses per year. From that number, it’s possible to estimate the molecular gas mass in the regions. This estimate basically says that if you know how fast stars are produced (the SFR), then you know roughly how much fuel is around (fuel being the cold gas). This number turns out to be about 0.5–2.0 × 10^10 solar masses. The authors tried to verify this observationally by observing the CO(1-0) transition (a tracer of cold molecular gas), but received a null detection. That’s okay, as this still puts an upper limit on the gas of 1.0 × 10^10 solar masses, which is both within their uncertainties and a reasonable amount of cold gas, given the mass of the central galaxy pair (but for more information on gas in elliptical galaxies, see this astrobite!).
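
As a rough consistency check on those numbers (my own back-of-the-envelope, not the paper's actual calculation), one common calibration, Kennicutt (1998), converts Hα luminosity into an SFR, and multiplying the SFR by an assumed molecular-gas depletion time of roughly 1–2 Gyr turns it into a gas mass:

# Back-of-the-envelope version of the gas-mass estimate quoted in the text.
# Assumptions: the quoted SFR range of 5-10 Msun/yr and a typical depletion
# time of 1-2 Gyr; the paper's exact conversions may differ.
sfr_range = (5.0, 10.0)         # solar masses per year (from the text)
t_dep_range = (1.0e9, 2.0e9)    # assumed gas depletion times, in years

m_gas_low = sfr_range[0] * t_dep_range[0]
m_gas_high = sfr_range[1] * t_dep_range[1]
print(f"M_gas ~ {m_gas_low:.1e} to {m_gas_high:.1e} solar masses")   # ~0.5-2 x 10^10

# For reference, Kennicutt (1998): SFR [Msun/yr] = 7.9e-42 * L(Halpha) [erg/s],
# so 5-10 Msun/yr corresponds to L(Halpha) of roughly 0.6-1.3 x 10^42 erg/s.
for sfr in sfr_range:
    print(f"SFR = {sfr:4.1f} Msun/yr  ->  L(Halpha) ~ {sfr / 7.9e-42:.2e} erg/s")

Reassuringly, the simple product lands right in the 0.5–2.0 × 10^10 solar-mass range quoted above.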

The point is that there’s definitely a lot of star formation happening around these galaxies, and while star formation is expected around mergers, it’s not clear that this particular pattern of star formation has ever been seen around giant ellipticals before. The authors suggest that’s because this is a short-lived phenomenon, and encourage more observations. Specifically, they point out that Gemini GMOS observations already taken will answer questions about gas kinematics, that ALMA has the resolution to ascertain SFRs and molecular gas masses for the individual “beads” of star formation, and that Chandra could answer questions about why the star formation is happening off-center from the interacting galaxies. If the gas is condensing because it’s been shocked, that will show up in X-ray observations, but it would be expected between the galaxies, not off to the side as in this case. Maybe some viscous drag is causing a separation between the gas and the stars? There’s clearly a lot to learn from this system, so keep an eye out for follow-up work.

by Korey Haynes at July 18, 2014 01:40 PM

CERN Bulletin

Meeting staff representatives of the European Agencies
The AASC (Assembly of Agency Staff Committee) held its 27th Meeting of the specialized European Agencies on 26 and 27 May on the premises of the OHIM (Office for Harmonization in the Internal Market) in Alicante, Spain. Two representatives of the CERN Staff Association, in charge of External Relations, attended as observers.

This participation is a useful complement to the regular contacts we have with FICSA (Federation of International Civil Servants' Associations), which groups staff associations of the UN Agencies, and the annual CSAIO conferences (Conference of Staff Associations of International Organizations), where each Autumn representatives of international organizations based in Europe meet to discuss themes of common interest to better promote and defend the rights of international civil servants. All these meetings allow us to remain informed on items that are directly or indirectly related to the employment and social conditions of our colleagues in other international and European organizations.

The AASC includes representatives of 35 specialized Agencies of the European Union. Meetings such as the one in Alicante provide an opportunity to discuss the difficulties that the staff of these Agencies encounter in different areas, such as health insurance, recognition of the activities of the staff representatives in each Agency, attacks by Member States on social and employment conditions, or the lack of coherence between the different Agencies. These meetings are also an ideal forum for the exchange of information, and an opportunity to define common positions and coordinate joint actions. The need to encourage the activities of staff representation in order to create an effective counterweight to the European administration was stressed.

In Alicante, on the morning of the first day, the discussions concerned the recent decisions of the European Commission in Brussels on the reform of the Statute of the European public service and its impact on the Agencies. Indeed, its implementation is complicated, if not impossible, and often made by analogy with the one in the Commission, for example for social conditions or pensions. During the afternoon session a consultant in communication spoke on the theme "facilitating communication in large groups." Based on this presentation, the next day the discussions took place in several small parallel workshops, each on a particular theme, such as recruitment, contract renewals, flexitime, and the importance of staff representation and its means of action within the Agencies. Many interesting ideas came out of these workshops, whose main purpose was to stimulate the active participation of people present at gatherings with a large number of participants, such as in Alicante, where we were fifty staff representatives.

An essential point to take away from this meeting is that the implementation of the recent reform of the Statute of the European civil service is rather unfavourable for the staff, especially for new recruits. This fact gives rise to reactions from staff representatives in each Agency. To improve the social dialogue, the AASC Secretariat was given the task of preparing a resolution calling on the Commission and the Administrations of the Agencies to involve staff representatives from an early stage in the drafting of proposals for changes to employment conditions. This resolution was sent to E.U. leaders in early July. A follow-up will take place at the next AASC Meeting in the Autumn.

by Staff Association at July 18, 2014 09:55 AM

CERN Bulletin

Tommaso Dorigo - Scientificblogging

The SUSY-Inspiring LHC WW Excess May Be Due To Theoretical Errors
A timely article discussing the hot topic of the production rate of pairs of vector bosons in proton-proton collisions appeared on the Cornell arXiv yesterday. As you might know, both the ATLAS and CMS collaborations, who study the 8-TeV (and soon 13-TeV) proton-proton collisions delivered by the Large Hadron Collider at CERN, have recently reported an excess of events with two W bosons. The matter is discussed in a recent article here.


by Tommaso Dorigo at July 18, 2014 08:57 AM

Peter Coles - In the Dark

Sleep well last night?

We had a spectacular thunderstorm over Brighton last night. I do love a good thunderstorm. Although I enjoyed the show, I didn’t get much sleep. Judging by the following graphic from BBC Weather, I’m not the only one…

Lightning


by telescoper at July 18, 2014 08:30 AM

John Baez - Azimuth

The Harmonograph

Anita Chowdry is an artist based in London. While many are exploring electronic media and computers, she’s going in the opposite direction, exploring craftsmanship and the hands-on manipulation of matter. I find this exciting, perhaps because I spend most of my days working on my laptop, becoming starved for richer sensations. She writes:

Today, saturated as we are with the ephemeral intangibility of virtual objects and digital functions, there is a resurgence of interest in the ingenious mechanical contraptions of pre-digital eras, and in the processes of handcraftsmanship and engagement with materials. The solid corporality of analogue machines, the perceivable workings of their kinetic energy, and their direct invitation to experience their science through hands-on interaction brings us back in touch with our humanity.

The ‘steampunk’ movement is one way people are expressing this renewed interest, but Anita Chowdry goes a bit deeper than some of that. For starters, she’s studied all sorts of delightful old-fashioned crafts, like silverpoint, a style of drawing used before the invention of graphite pencils. The tool is just a piece of silver wire mounted on a writing implement; a bit of silver rubs off and creates a gray line. The effect is very subtle:

In January she went to Cairo and worked with a master calligrapher, Ahmed Fares, to recreate the title page of a 16th-century copy of Avicenna’s Canon of Medicine, or al-Qanun fi’l Tibb:

This required making gold ink:

The secret is actually pure hard work; rubbing it by hand with honey for hours on end to break up the particles of gold into the finest powder, and then washing it thoroughly in distilled water to remove all impurities.

The results:

I met her in Oxford this March, and we visited the Museum of the History of Science together. This was a perfect place, because it’s right next to the famous Bodleian, and it’s full of astrolabes, sextants, ancient slide rules and the like…

… and one of Anita Chowdry’s new projects involves another piece of romantic old technology: the harmonograph!

The harmonograph

A harmonograph is a mechanical apparatus that uses pendulums to draw a geometric image. The simplest so-called ‘lateral’ or ‘rectilinear’ harmonograph uses two pendulums: one moves a pen back and forth along one axis, while the other moves the drawing surface back and forth along a perpendicular axis. By varying their amplitudes, frequencies and the phase difference, we can get quite a number of different patterns. In the linear approximation where the pendulums don’t swing too high, we get Lissajous curves:

x(t) = A \sin(a t + \delta)

y(t) = B \sin(b t)

For example, when the amplitudes A and B are both 1, the frequencies are a = 3 and b = 4, and the phase difference \delta is \pi/2, we get this:
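
In case you’d like to draw that curve yourself, here is a minimal matplotlib sketch of this particular Lissajous figure (A = B = 1, a = 3, b = 4, δ = π/2). It reproduces only the idealized mathematical curve, of course, not the output of a physical harmonograph.

# Plot the Lissajous curve x(t) = A sin(a t + delta), y(t) = B sin(b t)
# for A = B = 1, a = 3, b = 4 and delta = pi/2.
import numpy as np
import matplotlib.pyplot as plt

A, B = 1.0, 1.0
a, b = 3.0, 4.0
delta = np.pi / 2

t = np.linspace(0.0, 2.0 * np.pi, 2000)   # one full period, since a and b are integers
x = A * np.sin(a * t + delta)
y = B * np.sin(b * t)

plt.figure(figsize=(4, 4))
plt.plot(x, y, lw=0.8)
plt.axis("equal")
plt.axis("off")
plt.title("Lissajous curve with a : b = 3 : 4")
plt.show()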

Harmonographs don’t serve any concrete practical purpose that I know; they’re a diversion, an educational device, or a form of art for art’s sake. They go back to the mid-1840s.

It’s not clear who invented the harmonograph. People often credit Hugh Blackburn, a professor of mathematics at the University of Glasgow who was a friend of the famous physicist Kelvin. He is indeed known for studying a pendulum hanging on a V-shaped string, in 1844. This is now called the Blackburn pendulum. But it’s not used in any harmonograph I know about.

On the other hand, Anita Chowdry has a book called The Harmonograph. Illustrated by Designs actually Drawn by the Machine, written in 1893 by one H. Irwine Whitty. This book says the harmonograph

was first constructed by Mr. Tisley, of the firm Tisley and Spiller, the well-known opticians…

So, it remains mysterious.

The harmonograph peaked in popularity in the 1890s. I have no idea how popular it ever was; it seems a rather cerebral form of entertainment. As the figures from Whitty’s book show, it was sometimes used to illustrate the Pythagorean theory of chords as frequency ratios. Indeed, this explains the name ‘harmonograph’:

At left the frequencies are exactly a = 3, b = 2, just as we’d have in two notes making a major fifth. Three choices of phase difference are shown. In the pictures at right, actually drawn by the machine, the frequencies aren’t perfectly tuned, so we get more complicated Lissajous curves.

How big was the harmonograph craze, and how long did it last? It’s hard for me to tell, but this book published in 1918 gives some clues:

• Archibald Williams, Things to Make: Home-made harmonographs (part 1, part 2, part 3), Thomas Nelson and Sons, Ltd., 1918.

It discusses the lateral harmonograph. Then it treats Joseph Goold’s ‘twin elliptic pendulum harmonograph’, which has a pendulum free to swing in all directions connected to a pen, and a second pendulum free to swing in all directions affecting the motion of the paper. It also shows a miniature version of the same thing, and how to build it yourself. It explains the connection with harmony theory. And it explains the value of the harmonograph:

Value of the harmonograph

A small portable harmonograph will be found to be a good means of entertaining friends at home or elsewhere. The gradual growth of the figure, as the card moves to and fro under the pen, will arouse the interest of the least scientifically inclined person; in fact, the trouble is rather to persuade spectators that they have had enough than to attract their attention. The cards on which designs have been drawn are in great request, so that the pleasure of the entertainment does not end with the mere exhibition. An album filled with picked designs, showing different harmonies and executed in inks of various colours, is a formidable rival to the choicest results of the amateur photographer’s skill.

“In great request”—this makes it sound like harmonographs were all the rage! On the other hand, I smell a whiff of desperate salesmanship, and he begins the chapter by saying:

Have you ever heard of the harmonograph? If not, or if at the most you have very hazy ideas as to what it is, let me explain.

So even at its height of popularity, I doubt most people knew what a harmonograph was. And as time passed, more peppy diversions came along and pushed it aside. The phonograph, for example, began to catch on in the 1890s. But the harmonograph never completely disappeared. If you look on YouTube, you’ll find quite a number.

The harmonograph project

Anita Chowdry got an M.A. from Central Saint Martin’s college of Art and Design. That’s located near St. Pancras Station in London.

She built a harmonograph as part of her course work, and it worked well, but she wanted to make a more elegant, polished version. Influenced by the Victorian engineering of St. Pancras Station, she decided that “steel would be the material of choice.”

So, starting in 2013, she began designing a steel harmonograph with the help of her tutor Eleanor Crook and the engineering metalwork technician Ricky Lee Brawn.

Artist and technician David Stewart helped her make the steel parts. Learning to work with steel was a key part of this art project:

The first stage of making the steel harmonograph was to cut out and prepare all the structural components. In a sense, the process is a bit like tailoring—you measure and cut out all the pieces, and then put them together in a logical order, investing each stage with as much care and craftsmanship as you can muster. For the flat steel components I had medium-density fibreboard forms cut on the college numerical control machine, which David Stewart used as patterns to plasma-cut the shapes out of mild carbon-steel. We had a total of fifteen flat pieces for the basal structure, which were to be welded to a large central cylinder.

My job was to ‘finish’ the plasma-cut pieces: I refined the curves with an angle-grinder, drilled the holes that created the delicate openwork patterns, sanded everything to smooth the edges, then repeatedly heated and quenched each piece at the forge to darken and strengthen them. When Dave first placed the angle-grinder in my hands I was terrified—the sheer speed and power and noise of the monstrous thing connecting with the steel with a shower of sparks had a brutality and violence about it that I had never before experienced. But once I got used to the heightened energy of the process it became utterly enthralling. The grinder began to feel as fluent and expressive as a brush, and the steel felt responsive and alive. Like all metalwork processes, it demands a total, immersive concentration—you can get lost in it for hours!

Ricky Lee Brawn worked with her to make the brass parts:

Below you can see the brass piece he’s making, called a finial, among the steel legs of the partially finished harmonograph:

There are three legs, each with three feet.

The groups of three look right, because I conceived the entire structure on the basis of the three pendulums working at angles of 60 degrees in relation to one another (forming an equilateral triangle)—so the magic number is three and its multiples.

With three pendulums you can generate more complicated generalizations of Lissajous curves. In the language of music, three frequencies gives you a triplet!

Things become still more complex if we leave the linear regime, where motions are described by sines and cosines. I don’t understand Anita Chowdry’s harmonograph well enough to know if nonlinearity plays a crucial role. But it gives patterns like these:

Here is the completed harmonograph, called the ‘Iron Genie’, in action in the crypt of the St. Pancras Church:

And now, I’m happy to say, it’s on display at the Museum of the History of Science, where we met in Oxford. If you’re in the area, give it a look! She’s giving free public talks about it at 3 pm on

• Saturday July 19th
• Saturday August 16th
• Saturday September 20th

in 2014. And if you can’t visit Oxford, you can still visit her blog!

The mathematics

I think the mathematics of harmonographs deserves more thought. The basic math required for this was developed by the Irish mathematician William Rowan Hamilton around 1834. Hamilton was just the sort of character who would have enjoyed the harmonograph. But other crucial ideas were contributed by Jacobi, Poincaré and many others.

In a simple ‘lateral’ device, the position and velocity of the machine take 4 numbers to describe: the two pendulums’ angles and angular velocities. In the language of classical mechanics, the space of states of the harmonograph is a 4-dimensional symplectic manifold, say X. Ignoring friction, its motion is described by Hamilton’s equations. These equations can give behavior ranging from completely integrable (as orderly as possible) to chaotic.

For small displacements of our lateral harmonograph about the state of rest, I believe its behavior will be completely integrable. If so, for any initial conditions, its motion will trace out a spiral on some 2-dimensional torus T sitting inside X. The position of pen on paper provides a map

f : X \to \mathbb{R}^2

and so the spiral is mapped to some curve on the paper!

We can ask what sort of curves can arise. Lissajous curves are the simplest, but I don’t know what to say in general. We might be able to understand their qualitative features without actually solving Hamilton’s equations. For example, there are two points where the curves seem to ‘focus’ here:

That’s the kind of thing mathematical physicists can try to understand, a bit like caustics in optics.

If we have a ‘twin elliptic pendulum harmonograph’, the state space X becomes 8-dimensional, and T becomes 4-dimensional if the system is completely integrable. I don’t know the dimension of the state space for Anita Chowdry’s harmonograph, because I don’t know if her 3 pendulums can swing in just one direction each, or two!

But the big question is whether a given harmonograph is completely integrable… in which case the story I’m telling goes through… or whether it’s chaotic, in which case we should expect it to make very irregular pictures. A double pendulum—that is, a pendulum hanging on another pendulum—will be chaotic if it starts far enough from its point of rest.
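
For the simple lateral device you can poke at this question numerically. Below is a rough sketch (my own toy model, not a simulation of Anita Chowdry’s three-pendulum machine) that treats the lateral harmonograph as two decoupled, lightly damped plane pendulums with the full sin θ nonlinearity kept: one pendulum supplies the pen’s x coordinate, the other the paper’s y coordinate, and the damping makes the figure spiral inward the way real harmonograph drawings do. Because the two pendulums are decoupled here, the motion stays regular even at large amplitudes; coupling them is where chaos could enter.

# Toy lateral harmonograph: two independent, lightly damped plane pendulums,
# integrated with the full sin(theta) nonlinearity.  All parameters are made up.
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

g = 9.81
L1, L2 = 1.0, 0.5625   # lengths chosen so the small-angle frequencies are roughly 3:4
damping = 0.05         # damping coefficient, per second

def pendulum(t, state, length):
    theta, omega = state
    return [omega, -(g / length) * np.sin(theta) - damping * omega]

t_span = (0.0, 120.0)
t_eval = np.linspace(*t_span, 20000)
sol1 = solve_ivp(pendulum, t_span, [1.2, 0.0], args=(L1,), t_eval=t_eval, rtol=1e-8)
sol2 = solve_ivp(pendulum, t_span, [1.2, 0.6], args=(L2,), t_eval=t_eval, rtol=1e-8)

# One pendulum moves the pen along x, the other moves the paper along y.
x = np.sin(sol1.y[0])
y = np.sin(sol2.y[0])

plt.figure(figsize=(4, 4))
plt.plot(x, y, lw=0.3)
plt.axis("equal")
plt.axis("off")
plt.show()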

Here is a chaotic ‘double compound pendulum’, meaning that it’s made of two rods:

Acknowledgements

Almost all the pictures here were taken by Anita Chowdry, and I thank her for letting me use them. The photo of her harmonograph in the Museum of the History of Science was taken by Keiko Ikeuchi, and the copyright for this belongs to the Museum of the History of Science, Oxford. The video was made by Josh Jones. The image of a Lissajous curve was made by Alessio Damato and put on Wikicommons with a Creative Commons Attribution-Share Alike license. The double compound pendulum was made by Catslash and put on Wikicommons in the public domain.


by John Baez at July 18, 2014 01:00 AM

July 17, 2014

The Great Beyond - Nature blog

Mosquitoes transmit chikungunya in continental US

Posted on behalf of Mark Zastrow.

Two people have acquired the mosquito-borne chikungunya virus in the continental United States, the state of Florida’s Department of Health announced today. The cases, one in Miami-Dade County and another in Palm Beach County, confirm that the virus has infected US mosquitoes.

Chikungunya is an illness marked mainly by discomfort: a high fever, rashes, and severe joint, back and muscle pain. It is rarely fatal, and most recover within days or weeks. However, joint pain can sometimes persist for months. Chikungunya cannot be transmitted from person to person; it can be contracted only from a mosquito.

The United States is only the latest destination for the globetrotting virus. First described in the 1950s in East Africa, it has spread throughout central and southern Africa, India and Southeast Asia, generally through the mosquito Aedes aegypti. But a mutation that is suspected to have occurred in a 2005–06 outbreak on Réunion Island appears to have allowed it to infect Aedes albopictus, also known as the Asian tiger mosquito. This enabled the virus to spread as far north as Italy in 2007.

Previously, the only reported chikungunya cases in the United States had been in people returning from abroad — mostly from the 23 countries in the Caribbean, South America and Central America, where the virus has established itself since reaching the Western Hemisphere in December. The number of cases imported to the United States so far this year has spiked to 243 from an average of 28 annually since 2006.

The US Centers for Disease Control and Prevention (CDC) said in a statement that it expects chikungunya to continue to crop up, with only sporadic cases of local transmission initiated by travellers returning to the United States from abroad. But the CDC said as imported cases rise, so does the likelihood of local outbreaks. They could appear anywhere the Asian tiger mosquito does — as far west as Texas, and, in the north, from Minnesota to New Jersey.

by Matthew Crenson at July 17, 2014 10:08 PM

ZapperZ - Physics and Physicists

Three US Dark Matter Projects Get Funding Approval
The US Dept. of Energy and National Science Foundation have jointly approved the funding of three dark matter search projects. These projects were selected based on the recommendation of the P5 panel, which released its report earlier this year.

Two key US federal funding agencies – the Department of Energy's Office of High Energy Physics and the National Science Foundation's Physics Division – have revealed the three "second generation" direct-detection dark-matter experiments that they will support. The agencies' programme will include the Super Cryogenic Dark Matter Search-SNOLAB (SuperCDMS), the LUX-ZEPLIN (LZ) experiment and the next iteration of the Axion Dark Matter eXperiment (ADMX-Gen2). 

Certainly, with High Energy Physics funding in the US being squeezed and shrinking each year, this is the best possible funding outcome for dark matter search experiments.

Zz.

by ZapperZ (noreply@blogger.com) at July 17, 2014 08:29 PM

Clifford V. Johnson - Asymptotia

Yes Amazon, I am Interested…
... in my own book! This is one of the more amusing emails I've received in recent days. Apparently there is no algorithm that checks you are not recommending to an author a copy of their own book. And no, I've no idea why this version is so expensive. Did they print this one with gold leaf illumination on the first letter of each chapter? -cvj

by Clifford at July 17, 2014 08:02 PM

The Great Beyond - Nature blog

Google maps methane leaks

Posted on behalf of Mark Zastrow.

Google’s fleet of city-mapping cars are now working to measure urban natural gas leaks.

The technology giant’s collaboration with the Environmental Defense Fund (EDF), announced on 16 July, equips Google’s Street View cars with sensors to detect methane leaking from ageing city pipes, through city streets and into the atmosphere. The sensors were developed by researchers at Colorado State University in Fort Collins.

The project has released online methane maps for Boston, Massachusetts; Staten Island in New York; and Indianapolis, Indiana. The team found thousands of leaks in Boston and Staten Island at a rate of roughly one for every mile (1.6 kilometres) of road driven, whereas Indianapolis’s roads are leaking only once every 200 miles (322 kilometres) — a sign of newer infrastructure.

These leaks are too small to be a health or explosion risk, but they are also a growing climate concern; methane is 86 times more powerful as a greenhouse gas than carbon dioxide, over a 20-year period, according to the Intergovernmental Panel on Climate Change. Massachusetts passed legislation in June that requires utilities to speed up their pipe replacement, and California is considering following suit.

“We think [this technique] will offer a new way for utilities and regulators to evaluate their ongoing leak detection and repair programs,” says Mark Brownstein, the leader of the EDF’s natural gas efforts. The group says that utilities could reduce their emissions 2–3 times faster by prioritizing those larger leaks.

But just how utilities might actually use the data in practice remains to be seen, says Susan Fleck, vice-president of pipeline safety for National Grid, a London-based private utility that is also collaborating on the mapping project. “You know, I just have to be really up front about this. This is a pilot programme, right? So it’s kind of hard to say exactly how this is going to work out,” she told reporters.

Nathan Phillips, an ecologist at Boston University in Massachusetts, is sceptical that the data will help utility operators identify specific leaks. Phillips’ team pioneered car-borne urban methane mapping in Boston and has a separate project funded by the EDF, but is not involved with the Google effort. He points out that utilities already know from their own records where the ageing and leak-prone cast iron pipes are. “It doesn’t take a lot of guesswork to say, ‘There’s a 120 year old pipe running under this street, it’s probably a leaky street.’”

What excites him are the project’s plans to go nationwide and the opportunity to compare data across cities. Already, he says he is struck by how few leaks Indianapolis has compared to Boston and Staten Island, a sign of its newer infrastructure. He wonders if publicly owned gas networks (such as that of Indianapolis) will prove less leaky than those owned by private companies that need to balance expansion and market share with maintenance of existing infrastructure; it’s the type of question ‘Big Data’ could answer.

“That’s the beauty of what Google and EDF are getting into,” he says. “This is just a kind of teaser.”

by Lauren Morello at July 17, 2014 04:23 PM

Axel Maas - Looking Inside the Standard Model

Why continue into the beyond?
I have just returned from a very excellent 37th International Conference on High-Energy Physics. However, as splendid as the event itself was, it was in a sense bad news: No results which hint at anything substantial beyond the standard model, except for the usual suspect statistical fluctuations. This does not mean that there is nothing - we know there is more for many reasons. But in an increasingly frustrating sequence of years all our observational and experimental results keep pushing it beyond our reach. Even for me as a theorist there is just not enough substantial information to be able to do more than just vague speculation of what could be.

Nonetheless, I just wrote that I want to venture into this unknown beyond, and in force. Hence it is reasonable - in fact necessary - to pose the question: Why? If I do not know and have too little information, is there any chance to hit the right answer? The answer to this: Probably not. But...

Actually, there are two buts. One is simply curiosity. I am a theorist, and I can always pose the question how does something work, even without having a special application or situation in mind. Though this may just end up as nothing, it would not be the first time that the answer to a question has been discovered long before the question. In fact, the single most important building block of the standard-model, so-called Yang-Mills theory, has been discovered by theorists almost a decade before it was recognized to be the key to explain the experimental results.

But this is not the main reason for me to venture in this direction. The main reason has to do with the experience I have had with Higgs physics - that despite appearances there is often a second layer to the theory. Such a second layer has in this case shifted the perception of how the things we describe in theory correlate with the things we see in experiment. Since many proposed theories beyond the standard model, especially those that have caught my interest, are extensions of the Higgs sector of the standard model, it stands to reason that similar statements hold true in their cases. However, whether they hold true, and how they work, cannot be fathomed without looking at these theories. And that is what I want to do.

Why should one do this? Such subtle questions seem at first not really related to experiment. But understanding how a theory really works should also give us a better idea of what kind of observations such a theory can actually deliver. And that is where it becomes very interesting for experiment. Since we do not at the current time know what to expect, we need to think about what we could expect. This is especially important because looking in every corner requires far more resources than are available to us in the foreseeable future. Hence, any insight into what kind of experimental results a theory can yield is very important for selecting where to focus.

Of course, my research alone will not be sufficient to do this. Since it could easily be that I am looking at the 'wrong' theory, it would not be a good idea to put too much effort into it. But when there are many theoreticians working on many theories, and many of those theories all say that it is a good idea to look in a particular direction, then we have guidance for where to look. Then there seems to be something special about this direction. And if not, then we have excluded a lot of theories in one go.

As one person in a discussion session at the conference (I could not figure out who precisely) put it aptly: "The time of guaranteed discoveries is over." This means that now that we have all the pieces of the standard model, we cannot expect to find a new piece any time soon. All our indirect results even tell us that the next piece will be much harder to find. Hence, we are facing a situation last seen in physics in the second half of the 19th century and the beginning of the 20th century: there are only some hints that something does not fit. And now we have to go looking, without knowing in advance how far we will have to walk, or in which direction. This is probably more of an adventure than the last decades, where things were essentially happening on schedule. But it also requires more courage, since there will be many more dead ends (or worse) along the way.

by Axel Maas (noreply@blogger.com) at July 17, 2014 03:18 PM

The Great Beyond - Nature blog

Mars rover facing harshest journey yet

After travelling 8.5 kilometres on Mars, NASA’s Curiosity rover is now facing some of the most dangerous terrain it has ever encountered.

The car-sized rover is currently crossing a stretch of hard, rocky ground of the sort that previously dented and punctured its aluminium wheels. Winds at Gale Crater, Curiosity’s landing site, have whittled and sharpened rocks into piercing points unlike anything seen by NASA’s three earlier Mars rovers. Curiosity needs to travel across about 200 metres of this sharp ‘caprock’ before it can descend into a sandy, more wheel-friendly depression dubbed Hidden Valley.

A puncture (centre right) in one of Curiosity’s wheels. (The sequence of cutouts at lower right are deliberate and imprint ‘JPL’ in Morse code as the wheels roll across the Martian surface.)

NASA/JPL-Caltech/MSSS

“This is awful stuff,” says John Grotzinger, the mission’s chief scientist and a geologist at the California Institute of Technology (Caltech) in Pasadena. He spoke on 16 July in a public lecture associated with a week-long Mars conference on the Caltech campus.

Grotzinger and his team of scientists and engineers have spent much of the past few months figuring out a way to get Curiosity closer to its ultimate target — a 5-kilometre-high mountain known as Mount Sharp — without destroying its wheels along the way. The problem became apparent last December, when Curiosity sent back close-up images of its wheels that revealed more wear and tear than engineers were expecting. Over the next few months, the wheels rapidly deteriorated. One ripped across nearly half of its width in a giant gash. “When you have a metal wheel and you can see the planet through it, that’s not a good thing,” says Grotzinger.

Each of the rover’s six wheels is machined from a single piece of aluminium, measures 40 centimetres across and weighs just 3 kilograms. That size saved weight at launch, but means that the aluminium skin — just three-quarters of a millimetre thick — is prone to tearing, says rover driver Chris Roumeliotis, of NASA’s Jet Propulsion Laboratory (JPL) in Pasadena.

The damage was particularly bad on the rover’s pairs of front and middle wheels. To figure out why, mission engineers hauled out a mockup of Curiosity and rolled it over piles of sharp rocks in the ‘Mars yard’ test site at the JPL.

In one particularly gruesome test, the wheels went over a sharp metal point nicknamed the Impaler. “Hearing the aluminium crack and puncture like that just gives me chills,” says Roumeliotis.

Soon the team figured out that Curiosity could minimize damage by driving backwards over sharp rocks, which lessened the load on the wheels just as pivoting from pushing to pulling luggage changes the stress. The rover has been scuttling along mostly in reverse ever since.

But the team cannot avoid the fact that sharp rocks must be crossed. Using images from orbiting spacecraft, Grotzinger and his colleagues have mapped out ten types of terrain, colour-coded from green (kind to wheels) to an extreme red (full of pointy rocks). At times, they have opted to take the long way between two locations to cross over the least-damaging terrain possible.

But there was no avoiding the fact that Curiosity had to cross a swath of red at a place called  Zabriskie Plateau. Next week, rover planners will send it slowly rolling over the last stretch of dangerous caprock and descending into Hidden Valley below. “We will go in and out of the valleys, trying to work at the interface between the wheel-damaging caprock and where we would like to be,” says Grotzinger.

That could take a while. Curiosity still has 3.5 kilometres to travel just to make it to the base of Mount Sharp.

by Alexandra Witze at July 17, 2014 01:51 PM

Peter Coles - In the Dark

Romance – from the Gadfly

It’s too hot today to stay inside blogging at lunchtime, so here’s some lovely music from the Gadfly Suite by Dmitri Shostakovich. I’ve been called a Gadfly myself from time to time, but I’m also partial to a bit of romance now and then….


by telescoper at July 17, 2014 12:42 PM

astrobites - astro-ph reader's digest

How does structure grow? Understanding the Meszaros effect

Title: The Behaviour of Point Masses in an Expanding Cosmological Substratum
Author: Peter Meszaros
First Author’s Institution: Institute of Astronomy, Cambridge (at time of publication), Pennsylvania State University (now)
Paper Status: Published in Astronomy and Astrophysics in 1974

Why are we here? How did we get here? In particular, how did our galaxy, and the many others like it in the Universe, form?  The consensus picture is that inflation stretched quantum mechanical fluctuations in the incredibly early Universe onto large scales, and that at the end of inflation these perturbations in the density of matter seeded subsequent structure formation. Regions more dense than the average exerted more gravity on material around them, accreting that material and eventually growing into the galaxies we see today.

 

How did this growth process happen?  Well, as we’ve said, gravity is important.  Pressure is also, in some cases, important: the pressure of the material falling inward onto overdense regions can in principle stop it from being accreted.  It turns out that this is only the case on smaller scales than those we focus on today, so put it out of mind for now.  Finally, there’s the expansion of the Universe.  General relativity relates the total energy density of the Universe to its expansion rate through the Friedmann equation, and basically says the more energy density you have, the faster your Universe expands.
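
In its simplest, flat-Universe textbook form (standard notation, not taken from the paper itself), that relation is the Friedmann equation

H^2 \equiv \left( \frac{\dot{a}}{a} \right)^2 = \frac{8\pi G}{3}\,\rho_{\rm tot},

where a is the cosmic scale factor, H is the Hubble parameter (the expansion rate) and \rho_{\rm tot} is the total energy density: more energy density means faster expansion.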

 

Now, to grow a density perturbation, you must compress more and more matter into a given region: this increases the density there.  If the Universe is trying to stretch as you do this, that makes it harder to squeeze more material into the region.  This point is the key idea of the classic paper we discuss today, written by Peter Meszaros in 1974.  This paper describes the growth of perturbations in an expanding background, and is the origin of the Meszaros effect, usually a subsection in every cosmology textbook’s chapter on structure formation.

 

Meszaros derived a single equation governing the growth of perturbations in an expanding background.  It has three terms: the first is an acceleration term, describing the rate of change of the growth of the perturbation.  The second is a forcing term, just like in the standard harmonic oscillator problem: it is proportional to the size of the perturbation.  Physically, this means a bigger perturbation grows faster—not surprising, as such a perturbation exerts stronger gravity on the particles around it. What is important about this second term is that it is only proportional to the density in the perturbed component.  In other words, if you have a region with only extra matter, the forcing only cares about that extra matter, not whatever else might be in that region (as long as whatever else is in the region has density at its average for the Universe).  In particular, you can throw in as many photons or neutrinos or gravitational waves as you like in the Universe, but they will not help a perturbation to the matter grow any faster as long as they are at their Universe-averaged values.

 

Now we come to the third term in the Meszaros equation.  This describes how the expansion of the Universe slows the growth of perturbations; it is usually called the “Hubble friction” or “Hubble drag,” because the Hubble parameter enters into it.  What is vital here is that all of the stuff in the Universe contributes to the Hubble drag—so the more total energy density in the Universe, the more pronounced the damping effect on perturbations’ growth.  In particular, you can add as many photons or neutrinos or gravitational waves as you like to the energy density, and while they won’t help the perturbation grow, they will help the Universe expand—and so actually slow the perturbation’s growth down.
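
Putting the three terms together, the equation being described takes the standard textbook form (again in modern notation rather than that of the 1974 paper, so treat this as a sketch):

\ddot{\delta} + 2H\dot{\delta} - 4\pi G\,\bar{\rho}_m\,\delta = 0,

where \delta is the fractional matter overdensity and \bar{\rho}_m is the average matter density. The first term is the acceleration, the last is the forcing (which involves only the perturbed matter), and the middle term is the Hubble drag, with H fixed by the total energy density through the Friedmann equation above.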

The CDM, (cold dark matter, dark blue) is over dense, and therefore expands more slowly than the background universe in light yellow (with light blue arrows). Thus from the perspective of an observer comoving with the background Universe’s expansion, the CDM appears to contract. But it is harder for the CDM to accrete matter because the background’s expansion is pulling matter in the opposite direction.

This was Meszaros’ key insight.  Indeed, he was a bit more specific: he pointed out that if galaxies in a cluster are to be gravitationally bound to each other, there cannot be much mass in the Universe hidden in photons, gravitational waves, or neutrinos.  If the density of these exceeded that of the matter, then the clusters would continue to expand with the background Universe.

 

Today, Meszaros’ paper is important not primarily because of this specific point, but rather because it turns out to describe well the growth of perturbations in the first 200,000 or so years of the Universe, when radiation dominated the energy density but matter perturbations were trying to grow.  The Meszaros effect is the fact that the radiation’s dominance over the matter slowed the growth of structure over what it would have been had the Universe been matter-dominated.

 

by Zachary Slepian at July 17, 2014 06:20 AM

July 16, 2014

astrobites - astro-ph reader's digest

Mercury’s surprising density: What about magnets?

Title: Explaining Mercury’s Density Through Magnetic Erosion
Authors: Alexander Hubbard
First Author’s Institution: Department of Astrophysics, American Museum of Natural History
Paper Status: Accepted to Icarus

Mercury is an intriguing little planet. It’s small, hot, and locked in a 3:2 spin-orbit resonance with the Sun. It’s also really, really dense, much denser than the other rocky planets in the solar system. As all the rocky planets formed out of the same protoplanetary dust, you’d think they’d be of roughly the same composition, but whereas the Earth and Venus are about 30% iron by mass, Mercury seems to be more like 70%. That’s a lot, and it complicates our understanding of how planets form. Mercury’s weirdness points to weirdnesses in planet formation that we haven’t yet sussed out.

mercury core

A rough comparison of Earth’s and Mercury’s composition. (NASA)

Astronomers have proposed a variety of theories to account for Mercury’s high iron content. Maybe something happened after the planet formed that caused a bunch of its less dense silicates (i.e. rocky material) to evaporate. Maybe a giant collision knocked off a hunk of Mercury’s rocky outer layers, leaving behind iron in a new high proportion.

Or, as a today’s paper proposes: What about magnets?

Hubbard sees a need for a new explanation for Mercury’s density as the MESSENGER mission’s measurement of Mercury’s K/Th (Potassium to Thorium) ratio [pdf] contradicts a leading model for Mercury’s formation. The K/Th ratio is representative of the ratio of silicates to metals in a planet’s composition, and indicates the conditions of a planet’s formation. If Mercury ended up dense because its silicates evaporated post-formation, you would expect it to have a low K/Th ratio: potassium is more volatile than silicon, so if conditions allowed for silicon to evaporate, the potassium would have gone, too. Instead, MESSENGER found a relatively high K/Th ratio, in step with that of the other rocky planets. This contradicts the volatility/evaporation model.

MESSENGER visits Mercury (artist's conception)

MESSENGER visits Mercury and tells you your volatile evaporation theory is wrong. (artist’s conception via NASA)

Naked-eye observation seems to contradict the giant impact model. Earth is thought to have survived a giant impact; the evidence is our (relatively large) moon. This model suggests that a similar event could have relieved Mercury of a significant amount of its early (silicate) surface. But then where did all of that matter go? Earth’s lost material stayed nearby. Since there’s no evidence of another Mercury’s worth of material hanging out nearby, in the form of a moon for Mercury or other debris, we can probably rule this theory out.

One model Hubbard leaves in the running is photophoresis, by which light unevenly heats dust particles and causes them to migrate to cooler regions. This astrobite provides good coverage of that idea. However, photophoresis could only be active in surface layers of the protoplanetary disk. The idea isn’t discounted, but it certainly leaves room for another mechanism.

Which brings us to magnetism. It seems like a logical consideration when you’re wondering how a lot of iron all got in one place. But no one was herding planetary raw materials with a giant horseshoe-shaped magnet. (Weirdly, a search of NASA’s image collection turned up no artist’s conception for that one.) Enough complicated, competing forces are at play in the protoplanetary nebula that a magnetic model needs rigorous testing.

Hubbard establishes that conditions in Mercury’s region of the stellar nebula would have been sufficient to magnetically saturate iron. One barrier to dust particle agglomeration in the hot inner regions of the disk is that tiny dust particles in ionized gas tend to accumulate a negative charge: they repel one another. But the ability of metal-rich grains to rearrange their charges helps them overcome this charge barrier. This favors the growth of metal-rich grains over silicates.

However, once grains are large enough to be knocked around by disk turbulence (which isn’t very large at all—this happens when the dust grains are as small as a micron across) the charge barrier becomes irrelevant, and dust is knocking into all the other dust, forming larger and larger particles. The silicates get in on the action. There needs to be a way for them to be removed.

If iron-rich particles are magnetized, their interactions not only lead to faster coagulation (as magnetic attraction helps them stick to one another) but also increase the velocity of their impacts. Hubbard finds that these collisions are sufficiently powerful to shatter and erode the silicates mixed in with (or surrounding) the iron. The silicates get knocked off and the iron stays stuck together, resulting in the evolution of very iron-rich particles that would eventually form the Mercury we know and love today.

Since Mercury’s composition is unique among the rocky planets, the mechanism that Hubbard proposes needs to be able to work only very close to the Sun, in the region where Mercury formed.  If the mechanism would seem to work elsewhere, it would be invalidated by the evidence of the other not-particularly iron-rich planets. Since magnetization of the iron particles drives the preferential collisions, Hubbard looked at the factors that determine whether iron is magnetized: its Curie temperature and the presence of a magnetic field. He finds a narrow window at which it is hot enough for Magneto-Rotational Instability to amplify the magnetic field but cool enough to be below iron’s Curie temperature. A sweet spot for iron-rich planet formation, right where iron-rich Mercury came to be.

Planet formation is an extraordinarily complex process that we understand through theory and models built to explain what we can see. The flood of exoplanet discovery in the last few years has expanded our sample size and diversified the planetary systems our models need to explain. But there’s still also plenty of mysterious, fascinating weirdness close to home.

by Jaime Green at July 16, 2014 10:21 PM

Clifford V. Johnson - Asymptotia

Dark Energy Discussion
I was sent an interesting link a while ago* that I thought I would share with you. It is a really good discussion about Dark Energy - what we think it is, why we think it exists, why some think it does not, and how to move forward with the discussion of what is, after all, apparently *most* of our universe. It is a panel discussion that was hosted by the Institute for Arts and Ideas (which I *love* the idea of!). The discussion is described on the site as follows:
Dark energy is supposed to make up two-thirds of the universe. But troublingly CERN has yet to find any evidence. Have we got our story of the universe wrong - might dark energy be the aether of our time? Do we need a new account of the universe, or is it too soon for such radical solutions? The Panel The BBC's Sue Nelson asks Templeton Prize winning cosmologist George Ellis, Cambridge physicist David Tong and mathematician Peter Cameron to seek the invisible.
Ok, the "troublingly CERN has yet to find any evidence" part puzzles me a bit, since nobody's really expecting CERN to find any evidence of it, in any large scale experiments that I'm aware of (please correct me if I am wrong)... Is the writer of the abstract confusing Dark Energy and Dark Matter? Even then I think it is an odd phrase to lead with, especially if you don't mention the huge amount of evidence from astronomy in the same footing... but I imagine the abstract was maybe not written by a physicist? Nevertheless, I strongly recommend it as a thought-provoking discussion, and you can find it embedded below. Do also check out their many other interesting [...] Click to continue reading this post

by Clifford at July 16, 2014 08:42 PM

Symmetrybreaking - Fermilab/SLAC

Science inspires at Sanford Lab’s Neutrino Day

Science was the star at an annual celebration in Lead, South Dakota.

At the Sanford Underground Research Facility’s seventh annual Neutrino Day last Saturday, more than 800 visitors of all ages and backgrounds got a glimpse of the high-energy physics experiments underway a mile below the streets of Lead, South Dakota.

After decades as a mining town, Lead has transformed in recent years into a science town. From within America’s largest and deepest underground mine, where hundreds of miners once pulled gold from the earth, more than a hundred scientists now glean insights into the mysteries of the universe.

“We don’t have a lot of vendors or food at Neutrino Day. It’s all science,” says Constance Walter, Sanford Lab’s communications director. “Our hope is that even people who didn’t before have a real interest in science will get excited. We want them to understand what we’re doing at Sanford Lab and the impact it can have on the region.”

This year, the festivities included tours of the above-ground facilities, live video chats with scientists and rescue personnel a mile underground (pictured above), a planetarium presentation, and hands-on science demos including the opportunity for kids to build battery-operated robots, use air pressure to change the size of marshmallows and learn about circuits using a conductive dough.

Science lectures also drew large crowds. Tennessee Technological University Professor Mary Kidd introduced attendees to the Majorana Demonstrator, which seeks to determine whether the neutrino is its own antiparticle and offer insight into the mass of neutrinos. Brookhaven National Laboratory physicist Milind Diwan wowed the crowd with his descriptions of the strange behavior of neutrinos and their many mysteries. And, in the keynote presentation, cosmologist Joel Primack and cultural philosopher Nancy Ellen Abrams discussed some of the most mindboggling unknowns in the universe—including the nature of dark matter and dark energy.

The highlight for 8th-grader Zoe Keehn was, without a doubt, a production put on by more than 30 local schoolchildren. Keehn played Hannah, the lead role in the NASA-sponsored Space School Musical. Hannah’s science project, a model of the solar system, is due tomorrow but it’s already past her bedtime. As she works to get it finished, Hannah’s friends—our solar system’s planets, moons, meteors, comets and asteroids—come out to help her with fun facts and information in the form of song.

“I’ve been in a lot of plays and musicals, and it was fun to be in a science one,” Keehn says. “I especially liked my S-P-A-C-E song. It goes, ‘The only place for me, a place I can be free, S-P-A-C-E, that’s where I’ve got to be.’”

Karen Everett, who, as the executive director of the Lead-Deadwood Art Center, came up with the idea of producing Space School Musical for Neutrino Day, says that the musical was a big hit. “People just loved it,” she says. “Through art, we can educate people about science.”

For a town that lost its main source of income when the mine shut down in 2003, the lab—and Neutrino Day—also offers a much-needed economic stimulus.

“With a little over 3000 people, Lead is a small town and one that’s been transitioning from its 125-year-old mining economy,” says Everett. “It was great to see so many people in town, enjoying the event, eating at local restaurants and generally just coming out. It was a great boost for us all.”

Walter sees it as a two-way street. “I love Neutrino Day because I see people—especially kids—who are excited to learn about what we do,” she says. “As the kids get excited, so do their parents and their teachers. And that’s so great to see. We need the support of the community for the laboratory to thrive and be successful.” 

by Kelen Tuttle at July 16, 2014 04:13 PM

Jon Butterworth - Life and Physics

Is there a shadow universe?

Last October, with the “Through the Wormhole with Morgan Freeman” team, I spent hours on the London Eye talking about spin, and hours in nearby cafes drawing Feynman diagrams with sugar…. Apparently that episode, called “Is there a shadow universe”, is on tonight. I wonder what, if any, footage made it in? I hope it’s good.


Filed under: Uncategorized

by Jon Butterworth at July 16, 2014 02:55 PM

arXiv blog

Taxi Trajectories Reveal City's Most Important Crossroads

The data from GPS navigating equipment is revealing the most important junctions in traffic-clogged megacities.


Here’s an interesting question: how do you identify the most important junctions in a city? One way it is to measure the origin, route, and destination of each road trip through a city and then work out where they cross.

July 16, 2014 02:13 PM

The n-Category Cafe

Math and Mass Surveillance: A Roundup

The Notices of the AMS has just published the second in its series “Mathematicians discuss the Snowden revelations”. (The first was here.) The introduction to the second article cites this blog for “a discussion of these issues”, but I realized that the relevant posts might be hard for visitors to find, scattered as they are over the last eight months.

So here, especially for Notices readers, is a roundup of all the posts and discussions we’ve had on the subject. In reverse chronological order:

by leinster (tom.leinster@ed.ac.uk) at July 16, 2014 01:30 AM

July 15, 2014

ZapperZ - Physics and Physicists

Quantum Criticality Experimentally Confirmed
A new experimental result has confirmed quantitatively the presence of a quantum critical point.

The researchers experimentally confirmed the predicted linear evolution of the gap with the magnetic field, which allowed them to pinpoint the location of the quantum critical point. At the critical field, the observable is expected to display a power-law temperature dependence, another hallmark of quantum criticality, with a characteristic power of -3/4 in this case—precisely what they observed. Even more, a rigorous experimental analysis allowed them to estimate the prefactor to this behavior, which they found to correspond nicely to the theoretically predicted one. And finally, they observed this behavior to persist to as high a temperature as almost half of the exchange coupling, which sets the global energy scale of the problem. This answers an essential question about how far away from the absolute zero temperature quantum criticality reaches or how measurable it really is. The experiment constitutes the first quantitative confirmation of the quantum critical behavior predicted by any of the few existing theories.

Nice! Not surprisingly (at least, not to me), the clearest confirmation of this exotic quantum phenomenon is first found in a condensed matter system.

Some background reading for those who want more info on quantum criticality can be found here and here.

Zz.

by ZapperZ (noreply@blogger.com) at July 15, 2014 11:18 PM

Tommaso Dorigo - Scientificblogging

The Spam Of Physicists' Mailboxes
I guess every profession has its own kind of personalized spam. Here are a couple of recent samples from my own:

  • From a Fermilab address: "According to the TRAIN database training for course FN000508 / CR - Workplace Violence and Active Shooter/Active Threat Awareness Training expired on 07/01/2014. Please make arrangements to take this class. If this training is no longer required then you or your supervisor should complete the Individual Training Needs Assessment [...]"
Note that
(1) I am not a user any longer, so their database is at fault. They still send out these notifications anyway.

read more

by Tommaso Dorigo at July 15, 2014 08:47 PM

Symmetrybreaking - Fermilab/SLAC

The machine learning community takes on the Higgs

Detecting new physics isn’t quite like detecting cat videos—yet.

Scientists have created a contest that invites anyone to use machine learning—the kind of computing that allows Facebook to spot your friends in photos and Netflix to recommend your next film—to search for the Higgs boson.

More than 1000 individuals have already joined the race. They’re vying for prizes up to $7000, but according to contest organizers, the real winner might be the particle physics community, whose new connections with the world of data science could push them toward new methods of discovery.

The contest works like this: Participants receive data from 800,000 simulated particle collisions from the ATLAS experiment at the Large Hadron Collider. The collisions can be sorted into two groups: those with a Higgs boson and those without.

The data for each collision contains 30 details—including variables such as the energy and direction of the particles coming out of it. Contestants receive all of these details, but only 250,000 of the collisions are labeled “Higgs” or “non-Higgs.”

They must use this labeled fraction to train their algorithms to find patterns that point to the Higgs boson. When they’re ready, they unleash the algorithms on the unlabeled collision data and try to figure out where the Higgs is hiding.
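
To make the train-then-predict step concrete, here is a minimal sketch in R; it is not the contest’s actual pipeline, and the file names and the “Label” column (with “s” marking simulated Higgs events) are assumptions for illustration only:

# Hypothetical file names; the real data comes from the Kaggle contest page.
train <- read.csv("training.csv")   # the 250,000 labeled collisions
test  <- read.csv("test.csv")       # the unlabeled collisions

# Turn the label into 0/1 and drop the original column,
# so that every remaining column is treated as a feature.
train$is_higgs <- as.integer(train$Label == "s")
train$Label <- NULL

# A very simple classifier: logistic regression on all features.
fit <- glm(is_higgs ~ ., data = train, family = binomial)

# Estimated probability that each unlabeled collision is Higgs-like.
p <- predict(fit, newdata = test, type = "response")
head(order(p, decreasing = TRUE))   # indices of the most Higgs-like collisions

Real entries use far more sophisticated classifiers, but the shape of the workflow is the same.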

Contestants submit their answers online to Kaggle, a company that holds the answer key. When Kaggle receives a submission, it grades, in real time, just a portion of it—to prevent people from gaming the system—and then places the contestant on its public leaderboard.

At the end of the Higgs contest, Kaggle will reveal whose algorithm did the best job analyzing the full dataset. The top three teams will win $7000, $4000 and $3000. In addition, whoever has the most useable algorithm will be invited to CERN to see the ATLAS detector and discuss machine learning with LHC scientists.

The contest was conceived of by a six-person group led by two senior researchers at France’s national scientific research center, CNRS: physicist David Rousseau, who served from 2010 to 2012 as software coordinator for the ATLAS experiment, and machine-learning expert Balázs Kégl, who since 2007 has been looking for ways to bring machine learning into particle physics.

The company running the contest, Kaggle, based in San Francisco, holds such challenges for research institutions and also businesses such as Liberty Mutual, Allstate, Merck, MasterCard and General Electric. They have asked data scientists to foresee the creditworthiness of loan applicants, to predict the toxicity of molecular compounds and to determine the sentiment of lines from movie reviews on the film-rating site Rotten Tomatoes.

Kaggle contests attract a mixed crowd of professional data scientists looking for fresh challenges, grad students and postdocs looking to test their skills, and newbies looking to get their feet wet, says Joyce Noah-Vanhoucke, Kaggle data scientist and head of competitions.

“We’re trying to be the home of data science on the internet,” she says.

Often contestants play for cash, but they have also competed for the chance to interview for data scientist positions at Facebook, Yelp and Walmart.

Kaggle is currently running about 20 contests on its site. Most of them will attract between 300 and 500 teams, Noah-Vanhoucke says. But the Higgs contest, which does not end until September, has already drawn almost 970. Names appear and drop off of the leaderboard every day.

“People love this type of problem,” Noah-Vanhoucke says. “It captures their imagination.”

A couple of the top contenders are physicists, but most come from outside the particle physics community. The team spent about 18 months working on organizing the contest in the hopes that it would create just this kind of crossover, Rousseau says.

“If due to this challenge physicists of the collaboration discover they have a friendly machine learning expert in the lab next door and they try to work together, that’s even better than just getting a new algorithm.”

Machine learning—known in physics circles as multivariate analysis—played a small role in the 2012 discovery of the Higgs. But physics is still about 15 years behind the cutting edge in this area, Kégl says. And it could be just what the science needs.

Artwork by: Sandbox Studio, Chicago

Until a couple of years ago, the Higgs was the last undiscovered particle of the Standard Model of particle physics.

“Physics is getting to a place where they’ve discovered everything they were looking for,” Kégl says.

Questions still remain, of course. What is dark matter? What is dark energy? Why is gravity so weak? Why is the Higgs so light?

“But the Higgs is a very specific, predicted thing,” Kégl says. “Physicists knew if it had this mass, it would decay in this way.

“Now they’re looking for stuff they don’t know. I’m really interested in methods that can find things that are not modeled yet.”

In 2012, the Google X laboratory programmed 1000 computers to look through 10 million randomly selected thumbnail images from YouTube videos. This was an example of unsupervised machine learning: The computers had no answer code; they weren’t given any goal other than to search for patterns.

And they found them. They grouped photos by categories such as human faces, human bodies—and cats. People teased that Google had created the world’s most complicated cat video detector. But joking aside, it was an impressive example of the ability of a machine to quickly organize massive amounts of data.

Physicists already research in a similar way, sorting through huge amounts of information in search of patterns. The clue to their next revolutionary discovery could lie in an almost unnoticeable deviation from the expected. Machine learning could be an important tool in finding it.

Physicists shouldn’t consider this a threat to job security, though. In the case of the Higgs contest, scientists needed to greatly simplify their data to make it possible for algorithms to handle it.

“A new algorithm would be a small piece of a full physics analysis,” Rousseau says. “In the future, physics will not be done by robots.”

He hopes they might help, though. The team is already planning the next competition.

 

Like what you see? Sign up for a free subscription to symmetry!

by Kathryn Jepsen at July 15, 2014 06:41 PM

July 14, 2014

astrobites - astro-ph reader's digest

Can gamma ray bursts be used as standard candles?

Title: Gamma-ray burst supernovae as standardizable candles
Authors: Z. Cano
First Author’s institution: Centre for Astrophysics and Cosmology, Science Institute, University of  Iceland, Reykjavik, Iceland.

Gamma ray bursts (GRBs) are among the most energetic and explosive events in the Universe. Although the exact mechanism underlying a GRB is still uncertain, it has been proposed that these explosions are the result of a black hole forming in the midst of a collapsing star, and the resulting outflowing jets producing gamma rays through collisions with intervening stellar material. Many GRB events are also observed to occur with a companion supernova (also known as a hypernova). These GRB-SNe pairs hint that there may be a common physical mechanism powering these types of events.

Objects such as Type Ia supernovae have traditionally served as standard candles for distance measurements due to their relatively predictable luminosities. Given their extraordinarily large luminosities, abundance, and cosmological distances, GRBs might also seem to be good standard candles. However, the irregularity of GRB light curves has excluded these objects from being used as standard candles in the past. In contrast, the light curves of Type Ia supernovae are remarkably consistent, and there are well-constrained relations between these light curves and the intrinsic luminosity of the explosion.

This paper looks at eight different GRB-SNe events. To examine the consistency of these light curves across these GRBs, the author calculates a luminosity parameter (denoted k) and a width/shape parameter (denoted s) of these GRB-SNe light curves relative to a template SNe light curve (SN 1998bw). This is done after correcting for extinction along the line of sight and subtracting the emission from the host galaxy. By doing a best fit and performing a correlation analysis between k and s, the paper finds that there is a statistically significant correlation between the two (Fig. 1). The author also examines an additional GRB without a supernova counterpart, and finds that this event has k and s parameters that also fit into the correlation. However, given that this is just a single data point, it is difficult to draw any conclusions about the applicability of this correlation towards GRBs without corresponding SNe.
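
For the curious, here is a toy version of that correlation analysis in R; the eight (k, s) pairs below are made-up placeholders, not the values measured in the paper:

# Hypothetical luminosity (k) and stretch (s) factors for eight GRB-SNe.
k <- c(0.8, 1.0, 1.1, 0.7, 1.3, 0.9, 1.2, 1.0)
s <- c(0.9, 1.0, 1.2, 0.8, 1.3, 0.9, 1.1, 1.0)

cor.test(k, s)     # is the correlation statistically significant?

fit <- lm(k ~ s)   # best-fit linear relation between luminosity and stretch
summary(fit)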

Fig. 1: A correlation between the luminosity (k) and stretch (s) factors of the GRB-SNe pairs examined in this paper. The colors indicate data taken in different color filters, and the dashed line indicates the uncertainty of the best-fit line. The histograms on the bottom show the distribution of the best-fit parameters from a Monte Carlo simulation.

The author suggests that there may be a physical explanation behind this correlation. Since the gamma rays in a GRB are produced in beamed, relativistic jets, we are viewing these GRB-SNe pairs from the same orientation (i.e. along the jets). Hence, the correlation between luminosity and light curve shape might arise from the fact that the ejecta geometry is similar across objects. This correlation could also suggest some sort of connection between the luminosity of the explosion and nucleosynthesis mechanisms: since the light curves of GRB-SNe events are additionally powered by the radioactive decay of 56-Ni into 56-Fe (which also has some consistency across different events), the similarities in the physics between Type Ia SNe and these GRB-SNe pairs could indicate why the latter might be useful as standard candles.

In summary, this correlation between the brightness and shape of a GRB light curve suggests that there is some promise for GRBs to be used as standard candles. However, given the small sample size used in this study, it is difficult to make generalizations about the applicability of this method. To test this idea further, more observations of GRB-SNe pairs are needed in the future.

by Anson Lam at July 14, 2014 07:25 PM

Quantum Diaries

US reveals its next generation of dark matter experiments

This article appeared in symmetry on July 11, 2014.

Together, the three experiments will search for a variety of types of dark matter particles. Photo: NASA

Two US federal funding agencies announced today which experiments they will support in the next generation of the search for dark matter.

The Department of Energy and National Science Foundation will back the Super Cryogenic Dark Matter Search-SNOLAB, or SuperCDMS; the LUX-Zeplin experiment, or LZ; and the next iteration of the Axion Dark Matter eXperiment, ADMX-Gen2.

“We wanted to pool limited resources to put together the most optimal unified national dark matter program we could create,” says Michael Salamon, who manages DOE’s dark matter program.

Second-generation dark matter experiments are defined as experiments that will be at least 10 times as sensitive as the current crop of dark matter detectors.

Program directors from the two federal funding agencies decided which experiments to pursue based on the advice of a panel of outside experts. Both agencies have committed to working to develop the new projects as expeditiously as possible, says Jim Whitmore, program director for particle astrophysics in the division of physics at NSF.

Physicists have seen plenty of evidence of the existence of dark matter through its strong gravitational influence, but they do not know what it looks like as individual particles. That’s why the funding agencies put together a varied particle-hunting team.

Both LZ and SuperCDMS will look for a type of dark matter particle called the WIMP, or weakly interacting massive particle. ADMX-Gen2 will search for a different kind of dark matter particle called the axion.

LZ is capable of identifying WIMPs with a wide range of masses, including those much heavier than any particle the Large Hadron Collider at CERN could produce. SuperCDMS will specialize in looking for light WIMPs with masses lower than 10 GeV. (And of course both LZ and SuperCDMS are willing to stretch their boundaries a bit if called upon to double-check one another’s results.)

If a WIMP hits the LZ detector, a high-tech barrel of liquid xenon, it will produce quanta of light, called photons. If a WIMP hits the SuperCDMS detector, a collection of hockey-puck-sized integrated circuits made with silicon or germanium, it will produce quanta of sound, called phonons.

“But if you detect just one kind of signal, light or sound, you can be fooled,” says LZ spokesperson Harry Nelson of the University of California, Santa Barbara. “A number of things can fake it.”

SuperCDMS and LZ will be located underground—SuperCDMS at SNOLAB in Ontario, Canada, and LZ at the Sanford Underground Research Facility in South Dakota—to shield the detectors from some of the most common fakers: cosmic rays. But they will still need to deal with natural radiation from the decay of uranium and thorium in the rock around them: “One member of the decay chain, lead-210, has a half-life of 22 years,” says SuperCDMS spokesperson Blas Cabrera of Stanford University. “It’s a little hard to wait that one out.”

To combat this, both experiments collect a second signal, in addition to light or sound—charge. The ratio of the two signals lets them know whether the light or sound came from a dark matter particle or something else.

SuperCDMS will be especially skilled at this kind of differentiation, which is why the experiment should excel at searching for hard-to-hear low-mass particles.

LZ’s strength, on the other hand, stems from its size.

Dark matter particles are constantly flowing through the Earth, so their interaction points in a dark matter detector should be distributed evenly throughout. Quanta of radiation, however, can be stopped by much less significant barriers—alpha particles by a piece of paper, beta particles by a sandwich. Even gamma ray particles, which are harder to stop, cannot reach the center of LZ’s 7-ton detector. When a particle with the right characteristics interacts in the center of LZ, scientists will know to get excited.

The ADMX detector, on the other hand, approaches the dark matter search with a more delicate touch. The dark matter axions ADMX scientists are looking for are too light for even SuperCDMS to find.

If an axion passed through a magnetic field, it could convert into a photon. The ADMX team encourages this subtle transformation by placing their detector within a strong magnetic field, and then tries to detect the change.

“It’s a lot like an AM radio,” says ADMX-Gen2 co-spokesperson Gray Rybka of the University of Washington in Seattle.

The experiment slowly turns the dial, tuning itself to watch for one axion mass at a time. Its main background noise is heat.

“The more noise there is, the harder it is to hear and the slower you have to tune,” Rybka says.

In its current iteration, it would take around 100 years for the experiment to get through all of the possible channels. But with the addition of a super-cooling refrigerator, ADMX-Gen2 will be able to search all of its current channels, plus many more, in the span of just three years.

With SuperCDMS, LZ and ADMX-Gen2 in the works, the next several years of the dark matter search could be some of its most interesting.

Kathryn Jepsen

by Fermilab at July 14, 2014 02:30 PM

arXiv blog

Forget the Wisdom of Crowds; Neurobiologists Reveal the Wisdom of the Confident

The wisdom of crowds breaks down when people are biased. Now researchers have discovered a simple method of removing this bias–just listen to the most confident.

 

July 14, 2014 02:00 PM

Quantum Diaries

Two small anomalies noticed

The 37th International Conference on High Energy Physics has just ended in Valencia, Spain. This year, no big surprise: no new boson, no sign of new particles or phenomena revealing the nature of dark matter or the existence of new theories such as supersymmetry. But as always, a few small anomalies caught people’s attention.

Researchers pay particular attention to any deviation from the theoretical predictions, because these small anomalies could reveal the existence of “new physics”. That would provide clues to a more encompassing theory, since everyone realises that the current theoretical model, the Standard Model, has its limits and must be replaced by a more complete theory.

But one must be careful. Every physicist knows it well: small deviations often appear and disappear just as quickly. All measurements made in physics follow statistical laws. Deviations of one standard deviation between the experimentally measured values and those predicted by theory are observed in three measurements out of ten. Larger discrepancies are less common, but still possible. A two-standard-deviation discrepancy occurs in 5% of measurements, and a three-standard-deviation one in less than 1%. There are also systematic errors related to the measuring instruments. These errors are not statistical in nature, but they can be reduced with a better understanding of the detector. The experimental error quoted with each result corresponds to one standard deviation. Here, as examples, are two small anomalies reported during the conference that attracted attention this year.

The ATLAS Collaboration showed a preliminary result on the production of pairs of W bosons. Measuring this rate allows detailed checks of the Standard Model, since theorists can predict how often pairs of W bosons are produced when protons collide in the Large Hadron Collider (LHC). The production rate depends on the energy released during these collisions. So far, two measurements can be made, since the LHC has operated at two different energies, namely 7 and 8 TeV.

The CMS and ATLAS experiments had already published results based on the data collected at 7 TeV. The measured rates slightly exceeded the theoretical predictions but still remained within the experimental margins of error, with deviations of 1.0 and 1.4 standard deviations, respectively. CMS had also published results based on about 20% of all the data accumulated at 8 TeV; the measured rate slightly exceeded the theoretical prediction, by 1.7 standard deviations. The latest ATLAS result adds one more element to the picture. It is based on the full 8 TeV data sample, and ATLAS obtains a slightly stronger deviation for the W-pair production rate at 8 TeV: 2.1 standard deviations above the theoretical prediction.

WWResults

The four experimental measurements of the W boson pair production rate (black dots) with the experimental uncertainty (horizontal bar), as well as the current theoretical prediction (blue triangle) with its own uncertainty (blue band). One can see that all the measurements are higher than the current prediction, suggesting that the current theoretical calculation does not include everything.

Each of these four measurements is in good agreement with the theoretical value, but the fact that they all exceed the prediction is starting to attract attention. Most likely, this means that theorists have not yet taken into account all the small corrections required by the Standard Model to determine this rate precisely enough. It is a bit like forgetting to record a few small expenses in one’s budget, leading to an unexplained deficit at the end of the month. There could also be common factors in the experimental uncertainties, which would reduce the overall significance of this anomaly. But if the theoretical predictions remain what they are, even after adding all the possible small corrections, it would point to the existence of new phenomena, which would be exciting. This will be a measurement to watch after the LHC restarts in 2015 at higher energy, namely 13 TeV.

The CMS Collaboration also presented an intriguing result. A group of researchers found a few events consistent with the decay of a Higgs boson into a tau and a muon. Such decays are forbidden in the Standard Model since they violate the conservation of lepton “flavour”. There are three flavours, or types, of charged leptons (a category of fundamental particles): the electron, the muon and the tau. Each comes with its own type of neutrino. In all observations made so far, leptons are always produced either with their own neutrino or with their antiparticle. The decay of a Higgs boson into leptons should therefore always produce a charged lepton and its antiparticle, but never two charged leptons of different flavour. Breaking this rule is simply forbidden within the framework of the Standard Model.

All of this will need to be checked with more data, which will be possible once the LHC restarts next year. But other models of “new physics” do allow lepton flavour violation: models with several Higgs doublets, composite Higgs bosons, or extra dimensions such as the Randall-Sundrum models. So if, with more data, ATLAS and CMS confirm that this trend is a real effect, it will be a genuine revolution.

Htomutau

The results obtained by the CMS Collaboration for six different decay channels. All give a non-zero value, contrary to the predictions of the Standard Model, for the decay rate of the Higgs boson into tau-muon pairs.

Pauline Gagnon

To be notified when new blog posts appear, follow me on Twitter: @GagnonPauline, or by e-mail by adding your name to this mailing list

 

by CERN (Francais) at July 14, 2014 08:06 AM

Quantum Diaries

Two anomalies worth noticing

The 37th International Conference on High Energy Physics just finished in Valencia, Spain. This year, no big surprises were announced: no new boson, no signs from new particles or clear phenomena revealing the nature of dark matter or new theories such as Supersymmetry. But as always, a few small anomalies were reported.

Looking for deviations from the theoretical predictions is precisely how experimentalists are trying to find a way to reveal “new physics”. It would help discover a more encompassing theory since everybody realises the current theoretical model, the Standard Model, has its limits and must be superseded by something else. However, all physicists know that small deviations often come and go. All measurements made in physics follow statistical laws. Therefore deviations from the expected value by one standard deviation occur in three measurements out of ten. Larger deviations are less common but still possible. A two-standard-deviation discrepancy happens 5% of the time. Then there are systematic uncertainties that relate to the experimental equipment. These are not purely statistical, but can be improved with a better understanding of our detectors. The total experimental uncertainty quoted with each result corresponds to one standard deviation. Here are two small anomalies reported at this conference that attracted attention this year.
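
Those frequencies are easy to check for yourself; for a Gaussian, the two-sided tail probabilities come out as quoted with one line of R each (my aside, not part of the original post):

2 * (1 - pnorm(1))   # about 0.32, i.e. roughly three measurements out of ten
2 * (1 - pnorm(2))   # about 0.046, i.e. roughly 5% of the time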

The ATLAS Collaboration showed its preliminary result on the production of a pair of W bosons. Measuring this rate provides excellent checks of the Standard Model since theorists can predict how often pairs of W bosons are produced when protons collide in the Large Hadron Collider (LHC). The production rate depends on the energy released during these collisions. So far, two measurements can be made since the LHC operated at two different energies, namely 7 TeV and 8 TeV.

CMS and ATLAS had already released their results on their 7 TeV data. The measured rates slightly exceeded the theoretical prediction but were both well within their experimental errors, with deviations of 1.0 and 1.4 standard deviations, respectively. CMS had also published results based on about 20% of all data collected at 8 TeV; the measured rate slightly exceeded the theoretical prediction, by 1.7 standard deviations. The latest ATLAS result adds one more element to the picture. It is based on the full 8 TeV data sample, and ATLAS reports a slightly stronger deviation for this rate at 8 TeV: 2.1 standard deviations above the theoretical prediction.

WWResults

The four experimental measurements for the WW production rate (black dots) with the experimental uncertainty (horizontal bar) as well as the current theoretical prediction (blue triangle) with its own uncertainty (blue strip). One can see that all measurements are higher than the current prediction, indicating that the theoretical calculation fails to include everything.

The four individual measurements are each reasonably consistent with expectation, but the fact that all four measurements lie above the predictions becomes intriguing. Most likely, this means that theorists have not yet taken into account all the small corrections required by the Standard Model to precisely determine this rate. This would be like having forgotten a few small expenses in one’s budget, leading to an unexplained deficit at the end of the month. Moreover, there could be common factors in the experimental uncertainties, which would lower the overall significance of this anomaly. But if the theoretical predictions remain what they are even when adding all possible little corrections, it could indicate the existence of new phenomena, which would be exciting. It would then be something to watch for when the LHC resumes operation in 2015 at 13 TeV.

The CMS Collaboration presented another intriguing result. They found some events consistent with the decay of a Higgs boson into a tau and a muon. Such decays are prohibited in the Standard Model since they violate lepton flavour conservation. There are three “flavours” or types of charged leptons (a category of fundamental particles): the electron, the muon and the tau. Each one comes with its own type of neutrino. According to all observations made so far, leptons are always produced either with their own neutrino or with their antiparticle. Hence, the decay of a Higgs boson into leptons should always produce a charged lepton and its antiparticle, but never two charged leptons of different flavour. Violating this conservation law is simply not allowed within the Standard Model.

This needs to be scrutinised with more data, which will be possible when the LHC resumes next year. Lepton flavour violation is allowed in various models beyond the Standard Model, such as models with more than one Higgs doublet, composite Higgs models, or Randall-Sundrum models with extra dimensions. So if both ATLAS and CMS confirm this trend as a real effect, it would be a small revolution.

Htomutau

The results obtained by the CMS Collaboration showing that six different channels all give a non-zero value for the decay rate of the Higgs boson into pairs of tau and muon.

Pauline Gagnon

To be alerted of new postings, follow me on Twitter: @GagnonPauline
 or sign up on this mailing list to receive an e-mail notification.

 

by CERN at July 14, 2014 07:52 AM

July 12, 2014

Clifford V. Johnson - Asymptotia

First Figs!
Sorry for being a bit quiet the last week. I've been working hard on a project and a lot of other things, and got snowed under. One of the things that has kept me busy has been the garden, and I am getting good rewards for my efforts. More later. The fig trees have begun their production of fruit, even after being [...] Click to continue reading this post

by Clifford at July 12, 2014 05:15 AM

John Baez - Azimuth

El Niño Project (Part 5)

 

And now for some comic relief.

Last time I explained how to download some weather data and start analyzing it, using programs written by Graham Jones. When you read that, did you think “Wow, that’s easy!” Or did you think “Huh? Run programs in R? How am I supposed to do that?”

If you’re in the latter group, you’re like me. But I managed to do it. And this is the tale of how. It’s a blow-by-blow account of my first steps, my blunders, my fears.

I hope that if you’re intimidated by programming, my tale will prove that you too can do this stuff… provided you have smart friends, or read this article.

More precisely, this article explains how to:

• download and run software that runs the programming language R;

• download temperature data from the National Center for Atmospheric Research;

• use R to create a file of temperature data for a given latitude/longitude rectangle for a given time interval.

I will not attempt to explain how to program in R.

If you want to copy what I’m doing, please remember that a few details depend on the operating system. Since I don’t care about operating systems, I use a Windows PC. If you use something better, some details will differ for you.

Also: at the end of this article there are some very basic programming puzzles.

A sad history

First, let me explain a bit about my relation to computers.

I first saw a computer at the Lawrence Hall of Science in Berkeley, back when I was visiting my uncle in the summer of 1978. It was really cool! They had some terminals where you could type programs in BASIC and run them.

I got especially excited when he gave me the book Computer Lib/Dream Machines by Ted Nelson. It espoused the visionary idea that people could write texts on computers all around the world—”hypertexts” where you could click on a link in one and hop to another!

I did more programming the next year in high school, sitting in a concrete block room with a teletype terminal that was connected to a mainframe somewhere far away. I stored my programs on paper tape. But my excitement gradually dwindled, because I was having more fun doing math and physics using just pencil and paper. My own brain was easier to program than the machine. I did not start a computer company. I did not get rich. I learned quantum mechanics, and relativity, and Gödel’s theorem.

Later I did some programming in APL in college, and still later I did a bit in Mathematica in the early 1990s… but nothing much, and nothing sophisticated. Indeed, none of these languages would be the ones you’d choose to explore sophisticated ideas in computation!

I’ve just never been very interested… until now. I now want to do a lot of data analysis. It will be embarrassing to keep asking other people to do all of it for me. I need to learn how to do it myself.

Maybe you’d like to do this stuff too—or at least watch me make a fool of myself. So here’s my tale, from the start.

Downloading and running R

To use the programs written by Graham, I need to use R, a language currently popular among statisticians. It is not the language my programmer friends would want me to learn—they’d want me to use something like Python. But tough! I can learn that later.

To download R to my Windows PC, I cleverly type download R into Google, and go to the top website it recommends:

http://cran.r-project.org/bin/windows/base/

I click the big fat button on top saying

Download R 3.1.0 for Windows

and get asked to save a file R-3.1.0-win.exe. I save it in my Downloads folder; it takes a while to download, since it’s 57 megabytes. When I get it, I click on it and follow the easy default installation instructions. My Desktop window now has a little icon on it that says R.

Clicking this, I get an interface where I can type commands after a red

>

symbol. Following Graham’s advice, I start by trying

> 2^(1:8)

which generates a list of powers of 2 from 2^1 to 2^8, like this:

[1] 2 4 8 16 32 64 128 256

Then I try

> mean(2^(1:8))

which gives the arithmetic mean of this list. Somewhat more fun is

> plot(rnorm(20))

which plots a bunch of points, apparently 20 standard normal deviates.

When I hear “20 standard normal deviates” I think of the members of a typical math department… but no, those are deviants. Standard normal deviates are random numbers chosen from a Gaussian distribution of mean zero and variance 1.

Downloading climate data

To do something more interesting, I need to input data.

The papers by Ludescher et al use surface air temperatures in a certain patch of the Pacific, so I want to get ahold of those. They’re here:

NCEP/NCAR Reanalysis 1: Surface.

NCEP is the National Centers for Environmental Prediction, and NCAR is the National Center for Atmospheric Research. They have a bunch of files here containing worldwide daily average temperatures on a 2.5 degree latitude × 2.5 degree longitude grid (that’s 144 × 73 grid points), from 1948 to 2010. And if you go here, the website will help you get data from within a chosen rectangle in a grid, for a chosen time interval.

These are NetCDF files. NetCDF stands for Network Common Data Form:

NetCDF is a set of software libraries and self-describing, machine-independent data formats that support the creation, access, and sharing of array-oriented scientific data.

According to my student Blake Pollard:

… the method of downloading a bunch of raw data via ftp (file transfer protocol) is a great one to become familiar with. If you poke around on ftp://ftp.cdc.noaa.gov/Datasets or some other ftp servers maintained by government agencies you will find all the data you could ever want. Examples of things you can download for free: raw multispectral satellite images, processed data products, ‘re-analysis’ data (which is some way of combining analysis/simulation to assimilate data), sea surface temperature anomalies at resolutions much higher than 2.5 degrees (although you pay for that in file size). Also, believe it or not, people actually use NetCDF files quite widely, so once you know how to play around with those you’ll find the world quite literally at your fingertips!

I know about ftp: I’m so old that I know this was around before the web existed. Back then it meant “faster than ponies”. But I need to get R to accept data from these NetCDF files: that’s what scares me!

Graham said that R has a “package” called RNetCDF for using NetCDF files. So, I need to get ahold of this package, download some files in the NetCDF format, and somehow get R to eat those files with the help of this package.

At first I was utterly clueless! However, after a bit of messing around, I notice that right on top of the R interface there’s a menu item called Packages. I boldly click on this and choose Install Package(s).

I am rewarded with an enormous alphabetically ordered list of packages… obviously statisticians have lots of stuff they like to do over and over! I find RNetCDF, click on that and click something like “OK”.

I’m asked if I want to use a “personal library”. I click “no”, and get an error message. So I click “yes”. The computer barfs out some promising text:

utils:::menuInstallPkgs()
trying URL 'http://cran.stat.nus.edu.sg/bin/windows/contrib/3.1/RNetCDF_1.6.2-3.zip'
Content type 'application/zip' length 548584 bytes (535 Kb)
opened URL
downloaded 535 Kb

package ‘RNetCDF’ successfully unpacked and MD5 sums checked

The downloaded binary packages are in
C:\Users\JOHN\AppData\Local\Temp\Rtmp4qJ2h8\downloaded_packages

Success!

But now I need to figure out how to download a file and get R to eat it and digest it with the help of RNetCDF.

At this point my deus ex machina, Graham, descends from the clouds and says:

You can download the files from your browser. It is probably easiest to do that for starters. Put
ftp://ftp.cdc.noaa.gov/Datasets/ncep.reanalysis.dailyavgs/surface/
into the browser, then right-click a file and Save link as…

This code will download a bunch of them:

for (year in 1950:1979) {
  download.file(url=paste0("ftp://ftp.cdc.noaa.gov/Datasets/ncep.reanalysis.dailyavgs/surface/air.sig995.", year, ".nc"),
    destfile=paste0("air.sig995.", year, ".nc"), mode="wb")
}

It will put them into the “working directory”, probably C:\Users\JOHN\Documents. You can find the working directory using getwd(), and change it with setwd(). But you must use / not \ in the filepath.

Compared to UNIX, the Windows operating system has the peculiarity of using \ instead of / in path names, but R uses the UNIX conventions even on Windows.
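
(If you really want backslashes, doubling them also works, since \ is the escape character inside R strings:

setwd("C:\\Users\\JOHN\\Documents")

but forward slashes are less typing.)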

So, after some mistakes, in the R interface I type

> setwd("C:/Users/JOHN/Documents/My Backups/azimuth/el nino")

and then type

> getwd()

to see if I’ve succeeded. I’m rewarded with

[1] "C:/Users/JOHN/Documents/My Backups/azimuth/el nino"

Good!

Then, following Graham’s advice, I cut-and-paste this into the R interface:

for (year in 1950:1979) {
  download.file(url=paste0("ftp://ftp.cdc.noaa.gov/Datasets/ncep.reanalysis.dailyavgs/surface/air.sig995.", year, ".nc"),
    destfile=paste0("air.sig995.", year, ".nc"), mode="wb")
}

It seems to be working! A little bar appears showing how each year’s data is getting downloaded. It chugs away, taking a couple minutes for each year’s worth of data.

Using R to process NetCDF files

Okay, now I’ve got all the worldwide daily average temperatures on a 2.5 degree latitude × 2.5 degree longitude grid from 1950 to 1979.

The world is MINE!

But what do I do with it? Graham’s advice is again essential, along with a little R program, or script, that he wrote:

The R script netcdf-convertor.R from

https://github.com/azimuth-project/el-nino/tree/master/R

will eat the file, digest it, and spit it out again. It contains instructions.

I go to this URL, which is on GitHub, a popular free web-based service for software development. You can store programs here, edit them, and GitHub will help you keep track of the different versions. I know almost nothing about this stuff, but I’ve seen it before, so I’m not intimidated.

I click on the blue thing that says netcdf-convertor.R and see something that looks like the right script. Unfortunately I can’t see how to download it! I eventually see a button I’d overlooked, cryptically labelled “Raw”. I realize that since I don’t want a roasted or oven-broiled piece of software, I should click on this. I indeed succeed in downloading netcdf-convertor.R this way. Graham later says I could have done something better, but oh well. I’m just happy nothing has actually exploded yet.

Once I’ve downloaded this script, I open it using a text editor and look at it. At the top are a bunch of comments written by Graham:


######################################################
######################################################

# You should be able to use this by editing this
# section only.

setwd("C:/Users/Work/AAA/Programming/ProgramOutput/Nino")

lat.range <- 13:14
lon.range <- 142:143

firstyear <- 1957
lastyear <- 1958

outputfilename <- paste0("Scotland-", firstyear, "-", lastyear, ".txt")

######################################################
######################################################

# Explanation

# 1. Use setwd() to set the working directory
# to the one containing the .nc files such as
# air.sig995.1951.nc.
# Example:
# setwd("C:/Users/Work/AAA/Programming/ProgramOutput/Nino")

# 2. Supply the latitude and longitude range. The
# NOAA data is every 2.5 degrees. The ranges are
# supplied as the number of steps of this size.
# For latitude, 1 means North Pole, 73 means South
# Pole. For longitude, 1 means 0 degrees East, 37
# is 90E, 73 is 180, 109 is 90W or 270E, 144 is
# 2.5W.

# These roughly cover Scotland.
# lat.range <- 13:14
# lon.range <- 142:143

# These are the area used by Ludescher et al,
# 2013. It is 27x69 points which are then
# subsampled to 9 by 23.
# lat.range <- 24:50
# lon.range <- 48:116

# 3. Supply the years
# firstyear <- 1950
# lastyear <- 1952

# 4. Supply the output name as a text string.
# paste0() concatenates strings which you may find
# handy:
# outputfilename <- paste0("Pacific-", firstyear, "-", lastyear, ".txt")

######################################################
######################################################

# Example of output

# S013E142 S013E143 S014E142 S014E143
# Y1950P001 281.60000272654 281.570002727211 281.60000272654 280.970002740622
# Y1950P002 280.740002745762 280.270002756268 281.070002738386 280.49000275135
# Y1950P003 280.100002760068 278.820002788678 281.120002737269 280.070002760738
# Y1950P004 281.070002738386 279.420002775267 281.620002726093 280.640002747998
# ...
# Y1950P193 285.450002640486 285.290002644062 285.720002634451 285.75000263378
# Y1950P194 285.570002637804 285.640002636239 286.070002626628 286.570002615452
# Y1950P195 285.92000262998 286.220002623275 286.200002623722 286.620002614334
# ...
# Y1950P364 276.100002849475 275.350002866238 276.37000284344 275.200002869591
# Y1950P365 276.990002829581 275.820002855733 276.020002851263 274.72000288032
# Y1951P001 278.220002802089 277.470002818853 276.700002836064 275.870002854615
# Y1951P002 277.750002812594 276.890002831817 276.650002837181 275.520002862439
# ...
# Y1952P365 280.35000275448 280.120002759621 280.370002754033 279.390002775937

# There is one row for each day, and 365 days in
# each year (leap days are omitted). In each row,
# you have temperatures in Kelvin for each grid
# point in a rectangle.

# S13E142 means 13 steps South from the North Pole
# and 142 steps East from Greenwich. The points
# are in reading order, starting at the top-left
# (Northmost, Westmost) and going along the top
# row first.

# Y1950P001 means year 1950, day 1. (P because
# longer periods might be used later.)

######################################################
######################################################

The instructions are admirably detailed concerning what I should do, but they don't say where the output will appear when I do it. This makes me nervous. I guess I should just try it. After all, the program is not called DestroyTheWorld.

Unfortunately, at this point a lot of things start acting weird.

It's too complicated and boring to explain in detail, but basically, I keep getting a file missing error message. I don't understand why this happens under some conditions and not others. I try lots of experiments.

Eventually I discover that one year of temperature data failed to download—the year 1949, right after the first year available! So, I'm getting the error message whenever I try to do anything involving that year of data.

To fix the problem, I simply download the 1949 data by hand from here:

ftp://ftp.cdc.noaa.gov/Datasets/ncep.reanalysis.dailyavgs/surface/

(You can open ftp addresses in a web browser just like http addresses.) I put it in my working directory for R, and everything is fine again. Whew!
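
The download.file command from the loop above would also have done the trick for this one year:

download.file(url="ftp://ftp.cdc.noaa.gov/Datasets/ncep.reanalysis.dailyavgs/surface/air.sig995.1949.nc", destfile="air.sig995.1949.nc", mode="wb")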

By the time I get this file, I sort of know what to do—after all, I've spent about an hour trying lots of different things.

I decide to create a file listing temperatures near where I live in Riverside from 1948 to 1979. To do this, I open Graham's script netcdf-convertor.R in a word processor and change this section:

setwd("C:/Users/Work/AAA/Programming/ProgramOutput/Nino")
lat.range <- 13:14
lon.range <- 142:143
firstyear <- 1957
lastyear <- 1958
outputfilename <- paste0("Scotland-", firstyear, "-", lastyear, ".txt")

to this:

setwd("C:/Users/JOHN/Documents/My Backups/azimuth/el nino")
lat.range <- 23:23
lon.range <- 98:98
firstyear <- 1948
lastyear <- 1979
outputfilename <- paste0("Riverside-", firstyear, "-", lastyear, ".txt")

Why? Well, I want it to put the file in my working directory. I want the years from 1948 to 1979. And I want temperature data from where I live!

Googling the info, I see Riverside, California is at 33.9481° N, 117.3961° W. 34° N is about 56 degrees south of the North Pole, which is 22 steps of size 2.5°. And because some idiot decided everyone should count starting at 1 instead of 0 even in contexts like this, the North Pole itself is step 1, not step 0… so Riverside is latitude step 23. That's why I write:

lat.range <- 23:23

Similarly, 117.5° W is 242.5° E, which is 97 steps of size 2.5°… which counts as step 98 according to this braindead system. That's why I write:

lon.range <- 98:98
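
If you don't feel like redoing that arithmetic each time, a little helper function captures the rule. This isn't part of Graham's script, just a convenience; give it the latitude in degrees North and the longitude in degrees East:

grid.steps <- function(lat, lonE) {
  lat.step <- round((90 - lat)/2.5) + 1   # step 1 is the North Pole
  lon.step <- round(lonE/2.5) + 1         # step 1 is 0 degrees East
  c(lat.step, lon.step)
}

grid.steps(34, 242.5)   # Riverside: gives 23 98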

Having done this, I save the file netcdf-convertor.R under another name, Riverside.R.

And then I do some stuff that it took some fiddling around to discover.

First, in my R interface I go to the menu item File, at far left, and click on Open script. It lets me browse around, so I go to my working directory for R and choose Riverside.R. A little window called R editor opens up in my R interface, containing this script.

I'm probably not doing this optimally, but I can now right-click on the R editor and see a menu with a choice called Select all. If I click this, everything in the window turns blue. Then I can right-click again and choose Run line or selection. And the script runs!

Voilà!

It huffs and puffs, and then stops. I peek in my working directory, and see that a file called

Riverside-1948-1979.txt

has been created. When I open it, it has lots of lines, starting with these:

S023E098
Y1948P001 279.95
Y1948P002 280.14
Y1948P003 282.27
Y1948P004 283.97
Y1948P005 284.27
Y1948P006 286.97

As Graham promised, each line has a year and day label, followed by a vector… which in my case is just a single number, since I only wanted the temperature in one location. I’m hoping this is the temperature near Riverside, in kelvin.
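
A quick unit check: 279.95 would be nonsense as a Celsius or Fahrenheit reading, but as kelvin it works out fine:

> 279.95 - 273.15
[1] 6.8

That's 6.8 °C, or about 44 °F: chilly, but entirely believable for the first of January. So kelvin it is.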

A small experiment

To see if this is working, I’d like to plot these temperatures and see if they make sense. Unfortunately I have no idea how to get R to take a file containing data of the sort I have and plot it! I need to learn how, but right now I’m exhausted, so I use another method to get the job done— a method that’s too suboptimal and embarrassing to describe here. (Hint: it involves the word “Excel”.)

I do a few things, but here’s the most interesting one—namely, not very interesting. I plot the temperatures for 1963:

I compare it to some publicly available data, not from Riverside, but from nearby Los Angeles:

As you can see, there was a cold day on January 13th, when the temperature dropped to 33°F. That seems to be visible on the graph I made, and looking at the data from which I made the graph, I see the temperature dropped to 251.4 kelvin on the 13th: that’s -7°F, very cold for here. It does get colder around Riverside than in Los Angeles in the winter, since it’s a desert, with temperatures not buffered by the ocean. So, this does seem compatible with the public records. That’s mildly reassuring.

But other features of the graph don’t match, and I’m not quite sure if they should or not. So, all this is very tentative and unimpressive. However, I’ve managed to get over some of my worst fears, download some temperature data, and graph it! Now I need to learn how to use R to do statistics with this data, and graph it in a better way.

Puzzles

You can help me out by answering these puzzles. Later I might pose puzzles where you can help us write really interesting programs. But for now it’s just about learning R.

Puzzle 1. Given a text file with lots of lines of this form:

S023E098
Y1948P001 279.95
Y1948P002 280.14
Y1948P003 282.27
Y1948P004 283.97

write an R program that creates a huge vector, or list of numbers, like this:

279.95, 280.14, 282.27, 283.97, ...

Puzzle 2. Extend the above program so that it plots this list of numbers, or outputs it to a new file.

If you want to test your programs, here’s the actual file:

Riverside-1948-1979.txt
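
In case you just want to see one way to do Puzzles 1 and 2 (no peeking if you'd rather work it out yourself), here is a sketch. It leans on a read.table quirk: since the header line has one fewer entry than the data lines, the day labels become row names and the temperatures form a single numeric column.

d <- read.table("Riverside-1948-1979.txt", header=TRUE)
temps <- d[,1]                 # the temperatures as one long vector
plot(temps, type="l")          # quick line plot of the whole series
write(temps, file="Riverside-vector.txt", ncolumns=1)   # or dump them to a file, one per line

There are surely slicker ways, so post yours!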

More puzzles

If those puzzles are too easy, here are two more. I gave these last time, but everyone was too wimpy to tackle them.

Puzzle 3. Modify the software so that it uses the same method to predict El Niños from 1980 to 2013. You’ll have to adjust two lines in netcdf-convertor-ludescher.R:

firstyear <- 1948
lastyear <- 1980

should become

firstyear <- 1980
lastyear <- 2013

or whatever range of years you want. You’ll also have to adjust names of years in ludescher-replication.R. Search the file for the string 19 and make the necessary changes. Ask me if you get stuck.

Puzzle 4. Right now we average the link strength over all pairs (i,j) where i is a node in the El Niño basin defined by Ludescher et al and j is a node outside this basin. The basin consists of the red dots here:

What happens if you change the definition of the El Niño basin? For example, can you drop those annoying two red dots that are south of the rest, without messing things up? Can you get better results if you change the shape of the basin?

To study these questions you need to rewrite ludescher-replication.R a bit. Here’s where Graham defines the El Niño basin:

ludescher.basin <- function() {
  lats <- c( 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 6, 6)
  lons <- c(11,12,13,14,15,16,17,18,19,20,21,22,16,22)
  stopifnot(length(lats) == length(lons))
  list(lats=lats,lons=lons)
}

These are lists of latitude and longitude coordinates: (5,11), (5,12), (5,13), etc. A coordinate like (5,11) means the little circle that’s 5 down and 11 across in the grid on the above map. So, that’s the leftmost point in Ludescher’s El Niño basin. By changing these lists, you can change the definition of the El Niño basin.
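
For example, dropping the two southernmost dots just means deleting the two entries with latitude 6. Here is a sketch (untested, so treat it as a starting point; keeping the function name means the rest of ludescher-replication.R picks it up unchanged):

ludescher.basin <- function() {
  lats <- c( 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5)
  lons <- c(11,12,13,14,15,16,17,18,19,20,21,22)
  stopifnot(length(lats) == length(lons))
  list(lats=lats,lons=lons)
}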

Next time I’ll discuss some criticisms of Ludescher et al’s paper, but later we will return to analyzing temperature data, looking for interesting patterns.


by John Baez at July 12, 2014 01:00 AM

July 11, 2014

Symmetrybreaking - Fermilab/SLAC

US reveals its next generation of dark matter experiments

Together, the three experiments will search for a variety of types of dark matter particles.

Two US federal funding agencies announced today which experiments they will support in the next generation of the search for dark matter.

The Department of Energy and National Science Foundation will back the Super Cryogenic Dark Matter Search-SNOLAB, or SuperCDMS; the LUX-Zeplin experiment, or LZ; and the next iteration of the Axion Dark Matter eXperiment, ADMX-Gen2.

“We wanted to pool limited resources to put together the most optimal unified national dark matter program we could create,” says Michael Salamon, who manages DOE’s dark matter program.

Second-generation dark matter experiments are defined as experiments that will be at least 10 times as sensitive as the current crop of dark matter detectors.

Program directors from the two federal funding agencies decided which experiments to pursue based on the advice of a panel of outside experts. Both agencies have committed to working to develop the new projects as expeditiously as possible, says Jim Whitmore, program director for particle astrophysics in the division of physics at NSF.

Physicists have seen plenty of evidence of the existence of dark matter through its strong gravitational influence, but they do not know what it looks like as individual particles. That’s why the funding agencies put together a varied particle-hunting team.

Both LZ and SuperCDMS will look for a type of dark matter particles called WIMPs, or weakly interacting massive particles. ADMX-Gen2 will search for a different kind of dark matter particles called axions.

LZ is capable of identifying WIMPs with a wide range of masses, including those much heavier than any particle the Large Hadron Collider at CERN could produce. SuperCDMS will specialize in looking for light WIMPs with masses lower than 10 GeV. (And of course both LZ and SuperCDMS are willing to stretch their boundaries a bit if called upon to double-check one another’s results.) 

If a WIMP hits the LZ detector, a high-tech barrel of liquid xenon, it will produce quanta of light, called photons. If a WIMP hits the SuperCDMS detector, a collection of hockey-puck-sized integrated circuits made with silicon or germanium, it will produce quanta of sound, called phonons.

“But if you detect just one kind of signal, light or sound, you can be fooled,” says LZ spokesperson Harry Nelson of the University of California, Santa Barbara. “A number of things can fake it.”

SuperCDMS and LZ will be located underground—SuperCDMS at SNOLAB in Ontario, Canada, and LZ at the Sanford Underground Research Facility in South Dakota—to shield the detectors from some of the most common fakers: cosmic rays. But they will still need to deal with natural radiation from the decay of uranium and thorium in the rock around them: “One member of the decay chain, lead-210, has a half-life of 22 years,” says SuperCDMS spokesperson Blas Cabrera of Stanford University. “It’s a little hard to wait that one out.”

To combat this, both experiments collect a second signal, in addition to light or sound—charge. The ratio of the two signals lets them know whether the light or sound came from a dark matter particle or something else.

SuperCDMS will be especially skilled at this kind of differentiation, which is why the experiment should excel at searching for hard-to-hear low-mass particles.

LZ’s strength, on the other hand, stems from its size.

Dark matter particles are constantly flowing through the Earth, so their interaction points in a dark matter detector should be distributed evenly throughout. Quanta of radiation, however, can be stopped by much less significant barriers—alpha particles by a piece of paper, beta particles by a sandwich. Even gamma ray particles, which are harder to stop, cannot reach the center of LZ’s 7-ton detector. When a particle with the right characteristics interacts in the center of LZ, scientists will know to get excited.

The ADMX detector, on the other hand, approaches the dark matter search with a more delicate touch. The dark matter axions ADMX scientists are looking for are too light for even SuperCDMS to find.

If an axion passed through a magnetic field, it could convert into a photon. The ADMX team encourages this subtle transformation by placing their detector within a strong magnetic field, and then tries to detect the change.

“It’s a lot like an AM radio,” says ADMX-Gen2 co-spokesperson Gray Rybka of the University of Washington in Seattle.

The experiment slowly turns the dial, tuning itself to watch for one axion mass at a time. Its main background noise is heat.

“The more noise there is, the harder it is to hear and the slower you have to tune,” Rybka says.

In its current iteration, it would take around 100 years for the experiment to get through all of the possible channels. But with the addition of a super-cooling refrigerator, ADMX-Gen2 will be able to search all of its current channels, plus many more, in the span of just three years.

With SuperCDMS, LZ and ADMX-Gen2 in the works, the next several years of the dark matter search could be some of its most interesting.

 


by Kathryn Jepsen at July 11, 2014 11:34 PM