Particle Physics Planet

July 30, 2014

Quantum Diaries

Accelerator physicist invents new way to clean up oil spills

This article appeared in Fermilab Today on July 30, 2014.

Fermilab physicist Arden Warner revolutionizes oil spill cleanup with magnetizable oil invention. Photo: Hanae Armitage

Four years ago, Fermilab accelerator physicist Arden Warner watched national news of the BP oil spill and found himself frustrated with the cleanup response.

“My wife asked ‘Can you separate oil from water?’ and I said ‘Maybe I could magnetize it!’” Warner recalled. “But that was just something I said. Later that night while I was falling asleep, I thought, you know what, that’s not a bad idea.”

Sleep forgone, Warner began experimenting in his garage. With shavings from his shovel, a splash of engine oil and a refrigerator magnet, Warner witnessed the preliminary success of a concept that could revolutionize the process of oil spill damage control.

Warner has received patent approval on the cleanup method.

The concept is simple: Take iron particles or magnetite dust and add them to oil. It turns out that these particles mix well with oil and form a loose colloidal suspension that floats in water. This suspension is susceptible to magnetic forces. At a barely discernible 2 to 6 microns in size, the particles tend to clump together, and it only takes a sparse dusting for them to bond with the oil. When a magnetic field is applied, the oil and filings congeal into a viscous liquid known as a magnetorheological fluid. The fluid’s viscosity allows a magnetic field to pool both filings and oil to a single location, making them easy to remove. (View a 30-second video of the reaction.)

“It doesn’t take long — you add the filings, you pull them out. The entire process is even more efficient with hydrophobic filings. As soon as they hit the oil, they sink in,” said Warner, who works in the Accelerator Division. Hydrophobic filings are those that don’t like to interact with water — think of hydrophobic as water-fearing. “You could essentially have a device that disperses filings and a magnetic conveyor system behind it that picks it up. You don’t need a lot of material.”

Warner tested more than 100 oils, including sweet crude and heavy crude. As it turns out, the crude oils’ natural viscosity makes them fairly easy to magnetize and clear away. Currently, booms, floating devices that corral oil spills, are at best capable of containing the spill; oil removal is an entirely different process. But the iron filings can work in conjunction with an electromagnetic boom to allow tighter constriction and removal of the oil. Using solenoids, metal coils that carry an electrical current, the electromagnetic booms can steer the oil-filing mixture into collector tanks.

Compared with other oil cleanup methods, the magnetized oil technique is far more environmentally sound. No harmful chemicals are introduced into the ocean; magnetite is a naturally occurring mineral. The filings are added and, shortly after, extracted. While a few straggling iron particles remain, the vast majority is removed in one fell, magnetized swoop, and the filings can even be dried and reused.

“This technique is more environmentally benign because it’s natural; we’re not adding soaps and chemicals to the ocean,” said Cherri Schmidt, head of Fermilab’s Office of Partnerships and Technology Transfer. “Other ‘cleanup’ techniques disperse the oil and make the droplets smaller or make the oil sink to the bottom. This doesn’t do that.”

Warner’s ideas for potential applications also include wildlife cleanup and the use of chemical sensors. Small devices that “smell” high and low concentrations of oil could be fastened to a motorized electromagnetic boom to direct it to the most oil-contaminated areas.

“I get crazy ideas all the time, but every so often one sticks,” Warner said. “This is one that I think could stick for the benefit of the environment and Fermilab.”

Hanae Armitage

Emily Lakdawalla - The Planetary Society Blog

8th Mars Report: Martian habitability
Valerie Fox reports from the 8th International Conference on Mars on the habitability of the Red Planet.

Peter Coles - In the Dark

Jimmy Anderson & Moeen split hairs in England cricket team Beard Index

Important poll on the Beard Index for England’s cricketers.

My own vote went to Jimmy Anderson; a remark I made on Twitter yesterday about his performance also led to me being featured on the BBC Sport website:

Today is the 4th Day and England have just declared on 205-4, leaving India to score 445 to win in approximately 132 overs…

…and India close on 112-4. The ball is starting to turn and, with another 333 to win off 90 overs (3.7 an over), the odds are firmly on England’s side.

Originally posted on Kmflett's Blog:

Beard Liberation Front
Press release 29th July contact Keith Flett 07803 167266

Jimmy Anderson & Moeen split hairs in England Cricket Team Beard Index

The Beard Liberation Front, the informal network of beard wearers, has issued an update to its England cricket Beard Index, which shows Moeen Ali and Jimmy Anderson tied, with Ian Bell and Alastair Cook moving up the rankings.

Hirsute England players have only recently been a significant factor in the team’s performance but the campaigners say that facial hair on the pitch can have several, sometimes combined, impacts:

1] Beards can add gravitas and presence. Moeen is known as ‘the beard that’s feared’
2] Beards can influence aerodynamics both with bat and ball as a movement of the facial hair can cause subtle changes to air currents

Beard Index [combining factors 1 & 2] out of 10

Moeen 9
Anderson 9
Bell 6
Cook 6


arXiv blog

The Curious Evolution of Artificial Life

When it comes to research into Artificial Life, commercial projects have begun to outpace academic ones.

The term “Artificial Life” emerged in 1986 when the American computer scientist Christopher Langton coined it while organizing the first “Workshop on the Synthesis and Simulation of Living Systems.” Since then the idea of artificial life has spread through computer science into gaming, the study of artificial intelligence, and beyond.

ZapperZ - Physics and Physicists

The Title Doesn't Match The Content
The title of this news article is "Developments In Particle Physics Are About to Transform Our Daily Lives". Yet, the article really has nothing to do with "particle physics", which is an area of study that investigates the physics of elementary particles. The article has more to do with the applications of quantum physics.

Why it wasn't just called "Developments in Quantum Physics…" instead is beyond me. Maybe the phrase "particle physics" makes the title look sexier, regardless of whether it is accurate or not.

Zz.

Christian P. Robert - xi'an's og

Bangalore workshop [ಬೆಂಗಳೂರು ಕಾರ್ಯಾಗಾರ]

First day at the Indo-French Centre for Applied Mathematics and the get-together (or speed-dating!) workshop. The campus of the Indian Institute of Science in Bangalore, where we all stay, is very pleasant, with plenty of greenery in the middle of a very busy city. Being at about 1000m also means the temperature remains tolerable for me, to the point of letting me run in the morning. And staying in a guest house on the campus means genuine and enjoyable south Indian food.

The workshop is a mix of statisticians and mathematicians working on neuroscience, from both India and France, and we are few enough to have a lot of opportunities for discussion and potential joint projects. I gave the first talk this morning (hence a fairly short run!) on ABC model choice with random forests and, given the mixed audience, may have launched too quickly into the technicalities of the forests, even though I think I kept the statisticians on board for most of the talk. While the mathematical biology talks mostly went over my head (esp. when I could not resist dozing!), I enjoyed the presentation of Francis Bach of a fast stochastic gradient algorithm, where the stochastic average is only updated one term at a time, for apparently much faster convergence results. This is related to joint work with Éric Moulines that both Éric and Francis have presented in the past month. And makes me wonder at the intuition behind the major speed-up. Shrinkage to the mean maybe?


Emily Lakdawalla - The Planetary Society Blog

NASA's Budget Stalls Out
Congress has all but given up its goal of passing a budget before the end of this fiscal year in September. Instead, we will likely see a temporary extension through the elections in November.

July 29, 2014

Emily Lakdawalla - The Planetary Society Blog

8th Mars Report: Was Ancient Mars Warm and Wet or Cold and Icy?
One of the hot topics of the 8th International Conference on Mars was the nature of Mars' ancient past. Abigail Fraeman reports on our updated view of whether Mars was ever warm and wet.

Emily Lakdawalla - The Planetary Society Blog

Rosetta update: Long journey to a comet nearly complete
A journey of nearly a decade is almost over. Rosetta is making its final approach to comet 67P/Churyumov-Gerasimenko, and the comet's strange shape is beginning to come into focus. As of today, the spacecraft is only 2000 kilometers away from the comet, and 8 days away from arrival.

Symmetrybreaking - Fermilab/SLAC

Partnership generates bright ideas for photon science

Photon science, a spin-off of particle physics, has returned to its roots for help developing better, faster detectors.

In the late 1940s, scientists doing fundamental physics research at the General Electric Research Laboratory in Schenectady, New York, noticed a bright arc of light coming from their particle accelerator. As a beam of electrons whipped around the accelerator’s circular track, photons trickled away like water from a punctured hose.

At the time, this was considered a problem; the leaking photons were sapping energy from the electron beam. But scientists at labs around the world were already looking into the phenomenon, and not long after, circular particle accelerators were being built explicitly to capture the escaping light.

Today, these instruments are called synchrotrons, and they serve as powerful tools for studying the atomic and molecular structure of a seemingly limitless number of materials.

Despite their symbiotic beginning, synchrotron science and particle physics existed largely independently of one another. However, recent developments in the design and construction of particle detectors for synchrotron experiments—as well as new light source instruments—have sparked a reunion.

The custom-detector revolution

Modern synchrotrons generate powerful beams of light—infrared, ultraviolet, or X-ray—and aim them at a sample—such as a protein being tested for use in a drug. The light interacts with the sample, bouncing off of it, passing through it or being absorbed into it. (Imagine a beam of sunlight diffracting in a crystal or reflecting off the face of a watch.) By detecting how the sample changes the light, scientists can gather all kinds of information about its structure, make-up and behavior.

A synchrotron facility can host dozens of experiments at a time. The detector plays a vital role in each one: It captures the light, which becomes the data, which holds the answers to the experimenter’s questions.

And yet from the 1950s through the 1990s, the vast majority of detectors used at synchrotrons were not built specifically for these experiments. The designers and engineers would usually buy off-the-shelf X-ray detectors intended for other purposes, or adapt used detectors the best they could to fit the needs of the users.

Heinz Graafsma, head of the detector group at DESY, the German Electron Synchrotron, says the science coming out of synchrotrons during this time of patchwork detectors was fantastic, thanks largely to the dramatic improvements to the quality of the light beam. But that same rapid advancement made developing detectors “like shooting at a moving target,” Graafsma says. Customized detectors could take as long as a decade to design and build, and in that time the brightness of the light beam could go up by two or three orders of magnitude, rendering the detector obsolete.

The lack of custom detectors may also simply have come down to tight budgets, says Sol Gruner, former director of the Cornell High Energy Synchrotron Source, or CHESS.

Frustrated with the limits of detector technology, Gruner became one of the early pioneers to build custom detectors for synchrotrons, allowing scientists to conduct experiments that could not be done otherwise. His work in the 1990s helped set the stage for a cultural shift in synchrotron detectors.

A renewed partnership

In the last 15 years, things have changed, especially with the advent of the free-electron laser—a kind of synchrotron on steroids.

Photon scientists have begun to face some of the same challenges as particle physicists: Scientists at light sources increasingly must collect huge amounts of data at a dizzying rate.

So they have begun to look to particle physicists for technological insight. The partnerships that have developed have turned out to be beneficial for both sides.

The Linac Coherent Light Source, an X-ray free-electron laser at SLAC National Accelerator Laboratory, has six beamlines open for users. The LCLS has produced 6 petabytes of data in its first five years of operation and currently averages 1.5 petabytes per year. That volume of stored data is comparable to that of the major experiments at the Large Hadron Collider at CERN.

“There’s a big team, it’s actually bigger than the detector team, handling the big data that comes out of the LCLS,” says Chris Kenny, head of the LCLS detector group. “A lot of the know-how and a lot of the people were taken directly from particle physics.”

At the National Synchrotron Light Source at Brookhaven National Laboratory, a group led by Pete Siddons is developing a detector that will use something called a Vertically Integrated Photon Imaging Chip, designed by high-energy physicists at Fermilab. VIPIC is an example of a circuit built with a specific purpose in mind, rather than a generic circuit that can have many applications. High-energy physics helped pioneer the creation of application-specific integrated circuits, called ASICs.

With the advanced capability of the VIPIC chip, the researchers hope the new detector will allow synchrotron users to watch fast processes as they take place. This could include watching materials undergo phase transitions, such as the change between liquid and solid.

Siddons and his NSLS detector team are building the silicon detectors that will capture the light, but they need particle physicists to fabricate the highly specialized integrated circuits for sorting all the incoming information.

“Making integrated circuits is a very, very specialized, tricky business,” Siddons says.

Physicists and engineers at Fermilab design the circuits, which are then fabricated at commercial foundries. Fermilab scientists then put the circuits and particle sensors together into large integrated systems. The collaborative project will also involve contributions by scientists at Argonne National Laboratory and AGH University of Science and Technology in Krakow, Poland.

“You need very expensive software tools to do it—for doing the design and layout and simulating and checking,” Siddons says. “And they have that, because the high-energy physics community has been building large detector systems forever.”

Benefits for both sides

Compared to those used in synchrotron science, particle physics detectors live very long lives.

“We may build a few large scale detectors a decade,” says Ron Lipton, a Fermilab scientist who has been involved with the development of several large scale particle physics detectors and is a collaborator on the VIPIC chip.

Partnering with synchrotron science has given particle physicists a chance to develop and test new technologies on a shorter time scale, he says.

Scientists at the Paul Scherrer Institute in Switzerland used chip technology from particle physics experiments at CERN to create one of the most widely used custom synchrotron detectors: the Pilatus detector. Researchers at Fermilab and DESY say the work put into technologies like this has already fed new information and new ideas back into particle physics.

Collaboration also provides work for detector engineers and increases the market need for detector components, which drives down costs, Lipton says.

These days, more synchrotron facilities employ scientists and engineers to design custom detectors for lab use or for future commercialization. Half a dozen detectors, designed especially for light sources, are already available.

“It is at the moment very exciting,” Graafsma says. “There are budgets available. The facilities see this as an important issue. But also the technology is now available. So we can really build the detectors we want.”


astrobites - astro-ph reader's digest

Hide and Seek Planets

Title: Planets and Stellar Activity: Hide and Seek in the CoRoT-7 System.
Authors: R. D. Haywood, A. Collier Cameron, D. Queloz, S. C. C. Barros, M. Deleuil, R. Fares, M. Gillon, A. F. Lanza, C. Lovis, C. Moutou, F. Pepe, D. Pollacco, A. Santerne, D. Segransan, and Y. C. Unruh
First author’s institution: School of Physics and Astronomy, St. Andrews, United Kingdom
Status: Accepted to MNRAS.

Three years ago we told you about Gliese 581 d, the first discovered terrestrial-mass planet in the so-called “habitable zone”. The article was called “Pack your suitcase?” Despite this title, here at Astrobites we sincerely hope you didn’t actually take off for this star system: earlier this month, a new study claimed this planet and one of its companions don’t actually exist. Instead, astronomers had been observing signals caused by the star itself, which they had attributed to additional planets in the system beyond the three that have been unambiguously detected.

Gliese 581 isn’t the only planetary system confounding astronomers! The potential planet orbiting Alpha Centauri B has been closely scrutinized recently with inconclusive results. The authors of today’s paper analyze another previously announced planet, CoRoT-7 d, and conclude that it is likely not real either. What’s going on here? To understand why this is happening, let’s look at how Haywood et al. studied this system.

Detecting Planets

From statistical analyses, we know there are about 100 billion planets in the galaxy. Of these, we’ve detected nearly 2,000 definite planets and have found an additional 4,000 “candidate planets.” Of these 6,000 planets, we’ve actually managed to take pictures of about 20. And that total includes the 8 planets in our own solar system! Most of the time when we detect a planet, we don’t actually see a picture of it. Instead, we infer its presence from observing its effects on its host star. When we detect a planet with the radial velocity method, we observe small changes in the velocity of a star as it and its planet orbit their common center of mass. When we detect a planet with the transit method, we observe small changes in the amount of light we receive from a star as the disk of the star is blocked by an orbiting planet. After detecting either of these effects, astronomers will announce the detection of a new planet.
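For a sense of the scales involved, the size of the stellar "wobble" measured by the radial velocity method can be estimated from the standard two-body formula (a textbook relation, not something from today's paper; the masses and periods below are solar system values used purely for illustration):

```python
import math

G = 6.674e-11  # gravitational constant, SI units


def rv_semi_amplitude(m_planet, m_star, period_s, e=0.0, sin_i=1.0):
    """Stellar reflex-velocity semi-amplitude K in m/s.

    Standard two-body result:
    K = (2*pi*G/P)^(1/3) * m_p * sin(i) / (M_s + m_p)^(2/3) / sqrt(1 - e^2)
    """
    return ((2 * math.pi * G / period_s) ** (1 / 3)
            * m_planet * sin_i
            / (m_star + m_planet) ** (2 / 3)
            / math.sqrt(1 - e ** 2))


M_SUN, M_JUP, M_EARTH = 1.989e30, 1.898e27, 5.972e24  # kg
YEAR = 3.156e7  # seconds

print(rv_semi_amplitude(M_JUP, M_SUN, 11.86 * YEAR))   # ~12.5 m/s
print(rv_semi_amplitude(M_EARTH, M_SUN, 1.0 * YEAR))   # ~0.09 m/s
```

A Jupiter analogue moves its star at roughly 12.5 m/s, while an Earth analogue induces only about 9 cm/s, which is why stellar activity signals of a few m/s can so easily mimic or mask small planets.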

While finding planets may sound simple, the stars do their best to make it tricky. Stars aren’t static objects: starspots come and go on the surface, evolving in time and rotating across the star’s surface. These starspots are cooler and fainter than the stellar surface, so they cause the amount of light we receive to decrease as they pass along our line of sight, much like a transit! Starspots can affect a radial velocity signal too: as the spot rotates across the star’s surface, it first moves towards us before reaching the center of the star and beginning to move away from us, blocking some of the star’s light and changing the shape of spectral lines. These small shifts can be mistakenly attributed to a planet that doesn’t exist. It gets worse, too: many spots on the surface of a star can slow down the star’s convection, causing the rate at which material bubbles up and down from the stellar surface to change. We observe this as a velocity change, and again can incorrectly attribute this to a planet if not treated properly.

Stellar rotation observed by Haywood et al. The authors observed the star for 75 days and detected significant photometric variability, which they attribute to starspots passing across the surface of the star. Figure 1 from Haywood et al.

Separating Planets from Noise

The CoRoT-7 system was uncovered in 2009 when the innermost planet, CoRoT-7 b, was found via the transit method. Follow-up radial velocity observations detected two additional planets. However, the host star is known to be very active, so Haywood et al. decided to analyze the system closely to ensure that all the planets are real, and not artifacts induced by starspots. As we said in the previous section, starspots affect both the radial velocity data and the transit photometry, making the two data sets correlated: each data point depends not only on what the planets are doing, but also on time-sensitive properties of the noise (the starspots).

To alleviate this problem, the authors obtained radial velocity data from HARPS at the same time they collected transit photometry from CoRoT. They then modeled the effects of both the planets and the noise using a Gaussian process, assuming the noise followed a simple functional form. They assumed the RV signal was composed of three components. One of these components is stellar variability, with a period similar to the rotation period. A second is changes in the RV signal due to changes in the convective properties of the star, which depend on the size of the spots. The third is a long-period effect caused by slow changes in the activity of the star. This noise term is used to account for additional astrophysical effects in the stellar atmosphere that may appear in the radial velocity data but not the photometry. If this noise is real, its effects must change on the same timescale as the stellar rotation, although they can have a different phase; the authors tie the periodicity of this noise to their fit to the rotation period.

By combining these three effects, the authors were able to develop a complete noise model for the star. They then simultaneously fit the RV and photometric observations to their model, which included two planets and the aforementioned noise parameters. By letting the parameters (mass, orbital phase, eccentricity, amplitude of each noise mode, stellar rotation period, even number of planets!) vary, the authors were able to find which model creates the most appropriate fit and how well we can measure each of these parameters.
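A common way to implement an activity model like the one described above is a Gaussian process with a quasi-periodic covariance function, whose hyperparameters encode the rotation period and the spot-evolution timescale. The sketch below is illustrative only: the kernel form and every number in it are assumptions for demonstration, not the actual model or values fitted by Haywood et al.

```python
import numpy as np


def quasi_periodic_kernel(t1, t2, amp, P_rot, l_evol, gamma):
    """Covariance between RV samples at times t1 and t2 (days).

    A periodic term at the stellar rotation period P_rot, damped by a
    squared-exponential decay on the spot-evolution timescale l_evol:
    the standard quasi-periodic activity kernel.
    """
    dt = t1[:, None] - t2[None, :]
    periodic = np.sin(np.pi * dt / P_rot) ** 2
    return amp ** 2 * np.exp(-dt ** 2 / (2 * l_evol ** 2) - gamma * periodic)


def gp_log_likelihood(t, rv, rv_err, **hyper):
    """Marginal log-likelihood of RV residuals under the GP noise model."""
    K = quasi_periodic_kernel(t, t, **hyper) + np.diag(rv_err ** 2)
    L = np.linalg.cholesky(K)                             # K = L L^T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, rv))  # K^{-1} rv
    return (-0.5 * rv @ alpha
            - np.sum(np.log(np.diag(L)))
            - 0.5 * len(t) * np.log(2 * np.pi))


# Illustrative setup: 75 nightly epochs (matching the 75-day light curve),
# an assumed 23-day rotation period, and fake RV residuals in m/s.
t = np.linspace(0.0, 75.0, 75)
rv = np.random.default_rng(0).normal(0.0, 2.0, t.size)
ll = gp_log_likelihood(t, rv, rv_err=np.full(t.size, 0.5),
                       amp=2.0, P_rot=23.0, l_evol=30.0, gamma=1.0)
```

In a full analysis the kernel hyperparameters are fitted jointly with the planetary parameters, so the data themselves decide how much of the signal is attributed to activity and how much to planets.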

The authors try to fit several models to the data. They fit a model which includes just stellar activity, one with 1 planet, one with 2 planets, and one with 3 planets. They find the 0- and 1-planet models cannot explain the data well, and that the 2-planet model is favored by a factor of 10 over the 3-planet model. Thus, with 90% confidence the authors conclude the 2-planet model (shown below) is the most appropriate, suggesting planet d may not exist.
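The two numbers quoted are consistent with each other: assuming equal prior odds between the models, a Bayes factor of 10 converts to a posterior probability of 10/(10+1), about 91% (a back-of-the-envelope check, not the authors' computation):

```python
bayes_factor = 10.0  # evidence ratio, 2-planet vs 3-planet model
posterior = bayes_factor / (bayes_factor + 1.0)  # assumes equal prior odds
print(f"{posterior:.0%}")  # 91%
```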

(Top) Best-fitting model and uncertainties, along with (bottom) residuals, for the radial velocity observations collected by the authors. The authors find that only two planets are necessary to fully explain this system when an appropriate noise model is applied. Figure 5 from Haywood et al.

Since they have very similar effects on our observations, starspots can easily masquerade as small planets. To differentiate between the two, our best hope is to model the noise in our observations simultaneously with the planet parameters. This hasn’t always been done, but the recent controversies over a few supposed planets may encourage astronomers to carefully model their noise before announcing a new planet. Now that Haywood et al. are on the case, we might hear about more planets in the future that, like Alderaan and Krypton, are relegated to ex-planet status.

Quantum Diaries

CERN through the eyes of a young scientist

Inspired by the event at the UNESCO headquarters in Paris that celebrated the anniversary of the signing of the CERN convention, Sophie Redford wrote about her impressions of joining CERN as a young researcher. A CERN fellow designing detectors for the future CLIC accelerator, she did her PhD at the University of Oxford, observing rare B decays with the LHCb experiment.

The “60 years of CERN” celebrations give us all the chance to reflect on the history of our organization. As a young scientist, the early years of CERN might seem remote. However, the continuity of CERN and its values connects this distant past to the present day. At CERN, the past isn’t so far away.

Of course, no matter when you arrive at CERN for the first time, it doesn’t take long to realize that you are in a place with a special history. On the surface, CERN can appear scruffy. Haphazard buildings produce a maze of long corridors, labelled with seemingly random numbers to test the navigation of newcomers. Auditoriums retain original artefacts: ashtrays and blackboards unchanged since the beginning, alongside the modern-day gadgetry of projectors and video-conferencing systems.

The theme of re-use continues underground, where older machines form the injection chain for new. It is here, in the tunnels and caverns buried below the French and Swiss countryside, where CERN spends its money. Accelerators and detectors, their immense size juxtaposed with their minute detail, constitute an unparalleled scientific experiment gone global. As a young scientist this is the stuff of dreams, and you can’t help but feel lucky to be a part of it.

If the physical situation of CERN seems unique, so is the sociological. The row of flags flying outside the main entrance is a colourful red herring, for aside from our diverse allegiances during international sporting events, nationality is meaningless inside CERN. Despite its location straddling international borders, despite our wallets containing two currencies and our heads many languages, scientific excellence is the only thing that matters here. This is a community driven by curiosity, where coffee and cooperation result in particle beams. At CERN we question the laws of our universe. Many answers are as yet unknown but our shared goal of discovery bonds us irrespective of age or nationality.

As a young scientist at CERN I feel welcome and valued; this is an environment where reason and logic rule. I feel privileged to profit from the past endeavour of others, and great pride to contribute to the future of that which others have started. I have learnt that together we can achieve extraordinary things, and that seemingly insurmountable problems can be overcome.

In many ways, the second 60 years of CERN will be nothing like the first. But by continuing to build on our past we can carry the founding values of CERN into the future, allowing the next generation of young scientists to pursue knowledge without borders.

By Sophie Redford

Peter Coles - In the Dark

Politics, Polls and Insignificance

In between various tasks I had a look at the news and saw a story about opinion polls that encouraged me to make another quick contribution to my bad statistics folder.

The piece concerned (in the Independent) includes the following statement:

A ComRes survey for The Independent shows that the Conservatives have dropped to 27 per cent, their lowest in a poll for this newspaper since the 2010 election. The party is down three points on last month, while Labour, now on 33 per cent, is up one point. Ukip is down one point to 17 per cent, with the Liberal Democrats up one point to eight per cent and the Green Party up two points to seven per cent.

The link added to ComRes is mine; the full survey can be found here. Unfortunately, the report, as is sadly almost always the case in surveys of this kind, neglects any mention of the statistical uncertainty in the poll. In fact the poll quoted is based on a telephone sample of just 1001 respondents. Suppose the fraction of the population having the intention to vote for a particular party is $p$. For a sample of size $n$ with $x$ respondents indicating that intention, one can straightforwardly estimate $p \simeq x/n$. So far so good, as long as there is no bias induced by the form of the question asked or in the selection of the sample, which for a telephone poll is doubtful.

A little bit of mathematics involving the binomial distribution yields the uncertainty in this estimate of $p$ in terms of the sampling error:

$\sigma = \sqrt{\frac{p(1-p)}{n}}$

For the sample size given and a value $p \simeq 0.33$, this amounts to a standard error of about 1.5%. About 95% of samples drawn from a population in which the true fraction is $p$ will yield an estimate within $p \pm 2\sigma$, i.e. within about 3% of the true figure. In other words, the typical variation between two samples drawn from the same underlying population is about 3%.
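The arithmetic here is easy to check with a few lines (using the sample size and Labour share quoted in the post):

```python
import math

n = 1001  # ComRes telephone sample size
p = 0.33  # estimated fraction intending to vote Labour

# Binomial sampling error on the estimated proportion
sigma = math.sqrt(p * (1 - p) / n)

print(f"standard error: {sigma:.1%}")            # about 1.5%
print(f"95% margin (2 sigma): {2 * sigma:.1%}")  # about 3.0%
```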

If you don’t believe my calculation then you could use ComRes’ own “margin of error calculator”. The UK electorate as of 2012 numbered 46,353,900, and a sample size of 1001 returns a margin of error of 3.1%. This figure is not quoted in the report, however.

Looking at the figures quoted in the report will tell you that all of the changes reported since last month’s poll are within the sampling uncertainty and are therefore consistent with no change at all in underlying voting intentions over this period.

A summary of the report posted elsewhere states:

A ComRes survey for the Independent shows that Labour have jumped one point to 33 per cent in opinion ratings, with the Conservatives dropping to 27 per cent – their lowest support since the 2010 election.

No! There’s no evidence of support for Labour having “jumped one point”, even if you could describe such a marginal change as a “jump” in the first place.

Statistical illiteracy is as widespread amongst politicians as it is amongst journalists, but the fact that silly reports like this are commonplace doesn’t make them any less annoying. After all, the idea of sampling uncertainty isn’t all that difficult to understand. Is it?

And with so many more important things going on in the world that deserve better press coverage than they are getting, why does a “quality” newspaper waste its valuable column inches on this sort of twaddle?

The Great Beyond - Nature blog

Space sex gecko experiment is safe – for now

Posted on behalf of Katia Moskvitch.

Phew. Five experimental geckos that were feared lost in space have phoned home, restoring hopes that research into their zero-gravity sex lives can go on.

Probing the sex lives of others, in space

Bjørn Christian Tørrissen/CC BY-SA 3.0

The four females and one male are on board a satellite as part of an experiment to investigate sexual activity and reproduction in microgravity, carried out by Russia’s space agency. Roscosmos launched the lizards using a six-tonne Foton-M4 rocket on 19 July. But the fate of the tiny cosmonauts became uncertain when their satellite briefly lost contact with ground control on Thursday 24 July.

Luckily, technicians managed to restore control on Saturday, and Roscosmos announced on its website that since then it has communicated with the satellite 17 times. “Contact is established, the prescribed commands have been conducted according to plan,” said Roscosmos chief Oleg Ostapenko.

Keeping the geckos company are Drosophila fruit flies, as well as mushrooms, plant seeds and various microorganisms that are also being studied. There is also a special vacuum furnace on board, which is being used to analyse the melting and solidification of metal alloys in microgravity.

Foton-M4 is set to carry out experiments over two months, and involves a “study of the effect of microgravity on sexual behaviour, the body of adult animals and embryonic development”, according to the website of the Institute of Medico-Biological Problems of the Russian Academy of Sciences, which has developed the project along with Roscosmos.

Specific aims of the Gecko-F4 mission include:

• Create the conditions for sexual activity, copulation and reproduction of geckos in orbit
• Film the geckos’ sex acts and potential egg-laying and maximise the likelihood that any eggs survive
• Detect possible structural and metabolic changes in the animals, as well as any eggs and foetuses

Scientists plan to perform additional experiments when Foton-M4 returns to Earth after its two-month mission. That is, assuming contact isn’t lost again. If contact with ground control were lost altogether, the satellite would stay in its 357-mile orbit for about four months, and then re-enter the atmosphere in an uncontrolled way and burn up.

Roscosmos engineers are now trying to figure out what led to the loss of control, with the main theory being that the satellite may have been hit by space debris. The geckos’ craft is located in low Earth orbit, which stretches from about 160 kilometres above the planet’s surface out to some 2,000 km. As a result, the intrepid lizards share the orbit with almost 20,000 objects, including more than 500 active satellites and the International Space Station, which circles the Earth at about 400 km above the surface.

It is not the first time Roscosmos has studied sex in zero gravity. In 2007, it sent a crew of geckos, newts, snails, Mongolian gerbils and cockroaches to space – and brought them back to Earth 12 days later. The cockroaches conceived while in space, and one, named Nadezhda, which means “hope” in Russian, became the first animal to give birth in space. Russian researcher Dmitry Atyakshin commented at the time that the roaches “run faster than ordinary cockroaches, and are much more energetic and resilient”.

Brace yourselves for super-geckos in September, when the current mission is due back on Earth.

Emily Lakdawalla - The Planetary Society Blog

Landsat 8 Looks at the Supermoon
Why did Landsat 8, an Earth-observing spacecraft, turn its unblinking eyes toward the July 12 supermoon?

July 28, 2014

The Great Beyond - Nature blog

NIH advocates gear up for budget fight

The US National Institutes of Health (NIH) would see its budget worries eased if a long-time political champion gets his way.

Senator Tom Harkin, the Iowa Democrat who leads the Senate panel that oversees the NIH, introduced legislation on 24 July that would ensure that the NIH’s budget never drops below its current US$29.9 billion. The bill also proposes that Congress increase the NIH’s budget by up to 10% for the next two years, and 5% each year for the next five years. By 2021, the agency’s budget would rise to $46.2 billion.
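A quick arithmetic check shows the quoted figures are consistent with the increases compounding in full (the year-by-year schedule below is my reading of the proposal, not spelled out in the bill text):

```python
# Project the NIH budget under the Harkin bill's maximum increases,
# assuming each year's raise compounds on the previous year's budget.
budget = 29.9                # current NIH budget, US$ billion
for _ in range(2):
    budget *= 1.10           # up to 10% per year for the first two years
for _ in range(5):
    budget *= 1.05           # 5% per year for the next five years
print(round(budget, 1))      # ≈ 46.2, matching the quoted 2021 figure
```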

The legislation is unusual in that it sets a minimum level for NIH funding regardless of the government’s total budget for a given year. That approach could pit the NIH against other agencies when money is scarce, a scenario that some agency supporters worry is not so hypothetical. A 2011 law known as the Budget Control Act caps total government spending to 2021 (although the caps have since been relaxed for 2014 and 2015).

The Harkin bill would allow increases for NIH beyond this cap. Some advocacy groups say that NIH is not the only biomedical agency that is in need of a boost. “The painful effects of austerity span beyond NIH across the entire health continuum,” Emily Holubowich, senior vice-president of the Coalition for Health Funding, wrote to Nature. “We support a balanced, comprehensive, permanent solution to end this era of austerity for all public health and core government functions.”

The legislation’s future is uncertain, however. Harkin is retiring at the end of this year, and Congress is working on a schedule shortened by the federal election in November. Lawmakers are not expected to finish work on a funding plan for the 2015 budget year — which begins on 1 October — until after the election.

Sean Carroll - Preposterous Universe

Quantum Sleeping Beauty and the Multiverse

Hidden in my papers with Chip Sebens on Everettian quantum mechanics is a simple solution to a fun philosophical problem with potential implications for cosmology: the quantum version of the Sleeping Beauty Problem. It’s a classic example of self-locating uncertainty: knowing everything there is to know about the universe except where you are in it. (Skeptic’s Play beat me to the punch here, but here’s my own take.)

The setup for the traditional (non-quantum) problem is the following. Some experimental philosophers enlist the help of a subject, Sleeping Beauty. She will be put to sleep, and a coin is flipped. If it comes up heads, Beauty will be awoken on Monday and interviewed; then she will (voluntarily) have all her memories of being awakened wiped out, and be put to sleep again. Then she will be awakened again on Tuesday, and interviewed once again. If the coin came up tails, on the other hand, Beauty will only be awakened on Monday. Beauty herself is fully aware ahead of time of what the experimental protocol will be.

So in one possible world (heads) Beauty is awakened twice, in identical circumstances; in the other possible world (tails) she is only awakened once. Each time she is asked a question: “What is the probability you would assign that the coin came up tails?”

Modified from a figure by Stuart Armstrong.

(Some other discussions switch the roles of heads and tails from my example.)

The Sleeping Beauty puzzle is still quite controversial. There are two answers one could imagine reasonably defending.

• “Halfer” — Before going to sleep, Beauty would have said that the probability of the coin coming up heads or tails would be one-half each. Beauty learns nothing upon waking up. She should assign a probability one-half to it having been tails.
• “Thirder” — If Beauty were told upon waking that the coin had come up heads, she would assign equal credence to it being Monday or Tuesday. But if she were told it was Monday, she would assign equal credence to the coin being heads or tails. The only consistent apportionment of credences is to assign 1/3 to each possibility, treating each possible waking-up event on an equal footing.

The Sleeping Beauty puzzle has generated considerable interest. It’s exactly the kind of wacky thought experiment that philosophers just eat up. But it has also attracted attention from cosmologists of late, because of the measure problem in cosmology. In a multiverse, there are many classical spacetimes (analogous to the coin toss) and many observers in each spacetime (analogous to being awakened on multiple occasions). Really the SB puzzle is a test-bed for cases of “mixed” uncertainties from different sources.

Chip and I argue that if we adopt Everettian quantum mechanics (EQM) and our Epistemic Separability Principle (ESP), everything becomes crystal clear. A rare case where the quantum-mechanical version of a problem is actually easier than the classical version.

In the quantum version, we naturally replace the coin toss by the observation of a spin. If the spin is initially oriented along the x-axis, we have a 50/50 chance of observing it to be up or down along the z-axis. In EQM that’s because we split into two different branches of the wave function, with equal amplitudes.

Our derivation of the Born Rule is actually based on the idea of self-locating uncertainty, so adding a bit more to it is no problem at all. We show that, if you accept the ESP, you are immediately led to the “thirder” position, as originally advocated by Elga. Roughly speaking, in the quantum wave function Beauty is awakened three times, and all of them are on a completely equal footing, and should be assigned equal credences. The same logic that says that probabilities are proportional to the amplitudes squared also says you should be a thirder.

But! We can put a minor twist on the experiment. What if, instead of waking up Beauty twice when the spin is up, we instead observe another spin. If that second spin is also up, she is awakened on Monday, while if it is down, she is awakened on Tuesday. Again we ask what probability she would assign that the first spin was down.

This new version has three branches of the wave function instead of two, as illustrated in the figure. And now the three branches don’t have equal amplitudes; the bottom one is 1/√2, while the top two are each (1/√2)² = 1/2. In this case the ESP simply recovers the Born Rule: the bottom branch has probability 1/2, while each of the top two has probability 1/4. And Beauty wakes up precisely once on each branch, so she should assign probability 1/2 to the initial spin being down. This gives some justification for the “halfer” position, at least in this slightly modified setup.

All very cute, but it does have direct implications for the measure problem in cosmology. Consider a multiverse with many branches of the cosmological wave function, and potentially many identical observers on each branch. Given that you are one of those observers, how do you assign probabilities to the different alternatives?

Simple. Each observer O_i appears on a branch with amplitude ψ_i, and every appearance gets assigned a Born-rule weight w_i = |ψ_i|². The ESP instructs us to assign a probability to each observer given by

$P(O_i) = w_i/(\sum_j w_j).$

It looks easy, but note that the formula is not trivial: the weights w_i will not in general add up to one, since they might describe multiple observers on a single branch and perhaps even at different times. This analysis, we claim, defuses the “Born Rule crisis” pointed out by Don Page in the context of these cosmological spacetimes.
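The normalization in the formula above can be sketched numerically, using the branch amplitudes from the modified experiment (a toy check of the arithmetic, not taken from the paper):

```python
import numpy as np

# Branch amplitudes in the modified experiment: spin-down is a single branch
# with amplitude 1/sqrt(2); spin-up splits again into Monday/Tuesday branches
# with amplitude (1/sqrt(2))**2 = 1/2 each.
amplitudes = np.array([1 / np.sqrt(2), 0.5, 0.5])
weights = amplitudes**2            # Born-rule weights w_i = |psi_i|^2
probs = weights / weights.sum()    # P(O_i) = w_i / sum_j w_j
# Beauty wakes exactly once per branch, so her credence that the first spin
# was down is probs[0] = 1/2, the "halfer" answer for this setup.
print(probs)
```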

Sleeping Beauty, in other words, might turn out to be very useful in helping us understand the origin of the universe. Then again, plenty of people already think that the multiverse is just a fairy tale, so perhaps we shouldn’t be handing them ammunition.

arXiv blog

How to Spot a Social Bot on Twitter

Social bots are sending a significant amount of information through the Twittersphere. Now there’s a tool to help identify them.

Back in 2011, a team from Texas A&M University carried out a cyber sting to trap nonhuman Twitter users that were polluting the Twittersphere with spam. Their approach was to set up “honeypot” accounts which posted nonsensical content that no human user would ever be interested in. Any account that retweeted this content, or friended the owner, must surely be a nonhuman user known as a social bot.

astrobites - astro-ph reader's digest

Groups and clusters: who turned off the AC?

Title: A volume-limited sample of X-ray galaxy groups and clusters – II. X-ray cavity dynamics.

First author’s institution: Institute of Astronomy, Cambridge, UK.

Status: Accepted to MNRAS.

Groups and clusters of galaxies, the regions of largest concentration of matter in the Universe, are also reservoirs of hot ionized gas. The intracluster gas emits in the high-energy region of the electromagnetic spectrum, X-rays, due to the deceleration of charged particles as they pass each other, a process known as thermal bremsstrahlung. For some reason, this gas is observed to be very hot (~10⁷–10⁸ K) and does not easily cool down by radiating energy away, defying theoretical predictions. The solution seems to be to add a bit of heating to the mix, but what causes it?

Several heating mechanisms have been proposed. The dominant one is believed to be radiation emitted by Active Galactic Nuclei (AGN) in the group or cluster. AGN are powerhouses in the central regions of certain galaxies as a result of emission associated with a supermassive black hole. The energy from the AGN creates cavities in the gas around them, like bubbles that rise from each active galaxy. The authors of this paper identify a sample of 49 groups and clusters for which they study the properties of their X-ray cavities to understand how this might prevent the intracluster gas from cooling.

Figure 1. An example of an X-ray image of a cluster (left panel) and the resulting map of inhomogeneities (right panel). The bubbles are indicated with white arrows. Figure A1 of Panagoulia et al.

The groups and clusters studied in this paper have gas cooling timescales of less than 3 Gyr. If no heating were present, the gas should be cooling efficiently, but it is not. To detect X-ray cavities, the authors use X-ray images of each group or cluster taken by the Chandra or XMM-Newton observatories. Since they are looking for inhomogeneities in the images, they smooth each image at two different scales and subtract the “more smoothed” version from the “less smoothed” version of the same object. This allows them to identify inhomogeneities in the X-ray emission, as in Figure 1.
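This smooth-and-subtract step is essentially an unsharp mask, and can be sketched on a synthetic image (a toy illustration of the technique, not the authors' pipeline; the smoothing scales are arbitrary):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Toy cluster image: smooth, centrally peaked X-ray emission with a small
# "cavity" (a local depression) carved out to one side.
y, x = np.mgrid[0:128, 0:128]
r = np.hypot(x - 64, y - 64)
image = np.exp(-r / 30.0)
r_cav = np.hypot(x - 80, y - 64)
image *= 1.0 - 0.5 * np.exp(-((r_cav / 5.0) ** 2))

# Unsharp mask: subtract a "more smoothed" map from a "less smoothed" one.
# Large-scale structure cancels; the cavity survives as a negative residual.
residual = gaussian_filter(image, sigma=2) - gaussian_filter(image, sigma=10)
print(residual[64, 80] < 0)  # the cavity centre stands out as a deficit
```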

Figure 2. The cavity power compared to the cooling luminosity. Cavities below the black line have some difficulty preventing the gas from cooling. Figure 6 of Panagoulia et al.

The authors find that 61% of the 49 groups and clusters present cavities. This percentage represents the fraction of time when the AGN is “on” and heating the gas. For clusters with short cooling times, the detection of cavities is limited by the resolution of the data, and hence this is only a lower limit on the AGN duty cycle.

Is AGN heating enough to prevent the gas from cooling? The authors address this question in Figure 2, where they compare the heating power of each observed cavity to the “cooling luminosity”, the luminosity of the gas within the radius where it would take 3 Gyr to cool. In other words, the authors use the size of the cavity and the inferred properties of the intracluster gas to obtain an estimate of the AGN heating and compare it to the amount of energy the gas is radiating away. The points correspond to the different observed cavities, and the lines represent equality between heating and cooling (under different assumptions). In general, cavities above the black line have sufficient energy to prevent gas from cooling. Cavities below it may have trouble balancing the cooling, and some other heating mechanism might be required. The fact that the points lie mostly within the range defined by the dashed lines is interpreted as evidence for continuous (rather than intermittent) bubble activity from the AGN.

Overall, the question of what mechanisms make up for all the heating remains open. Even though AGNs seem to be continuously pushing gas away through the creation of bubbles, in some groups and clusters, the authors find that the X-ray cavities are not efficient enough to prevent cooling of the gas.

Symmetrybreaking - Fermilab/SLAC

NOvA detector takes shape

On Thursday members of the NOvA neutrino experiment celebrated their progress with a visit to their massive detector in northern Minnesota.

In 2012, upon beholding the newly completed NOvA far-detector building in northern Minnesota, University of Minnesota physicist Marvin Marshak didn’t believe the collaboration would be able to adequately populate it. At the time, the mammoth structure, which is the length of two basketball courts and would house the future NOvA detector, impressed visitors with the full force of not only its size, but its emptiness.

“It was scary. We looked at this building and thought, ‘Are we really going to be able to fill this place up?’” says Marshak, NOvA laboratory director. “People looked like tiny little insects against the backdrop of the building.”

His worries were needless. On Thursday, the NOvA collaboration celebrated the new detector, which now fills the building nicely, in Ash River, Minnesota.

The celebration came near the conclusion of NOvA’s collaboration meeting, which took place in Minneapolis. Attendees took a one-day excursion 280 miles north to see the far detector.

The collaboration also discussed the beginning of data taking with the full detectors in the next few weeks. A celebration at Fermilab is planned for later this year.

NOvA, a Fermilab-hosted neutrino experiment, makes use of two detectors: a smaller, underground detector at Fermilab and the much larger, 14-kiloton detector in Minnesota. The neutrino beam, originating at Fermilab through the NuMI beamline, travels 500 miles from the near detector through the Earth to the far detector.

NOvA scientists will work to uncover the true mass ordering of neutrinos’ three types. They’ll also look for evidence of CP violation, which could help explain why there is so much more matter than antimatter in our universe and, thus, why we’re here.

“We’re going to kick all the physics analyses into high gear and get ready for first publications,” says Indiana University’s Mark Messier, NOvA co-spokesperson. “We hope to have first results by the end of the year.”

It’s been a long time coming. Researchers submitted a letter of intent to show their interest in a new neutrino experiment in 2002. In the years since, the collaboration has been hard at work designing, developing, producing and installing hardware, software, fiber optics and even the glue that would hold the kiloton-scale blocks’ components together.

With almost all of the modules of the detector already taking data, it’s a new era for NOvA and the Fermilab neutrino program.

“We’re excited to get this experiment up and running; we’ve been working toward this for a long time,” says Fermilab’s Pat Lukens, manager for far detector assembly.

“For at least the next 10 years, there are only two long-baseline neutrino beam experiments in the world—NOvA and [Japanese experiment] T2K,” Marshak says. “Some of the answers we’re looking for are going to come from the experiments that we have right now.”

A version of this article was published in Fermilab Today.

Like what you see? Sign up for a free subscription to symmetry!

Peter Coles - In the Dark

In Thunder, Lightning and in Rain..

A while before 6am this morning I was woken up by the sound of fairly distant thunder to the West of my flat. I left the windows open – they’ve been open all the time in this hot weather – and dozed while rumblings continued. Just after six there was a terrifically bright flash and an instantaneous bang that set car alarms off in my street; lightning must have struck a building very close. Then the rain arrived. I got up to close the windows against the torrential downpour, at which point I noticed that water was coming in through the ceiling. A further inspection revealed another leak in the cupboard where the boiler lives and another which had water dripping from a light fitting. A frantic half hour with buckets and mops followed, but I had to leave to get to work so I just left buckets under the drips and off I went into the deluge to get soaked.

Here is the map of UK rain at 07:45 am, with Brighton in the thick of it:

I made it up to campus (wet and late); it’s still raining but hopefully will settle down soon. This is certainly turning into a summer of extremes!

Tommaso Dorigo - Scientificblogging

More On The Alleged WW Excess From The LHC

This is just a short update on the saga of the anomalous excess of W-boson-pair production that the ATLAS and CMS collaborations have reported in their 7-TeV and 8-TeV proton-proton collision data. A small bit of information which I was unaware of, and which can be added to the picture.

Clifford V. Johnson - Asymptotia

Hey, You…
Today (Sunday) I devoted my work time to finishing an intensely complicated page. It is the main "establishing shot" type page for a story set in a Natural History Museum. This is another "don't do" if you want to save yourself time, since such a location results in lots of drawings of bones and stuffed animals and people looking at bones and stuffed animals. (The other big location "don't do" from an earlier post was cityscapes with lots of flashy buildings with endless windows to draw. :) ) Perhaps annoyingly, I won't show you the huge panels filled with such things, and instead show you a small corner panel of the type that people might not look at much (because there are no speech bubbles and so forth). This is seconds before our characters meet. A fun science-filled conversation will follow...(Yes these are the same characters from another story I've shown you extracts from.) [Update: I suppose I ought to explain the cape? It is a joke. I thought I'd have a [...] Click to continue reading this post

July 27, 2014

The n-Category Cafe

Basic Category Theory

My new book is out!

It’s an introductory category theory text, and I can prove it exists: there’s a copy right in front of me. (You too can purchase a proof.) Is it unique? Maybe. Here are three of its properties:

• It doesn’t assume much.
• It sticks to the basics.
• It’s short.

I want to thank the $n$-Café patrons who gave me encouragement during my last week of work on this. As I remarked back then, some aspects of writing a book — even a short one — require a lot of persistence.

But I also want to take this opportunity to make a suggestion. There are now quite a lot of introductions to category theory available, of various lengths, at various levels, and in various styles. I don’t kid myself that mine is particularly special: it’s just what came out of my individual circumstances, as a result of the courses I’d taught. I think the world has plenty of introductions to category theory now.

What would be really good is for there to be a nice second book on category theory. Now, there are already some resources for advanced categorical topics: for instance, in my book, I cite both the $n$Lab and Borceux’s three-volume Handbook of Categorical Algebra for this. But useful as those are, what we’re missing is a shortish book that picks up where Categories for the Working Mathematician leaves off.

Let me be more specific. One of the virtues of Categories for the Working Mathematician (apart from being astoundingly well-written) is that it’s selective. Mac Lane covers a lot in just 262 pages, and he does so by repeatedly making bold choices about what to exclude. For instance, he implicitly proves that for any finitary algebraic theory, the category of algebras has all colimits — but he does so simply by proving it for groups, rather than explicitly addressing the general case. (After all, anyone who knows what a finitary algebraic theory is could easily generalize the proof.) He also writes briskly: few words are wasted.

I’m imagining a second book on category theory of a similar length to Categories for the Working Mathematician, and written in the same brisk and selective manner. Over beers five years ago, Nicola Gambino and I discussed what this hypothetical book ought to contain. I’ve lost the piece of paper I wrote it down on (thus, Nicola is absolved of all blame), but I attempted to recreate it sometime later. Here’s a tentative list of chapters, in no particular order:

• Enriched categories
• 2-categories (and a bit on higher categories)
• Topos theory (obviously only an introduction) and categorical set theory
• Fibrations
• Bimodules, Morita equivalence, Cauchy completeness and absolute colimits
• Operads and Lawvere theories
• Categorical logic (again, just a bit) and internal category theory
• Derived categories
• Flat functors and locally presentable categories
• Ends and Kan extensions (already in Mac Lane’s book, but maybe worth another pass).

Someone else should definitely write such a book.

Christian P. Robert - xi'an's og

PMC for combinatoric spaces

I received this interesting [edited] email from Xiannian Fan at CUNY:

I am trying to use PMC to solve Bayesian network structure learning problem (which is in a combinatorial space, not continuous space).

In PMC, the proposal distributions q_{i,t} can be very flexible, even specific to each iteration and each instance. My problem occurs due to the combinatorial space.

For importance sampling, the requirement for proposal distribution, q, is:

support(p) ⊂ support(q)    (*)

For PMC, what is the support of the proposal distribution in iteration t? Is it

support(p) ⊂ ∪_i support(q_{i,t})    (**)

or does (*) apply to every q_{i,t}?

For continuous problems, this is not a big issue: we can use a Normal random-walk proposal to make local moves satisfying (*). But for combinatorial search, a local move only reaches a finite set of states, which does not satisfy (*). For example, for a permutation (1,3,2,4), a random swap has only choose(4,2)=6 neighbouring states.

Fairly interesting question about population Monte Carlo (PMC), a sequential version of importance sampling we worked on with French colleagues in the early 2000s. (The name population Monte Carlo comes from Iba, 2000.) While MCMC samplers do not have to cover the whole support of p at each iteration, it is much harder for importance samplers, as their core justification is to provide an unbiased estimator for all integrals of interest. Thus, when using the PMC estimate,

(1/n) ∑_{i,t} {p(x_{i,t})/q_{i,t}(x_{i,t})} h(x_{i,t}),  x_{i,t} ~ q_{i,t}(x)

this estimator is only unbiased when the supports of the q_{i,t}’s all contain the support of p. The only other cases I can think of are

1. associating the q_{i,t}’s with a partition S_{i,t} of the support of p and using instead

∑_{i,t} {p(x_{i,t})/q_{i,t}(x_{i,t})} h(x_{i,t}),  x_{i,t} ~ q_{i,t}(x)

2. resorting to AMIS under the assumption (**) and using instead

(1/n) ∑_{i,t} {p(x_{i,t})/∑_{j,t} q_{j,t}(x_{i,t})} h(x_{i,t}),  x_{i,t} ~ q_{i,t}(x)

but I am open to further suggestions!
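For concreteness, the basic estimator itself (without the adaptation step) can be sketched in a toy continuous setting of my own devising, not taken from the email: target p = N(0,1), h(x) = x², and Normal proposals with full support so that (*) holds for every q_{i,t}:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Toy setup: target p = N(0,1) and h(x) = x^2, so the true value of the
# integral is E_p[h] = 1. Every proposal q_{i,t} = N(mu_{i,t}, 1) has full
# support, so condition (*) holds and the estimator is unbiased.
mus = rng.uniform(-2.0, 2.0, size=(200, 50))   # 200 iterations, 50 proposals
x = rng.normal(loc=mus, scale=1.0)             # x_{i,t} ~ q_{i,t}
w = norm.pdf(x) / norm.pdf(x, loc=mus)         # p(x_{i,t}) / q_{i,t}(x_{i,t})
estimate = np.mean(w * x**2)                   # (1/n) sum_{i,t} w h(x_{i,t})
print(estimate)                                # close to 1
```

The combinatorial difficulty in the question is precisely that a local-move proposal gives each q_{i,t} a support of only a handful of states, so no single term in this average satisfies (*).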

Filed under: Statistics, University life Tagged: AMIS, CUNY, importance sampling, Monte Carlo Statistical Methods, PMC, population Monte Carlo, simulation, unbiasedness

Peter Coles - In the Dark

Demolition at Didcot

As someone who has spent his fair share of time traveling backwards and forwards on the First Great Western railway line between Cardiff (or Swindon) and London, it seems appropriate to note that the environs of Didcot Parkway station (which lies on the main line) will look rather different next time I do that journey. In the early hours of this morning, three of the six enormous cooling towers came tumbling down:

I gather the other three are also scheduled for demolition, although I doubt I’ll be able to attend that event in person either!

Quantum Diaries

What are Sterile Neutrinos?

Sterile Neutrinos in Under 500 Words

Hi Folks,

In the Standard Model, we have three groups of particles: (i) force carriers, like photons and gluons; (ii) matter particles, like electrons, neutrinos and quarks; and (iii) the Higgs. Each force carrier is associated with a force. For example: photons are associated with electromagnetism, the W and Z bosons are associated with the weak nuclear force, and gluons are associated with the strong nuclear force. In principle, all particles (matter, force carriers, the Higgs) can carry a charge associated with some force. If this is ever the case, then the charged particle can absorb or radiate a force carrier.

Credit: Wikipedia

As a concrete example, consider electrons and top quarks. Electrons carry an electric charge of “-1” and a top quark carries an electric charge of “+2/3”. Both the electron and top quark can absorb/radiate photons, but since the top quark’s electric charge is smaller in magnitude than the electron’s, it will not absorb/emit a photon as often as an electron. In a similar vein, the electron carries no “color charge”, the charge associated with the strong nuclear force, whereas the top quark does carry color and interacts via the strong nuclear force. Thus, electrons have no idea gluons even exist but top quarks can readily emit/absorb them.

Neutrinos possess a weak nuclear charge and hypercharge, but no electric or color charge. This means that neutrinos can absorb/emit W and Z bosons and nothing else. Neutrinos are invisible to photons (particles of light) as well as gluons (particles of the color force). This is why it is so difficult to observe neutrinos: the only way to detect a neutrino is through the weak nuclear interactions, which are much feebler than electromagnetism or the strong nuclear force.

Sterile neutrinos are like regular neutrinos: they are massive (spin-1/2) matter particles that do not possess electric or color charge. The difference, however, is that sterile neutrinos do not carry weak nuclear charge or hypercharge either. In fact, they do not carry any charge, for any force. This is why they are called “sterile”: they are free from the influences of the Standard Model forces.

Credit: somerandompearsonsblog.blogspot.com

The properties of sterile neutrinos are simply astonishing. For example: since they have no charge of any kind, they can in principle be their own antiparticles (the infamous “sterile Majorana neutrino”). As they are not associated with either the strong nuclear scale or the electroweak symmetry breaking scale, sterile neutrinos can, in principle, have an arbitrarily large or small mass. In fact, very heavy sterile neutrinos might even be dark matter, though this is probably not the case. However, since sterile neutrinos do have mass, and at low energies they act just like regular Standard Model neutrinos, they can participate in neutrino flavor oscillations. It is through this subtle effect that we hope to find sterile neutrinos, if they do exist.
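The oscillation effect itself can be sketched with the standard two-flavor vacuum formula (the parameter values below are purely illustrative, not tied to any particular experiment or to sterile-neutrino fits):

```python
import numpy as np

# Two-flavor vacuum oscillation probability (standard formula):
#   P(nu_a -> nu_b) = sin^2(2*theta) * sin^2(1.27 * dm2 * L / E)
# with dm2 in eV^2, L in km, and E in GeV.
def oscillation_probability(theta, dm2, L, E):
    return np.sin(2.0 * theta) ** 2 * np.sin(1.27 * dm2 * L / E) ** 2

# Illustrative atmospheric-scale parameters over an 810 km baseline at 2 GeV:
P = oscillation_probability(theta=0.72, dm2=2.4e-3, L=810.0, E=2.0)
print(P)
```

An extra sterile state would add another mass splitting and mixing angle to this picture, subtly distorting the oscillation pattern; that distortion is the experimental handle.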

Credit: Kamioka Observatory/ICRR/University of Tokyo

Until next time!

Happy Colliding,

Richard (@bravelittlemuon)

Peter Coles - In the Dark

Night hath no wings

Night hath no wings to him that cannot sleep;
And Time seems then not for to fly, but creep;
Slowly her chariot drives, as if that she
Had broke her wheel, or crack’d her axletree.
Just so it is with me, who list’ning, pray
The winds to blow the tedious night away,
That I might see the cheerful peeping day.
Sick is my heart; O Saviour! do Thou please
To make my bed soft in my sicknesses;
Lighten my candle, so that I beneath
Sleep not for ever in the vaults of death;
Let me thy voice betimes i’ th’ morning hear;
Call, and I’ll come; say Thou the when and where:
Draw me but first, and after Thee I’ll run,
And make no one stop till my race be done.

by Robert Herrick (1591-1674)

Christian P. Robert - xi'an's og

off to Bangalore [#2]

While I was trying to find a proper window to take a picture of the mountains of Eastern Turkey, an Air France flight attendant suggested I try the view from the pilots’ cockpit! I thought she was joking but, after putting a request to the captain, she came to walk me there and I had a fantastic five minutes with the pilots, chatting and taking unconstrained views of the region of Van, as we were nearing Turkey. I was actually most surprised at the very possibility of entering the cockpit as I thought it was now completely barred to passengers. Thanks then to the Air France crew that welcomed me there!

Filed under: Mountains, pictures, Travel Tagged: Air France, cockpit, Mount Süphan, Turkey, Van Lake

July 26, 2014

Quantum Diaries

A Physicist and Historian Walk Into a Coffee Shop

It’s Saturday, so I’m at the coffee shop working on my thesis again. It’s become a tradition over the last year that I meet a writer friend each week, we catch up, have something to drink, and sit down for a few hours of good-quality writing time.

The work desk at the coffee shop: laptop, steamed pork bun, and rosebud latte.

We’ve gotten to know the coffee shop really well over the course of this year. It’s pretty new in the neighborhood, but dark and hidden enough that business is slow, and we don’t feel bad keeping a table for several hours. We have our favorite menu items, but we’ve tried most everything by now. Some mornings, the owner’s family comes in, and the kids watch cartoons at another table.

I work on my thesis mostly, or sometimes I’ll work on analysis that spills over from the week, or I’ll check on some scheduled jobs running on the computing cluster.

My friend Jason writes short stories, works on revising his novel (magical realism in ancient Egypt in the reign of Rameses XI), or drafts posts for his blog about the puzzles of the British constitution. We trade tips on how to organize notes and citations, and how to stay motivated. So I’ve been hearing a lot about the cultural difference between academic work in the humanities and the sciences. One of the big differences is the level of citation that’s expected.

As a particle physicist, when I write a paper it’s very clear which experiment I’m writing about. I only write about one experiment at a time, and I typically focus on a very small topic. Because of that, I’ve learned that the standard for making new claims is that you usually make one new claim per paper, and it’s highlighted in the abstract, introduction, and conclusion with a clear phrase like “the new contribution of this work is…” It’s easy to separate which work you claim as your own and which work is from others, because anything outside “the new contribution of this work” belongs to others. A single citation for each external experiment should suffice.

For academic work in history, the standard is much different: the writing itself is much closer to the original research. As a start, you’ll need a citation for each quote, going to sources that are as primary as you can get your hands on. The stranger idea for me is that you also need a citation for each and every idea of analysis that someone else has come up with, and that a statement without a citation is automatically claimed as original work. This shows up in the difference between Jason’s posts about modern constitutional issues and historical ones: the historical ones have huge source lists, while the modern ones are content with a few hyperlinks.

In both cases, things that are “common knowledge” don’t need to be cited, like the fact that TeV cosmic rays exist (they do) or the year that Elizabeth I ascended the throne (1558).

There’s a difference in the number of citations between modern physics research and history research. Is that because of the timing (historical versus modern) or the subject matter? Do they have different amounts of common knowledge? For modern topics in physics and in history, the sources are available online, so a hyperlink is a perfect reference, even in a formal post. By that standard, all Quantum Diaries posts should be ok with the hyperlink citation model. But even in those cases, Jason puts footnoted citations to modern articles in the JSTOR database, and uses more citations overall.

Another cool aspect of our coffee shop is that the music is sometimes ridiculous, and it interrupts my thoughts if I get stuck in some esoteric bog. There’s an oddly large sample of German covers of 30s and 40s showtunes. You haven’t lived until you’ve heard “The Lady is a Tramp” in German while calculating oscillation probabilities. I’m kidding. Mostly.

Jason has shown me a different way of handling citations, and I’ve taught him some of the basics of HTML, so now his citations can appear as hyperlinks to the references list!

As habits go, I’m proud of this social coffee shop habit. I default to getting stuff done, even if I’m feeling slightly off or uninspired.  The social reward of hanging out makes up for the slight activation energy of getting off my couch, and once I’m out of the house, it’s always easier to focus.  I miss prime Farmers’ Market time, but I could go before we meet. The friendship has been a wonderful supportive certainty over the last year, plus I get some perspective on my field compared to others.

Tommaso Dorigo - Scientificblogging

A Useful Approximation For The Tail Of A Gaussian
This is just a short post to report about a useful paper I found by preparing for a talk I will be giving next week at the 3rd International Conference on New Frontiers in Physics, in the pleasant setting of the Orthodox Academy of Crete, near Kolympari.

My talk will be titled "Extraordinary Claims: the 0.000029% Solution", making reference to the 5-sigma "discovery threshold" that has become a well-known standard for reporting the observation of new effects or particles in high-energy physics and astrophysics.

Clifford V. Johnson - Asymptotia

Pot Luck
Here in Aspen there was a pleasant party over at the apartment of one of the visiting physicists this evening. I know it seems odd, but it has been a while since I've been at a party with a lot of physicists (I'm not counting the official dinners at the Strings conference a few weeks back), and I enjoyed it. I heard a little about what some old friends were up to, and met some spouses and learned what they do, and so forth. For the first time, I think, I spoke at length to some curious physicists about the graphic book project, and the associated frustrating adventures in the publishing world (short version: most people love it, but they just don't want to take a risk on an unusual project...), and they were excited about it, which was nice of them. It was a pot luck, and so although I was thinking I'd be tired and just take along a six-pack of beer, by lunchtime I decided that I'd make a little something and take it along. Then, as I tend to do, it became two little somethings...and I went and bought the ingredients at the supermarket nearby and worked down at the centre until later. Well, first I made a simple syrup from sugar and water and muddled and worried a lot of tarragon into it. Then in the evening, there was a lot of peeling and chopping. This is usually one of my favourite things, but the knives in the apartment I am staying in are as blunt as sticks of warm butter, and so chopping was long and fretful. (And dangerous... don't people realise that blunt knives are actually more dangerous than sharp ones?) [...] Click to continue reading this post

July 25, 2014

arXiv blog

An Indoor Positioning System Based On Echolocation

GPS doesn’t work indoors. Can a bat-like echolocation system take its place?

The satellite-based global positioning system has revolutionised the way humans interact with our planet. But a serious weakness is that GPS doesn’t work indoors. Consequently, researchers and engineers have been studying various ways to work out position indoors in a way that is simple and inexpensive.

The n-Category Cafe

The Ten-Fold Way (Part 2)

How can we discuss all the kinds of matter described by the ten-fold way in a single setup?

It’s a bit tough, because 8 of them are fundamentally ‘real’ while the other 2 are fundamentally ‘complex’. Yet they should fit into a single framework, because there are 10 super division algebras over the real numbers, and each kind of matter is described using a super vector space — or really a super Hilbert space — with one of these super division algebras as its ‘ground field’.

Combining physical systems is done by tensoring their Hilbert spaces… and there does seem to be a way to do this even with super Hilbert spaces over different super division algebras. But what sort of mathematical structure can formalize this?

Here’s my current attempt to solve this problem. I’ll start with a warmup case, the threefold way. In fact I’ll spend most of my time on that! Then I’ll sketch how the ideas should extend to the tenfold way.

Fans of lax monoidal functors, Deligne’s tensor product of abelian categories, and the collage of a profunctor will be rewarded for their patience if they read the whole article. But the basic idea is supposed to be simple: it’s about a multiplication table.

The $\mathbb{3}$-fold way

First of all, notice that the set

$\mathbb{3} = \{1, 0, -1\}$

is a commutative monoid under ordinary multiplication:

$\begin{array}{rrrr} \mathbf{\times} & \mathbf{1} & \mathbf{0} & \mathbf{-1} \\ \mathbf{1} & 1 & 0 & -1 \\ \mathbf{0} & 0 & 0 & 0 \\ \mathbf{-1} & -1 & 0 & 1 \end{array}$

Next, note that there are three (associative) division algebras over the reals: $\mathbb{R}$, $\mathbb{C}$ or $\mathbb{H}$. We can equip a real vector space with the structure of a module over any of these algebras. We’ll then call it a real, complex or quaternionic vector space.

For the real case, this is entirely dull. For the complex case, this amounts to giving our real vector space $V$ a complex structure: a linear operator $i : V \to V$ with $i^2 = -1$. For the quaternionic case, it amounts to giving $V$ a quaternionic structure: a pair of linear operators $i, j : V \to V$ with

$i^2 = j^2 = -1, \qquad i j = -j i$

We can then define $k = i j$.

The terminology ‘quaternionic vector space’ is a bit quirky, since the quaternions aren’t a field, but indulge me. $\mathbb{H}^n$ is a quaternionic vector space in an obvious way. $n \times n$ quaternionic matrices act by multiplication on the right as ‘quaternionic linear transformations’ — that is, left module homomorphisms — of $\mathbb{H}^n$. Moreover, every finite-dimensional quaternionic vector space is isomorphic to $\mathbb{H}^n$. So it’s really not so bad! You just need to pay some attention to left versus right.
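Since the defining relations above are so concrete, they are easy to check numerically. This is my own sketch, not from the post, using the standard model of the quaternions as 2 × 2 complex matrices:

```python
import numpy as np

# A standard 2x2 complex matrix model of the quaternions (an assumption of
# this sketch, not something the post specifies):
I2 = np.eye(2)
qi = np.array([[1j, 0], [0, -1j]])    # plays the role of i
qj = np.array([[0, 1], [-1, 0]])      # plays the role of j
qk = qi @ qj                          # define k = ij, as in the post

# The defining relations: i^2 = j^2 = -1 and ij = -ji.
assert np.allclose(qi @ qi, -I2)
assert np.allclose(qj @ qj, -I2)
assert np.allclose(qi @ qj, -(qj @ qi))
assert np.allclose(qk @ qk, -I2)      # k^2 = -1 then follows
```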

Now: I claim that given two vector spaces of any of these kinds, we can tensor them over the real numbers and get a vector space of another kind. It goes like this:

$\begin{array}{cccc} \mathbf{\otimes} & \mathbf{real} & \mathbf{complex} & \mathbf{quaternionic} \\ \mathbf{real} & real & complex & quaternionic \\ \mathbf{complex} & complex & complex & complex \\ \mathbf{quaternionic} & quaternionic & complex & real \end{array}$

You’ll notice this has the same pattern as the multiplication table we saw before:

$\begin{array}{rrrr} \mathbf{\times} & \mathbf{1} & \mathbf{0} & \mathbf{-1} \\ \mathbf{1} & 1 & 0 & -1 \\ \mathbf{0} & 0 & 0 & 0 \\ \mathbf{-1} & -1 & 0 & 1 \end{array}$

So:

• $\mathbb{R}$ acts like 1.
• $\mathbb{C}$ acts like 0.
• $\mathbb{H}$ acts like -1.
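This correspondence can be verified by brute force. Here is a little script (my own sketch, not from the post) encoding both tables and comparing them:

```python
# Tensor-product table of kinds of vector space, transcribed from above.
tensor = {
    ('R', 'R'): 'R', ('R', 'C'): 'C', ('R', 'H'): 'H',
    ('C', 'R'): 'C', ('C', 'C'): 'C', ('C', 'H'): 'C',
    ('H', 'R'): 'H', ('H', 'C'): 'C', ('H', 'H'): 'R',
}

code = {'R': 1, 'C': 0, 'H': -1}      # the claimed dictionary
decode = {1: 'R', 0: 'C', -1: 'H'}

# Tensoring kinds should match multiplying their codes in {1, 0, -1}.
assert all(decode[code[a] * code[b]] == c for (a, b), c in tensor.items())
```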

There are different ways to understand this, but a nice one is to notice that if we have algebras $A$ and $B$ over some field, and we tensor an $A$-module and a $B$-module (over that field), we get an $A \otimes B$-module. So, we should look at this ‘multiplication table’ of real division algebras:

$\begin{array}{lrrr} \mathbf{\otimes} & \mathbf{\mathbb{R}} & \mathbf{\mathbb{C}} & \mathbf{\mathbb{H}} \\ \mathbf{\mathbb{R}} & \mathbb{R} & \mathbb{C} & \mathbb{H} \\ \mathbf{\mathbb{C}} & \mathbb{C} & \mathbb{C} \oplus \mathbb{C} & \mathbb{C}[2] \\ \mathbf{\mathbb{H}} & \mathbb{H} & \mathbb{C}[2] & \mathbb{R}[4] \end{array}$

Here $\mathbb{C}[2]$ means the 2 × 2 complex matrices viewed as an algebra over $\mathbb{R}$, and $\mathbb{R}[4]$ means the 4 × 4 real matrices.

What’s going on here? Naively you might have hoped for a simpler table, which would have instantly explained my earlier claim:

$\begin{array}{lrrr} \mathbf{\otimes} & \mathbf{\mathbb{R}} & \mathbf{\mathbb{C}} & \mathbf{\mathbb{H}} \\ \mathbf{\mathbb{R}} & \mathbb{R} & \mathbb{C} & \mathbb{H} \\ \mathbf{\mathbb{C}} & \mathbb{C} & \mathbb{C} & \mathbb{C} \\ \mathbf{\mathbb{H}} & \mathbb{H} & \mathbb{C} & \mathbb{R} \end{array}$

This isn’t true, but it’s ‘close enough to true’. Why? Because we always have a god-given algebra homomorphism from the naive answer to the real answer! The interesting cases are these:

$\mathbb{C} \to \mathbb{C} \oplus \mathbb{C} \qquad \mathbb{C} \to \mathbb{C}[2] \qquad \mathbb{R} \to \mathbb{R}[4]$

where the first is the diagonal map $a \mapsto (a,a)$, and the other two send numbers to the corresponding scalar multiples of the identity matrix.

So, for example, if $V$ and $W$ are $\mathbb{C}$-modules, then their tensor product (over the reals! — all tensor products here are over $\mathbb{R}$) is a module over $\mathbb{C} \otimes \mathbb{C} \cong \mathbb{C} \oplus \mathbb{C}$, and we can then pull that back via the diagonal homomorphism to get a right $\mathbb{C}$-module.

What’s really going on here?

There’s a monoidal category $\mathrm{Alg}_{\mathbb{R}}$ of algebras over the real numbers, where the tensor product is the usual tensor product of algebras. The monoid $\mathbb{3}$ can be seen as a monoidal category with 3 objects and only identity morphisms. And I claim this:

Claim. There is an oplax monoidal functor $F : \mathbb{3} \to \mathrm{Alg}_{\mathbb{R}}$ with $\begin{array}{ccl} F(1) &=& \mathbb{R} \\ F(0) &=& \mathbb{C} \\ F(-1) &=& \mathbb{H} \end{array}$

What does ‘oplax’ mean? Some readers of the $n$-Category Café eat oplax monoidal functors for breakfast and are chortling with joy at how I finally summarized everything I’d said so far in a single terse sentence! But others of you see ‘oplax’ and get a queasy feeling.

The key idea is that when we have two monoidal categories $C$ and $D$, a functor $F : C \to D$ is ‘oplax’ if it preserves the tensor product, not up to isomorphism, but up to a specified morphism. More precisely, given objects $x, y \in C$ we have a natural transformation

$F_{x,y} : F(x \otimes y) \to F(x) \otimes F(y)$

If you had a ‘lax’ functor this would point the other way, and they’re a bit more popular… so when it points the opposite way it’s called ‘oplax’.

(In the lax case, $F_{x,y}$ should probably be called the laxative, but we’re not doing that case, so I don’t get to make that joke.)

This morphism $F_{x,y}$ needs to obey some rules, but the most important one is that using it twice gives two ways to get from $F(x \otimes y \otimes z)$ to $F(x) \otimes F(y) \otimes F(z)$, and these must agree.

Let’s see how this works in our example… at least in one case. I’ll take the trickiest case. Consider

$F_{0,0} : F(0 \cdot 0) \to F(0) \otimes F(0),$

that is:

$F_{0,0} : \mathbb{C} \to \mathbb{C} \otimes \mathbb{C}$

There are, in principle, two ways to use this to get a homomorphism

$F(0 \cdot 0 \cdot 0) \to F(0) \otimes F(0) \otimes F(0)$

or in other words, a homomorphism

$\mathbb{C} \to \mathbb{C} \otimes \mathbb{C} \otimes \mathbb{C}$

where remember, all tensor products are taken over the reals. One is

$\mathbb{C} \stackrel{F_{0,0}}{\longrightarrow} \mathbb{C} \otimes \mathbb{C} \stackrel{1 \otimes F_{0,0}}{\longrightarrow} \mathbb{C} \otimes (\mathbb{C} \otimes \mathbb{C})$

and the other is

$\mathbb{C} \stackrel{F_{0,0}}{\longrightarrow} \mathbb{C} \otimes \mathbb{C} \stackrel{F_{0,0} \otimes 1}{\longrightarrow} (\mathbb{C} \otimes \mathbb{C}) \otimes \mathbb{C}$

I want to show they agree (after we rebracket the threefold tensor product using the associator).

Unfortunately, so far I have described $F_{0,0}$ in terms of an isomorphism

$\mathbb{C} \otimes \mathbb{C} \cong \mathbb{C} \oplus \mathbb{C}$

Using this isomorphism, $F_{0,0}$ becomes the diagonal map $a \mapsto (a,a)$. But now we need to really understand $F_{0,0}$ a bit better, so I’d better say what isomorphism I have in mind! I’ll use the one that goes like this:

$\begin{array}{ccl} \mathbb{C} \otimes \mathbb{C} &\to& \mathbb{C} \oplus \mathbb{C} \\ 1 \otimes 1 &\mapsto& (1,1) \\ i \otimes 1 &\mapsto& (i,i) \\ 1 \otimes i &\mapsto& (i,-i) \\ i \otimes i &\mapsto& (-1,1) \end{array}$

This may make you nervous, but it truly is an isomorphism of real algebras, and it sends $a \otimes 1$ to $(a,a)$. So, unraveling the web of confusion, we have

$\begin{array}{rccc} F_{0,0} : & \mathbb{C} &\to& \mathbb{C} \otimes \mathbb{C} \\ & a &\mapsto& a \otimes 1 \end{array}$

Why didn’t I just say that in the first place? Well, I suffered over this a bit, so you should too! You see, there’s an unavoidable arbitrary choice here: I could just as well have used $a \mapsto 1 \otimes a$. $F_{0,0}$ looked perfectly god-given when we thought of it as a homomorphism from $\mathbb{C}$ to $\mathbb{C} \oplus \mathbb{C}$, but that was deceptive, because there’s a choice of isomorphism $\mathbb{C} \otimes \mathbb{C} \to \mathbb{C} \oplus \mathbb{C}$ lurking in this description.

This makes me nervous, since category theory disdains arbitrary choices! But it seems to work. On the one hand we have

$\begin{array}{ccccc} \mathbb{C} &\stackrel{F_{0,0}}{\longrightarrow}& \mathbb{C} \otimes \mathbb{C} &\stackrel{1 \otimes F_{0,0}}{\longrightarrow}& \mathbb{C} \otimes \mathbb{C} \otimes \mathbb{C} \\ a &\mapsto& a \otimes 1 &\mapsto& a \otimes (1 \otimes 1) \end{array}$

On the other hand, we have

$\begin{array}{ccccc} \mathbb{C} &\stackrel{F_{0,0}}{\longrightarrow}& \mathbb{C} \otimes \mathbb{C} &\stackrel{F_{0,0} \otimes 1}{\longrightarrow}& \mathbb{C} \otimes \mathbb{C} \otimes \mathbb{C} \\ a &\mapsto& a \otimes 1 &\mapsto& (a \otimes 1) \otimes 1 \end{array}$

So they agree!

I need to carefully check all the other cases before I dare call my claim a theorem. Indeed, writing up this case has increased my nervousness… before, I’d thought it was obvious.

But let me march on, optimistically!

Consequences

In quantum physics, what matters is not so much the algebras $\mathbb{R}$, $\mathbb{C}$ and $\mathbb{H}$ themselves as the categories of vector spaces — or indeed, Hilbert spaces — over these algebras. So, we should think about the map sending an algebra to its category of modules.

For any field $k$, there should be a contravariant pseudofunctor

$\mathrm{Rep} : \mathrm{Alg}_k \to \mathrm{Rex}_k$

where $\mathrm{Rex}_k$ is the 2-category of

• $k$-linear finitely cocomplete categories,

• $k$-linear functors preserving finite colimits,

• and natural transformations.

The idea is that $\mathrm{Rep}$ sends any algebra $A$ over $k$ to its category of modules, and any homomorphism $f : A \to B$ to the pullback functor $f^* : \mathrm{Rep}(B) \to \mathrm{Rep}(A)$.

(Functors preserving finite colimits are also called right exact; this is the reason for the funny notation $\mathrm{Rex}$. It has nothing to do with the dinosaur of that name.)

Moreover, $\mathrm{Rep}$ gets along with tensor products. It’s definitely true that given real algebras $A$ and $B$, we have

$\mathrm{Rep}(A \otimes B) \simeq \mathrm{Rep}(A) \boxtimes \mathrm{Rep}(B)$

where $\boxtimes$ is the tensor product of finitely cocomplete $k$-linear categories. But we should be able to go further and prove $\mathrm{Rep}$ is monoidal. I don’t know if anyone has bothered yet.

(In case you’re wondering, this $\boxtimes$ thing reduces to Deligne’s tensor product of abelian categories given some ‘niceness assumptions’, but it’s a bit more general. Read the talk by Ignacio López Franco if you care… but I could have used Deligne’s setup if I restricted myself to finite-dimensional algebras, which is probably just fine for what I’m about to do.)

So, if my earlier claim is true, we can take the oplax monoidal functor

$F : \mathbb{3} \to \mathrm{Alg}_{\mathbb{R}}$

and compose it with the contravariant monoidal pseudofunctor

$\mathrm{Rep} : \mathrm{Alg}_{\mathbb{R}} \to \mathrm{Rex}_{\mathbb{R}}$

giving a guy which I’ll call

$\mathrm{Vect} : \mathbb{3} \to \mathrm{Rex}_{\mathbb{R}}$

I guess this guy is a contravariant oplax monoidal pseudofunctor! That doesn’t make it sound very lovable… but I love it. The idea is that:

• $\mathrm{Vect}(1)$ is the category of real vector spaces

• $\mathrm{Vect}(0)$ is the category of complex vector spaces

• $\mathrm{Vect}(-1)$ is the category of quaternionic vector spaces

and the operation of multiplication in $\mathbb{3} = \{1, 0, -1\}$ gets sent to the operation of tensoring any one of these three kinds of vector space with any other kind and getting another kind!

So, if this works, we’ll have combined linear algebra over the real numbers, complex numbers and quaternions into a unified thing, $\mathrm{Vect}$. This thing deserves to be called a $\mathbb{3}$-graded category. This would be a nice way to understand Dyson’s threefold way.

What’s really going on?

What’s really going on with this monoid $\mathbb{3}$? It’s a kind of combination or ‘collage’ of two groups:

• The Brauer group of $\mathbb{R}$, namely $\mathbb{Z}_2 \cong \{-1, 1\}$. This consists of Morita equivalence classes of central simple algebras over $\mathbb{R}$. One class contains $\mathbb{R}$ and the other contains $\mathbb{H}$. The tensor product of algebras corresponds to multiplication in $\{-1, 1\}$.

• The Brauer group of $\mathbb{C}$, namely the trivial group $\{0\}$. This consists of Morita equivalence classes of central simple algebras over $\mathbb{C}$. But $\mathbb{C}$ is algebraically closed, so there’s just one class, containing $\mathbb{C}$ itself!

See, the problem is that while $\mathbb{C}$ is a division algebra over $\mathbb{R}$, it’s not ‘central simple’ over $\mathbb{R}$: its center is not just $\mathbb{R}$, it’s bigger. This turns out to be why $\mathbb{C} \otimes \mathbb{C}$ is so funny compared to the rest of the entries in our division algebra multiplication table.

So, we’ve really got two Brauer groups in play. But we also have a homomorphism from the first to the second, given by ‘tensoring with $\mathbb{C}$’: complexifying any real central simple algebra, we get a complex one.

And whenever we have a group homomorphism $\alpha : G \to H$, we can make their disjoint union $G \sqcup H$ into a monoid, which I’ll call $G \sqcup_\alpha H$.

It works like this. Given $g, g' \in G$, we multiply them the usual way. Given $h, h' \in H$, we multiply them the usual way. But given $g \in G$ and $h \in H$, we define

$g h := \alpha(g) h$

and

$h g := h \alpha(g)$

The multiplication on $G \sqcup_\alpha H$ is associative! For example:

$(g g') h = \alpha(g g') h = \alpha(g) \alpha(g') h = \alpha(g) (g' h) = g (g' h)$

Moreover, the element $1_G \in G$ acts as the identity of $G \sqcup_\alpha H$. For example:

$1_G h = \alpha(1_G) h = 1_H h = h$

But of course $G \sqcup_\alpha H$ isn’t a group, since “once you get inside $H$ you never get out”.

This construction could be called the collage of $G$ and $H$ via $\alpha$, since it’s reminiscent of a similar construction of that name in category theory.
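The construction is spelled out so explicitly that it is easy to implement. Here is a sketch (function and variable names are my own) that builds the collage of two finite groups given as multiplication tables, and then recovers the monoid $\mathbb{3}$ as the collage of the two Brauer groups $\{-1, 1\}$ and $\{0\}$:

```python
import itertools

def collage(G, H, alpha):
    """Multiplication on the disjoint union of groups G and H, glued along
    the homomorphism alpha : G -> H.  G and H are dicts mapping pairs to
    products; elements are tagged ('G', g) or ('H', h)."""
    def mul(x, y):
        (tx, vx), (ty, vy) = x, y
        if tx == 'G' and ty == 'G':
            return ('G', G[vx, vy])
        if tx == 'G':                 # g h := alpha(g) h
            vx = alpha[vx]
        if ty == 'G':                 # h g := h alpha(g)
            vy = alpha[vy]
        return ('H', H[vx, vy])
    return mul

# Brauer group of R: {1, -1} under multiplication; Brauer group of C: {0}.
Z2 = {(1, 1): 1, (1, -1): -1, (-1, 1): -1, (-1, -1): 1}
triv = {(0, 0): 0}
alpha = {1: 0, -1: 0}                 # complexification sends both classes to 0

mul = collage(Z2, triv, alpha)
elems = [('G', 1), ('G', -1), ('H', 0)]    # this is the monoid with 3 elements

# Associativity and the identity 1_G, as verified by hand above.
assert all(mul(mul(x, y), z) == mul(x, mul(y, z))
           for x, y, z in itertools.product(elems, repeat=3))
assert all(mul(('G', 1), x) == x == mul(x, ('G', 1)) for x in elems)
```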

Question. What do monoid theorists call this construction?

Question. Can we do a similar trick for any field? Can we always take the Brauer groups of all its finite-dimensional extensions and fit them together into a monoid by taking some sort of collage? If so, I’d call this the Brauer monoid of that field.

The $\mathbb{10}$-fold way

If you carefully read Part 1, maybe you can guess how I want to proceed. I want to make everything ‘super’.

I’ll replace division algebras over $\mathbb{R}$ by super division algebras over $\mathbb{R}$. Now instead of 3 = 2 + 1 there are 10 = 8 + 2:

• 8 of them are central simple over $\mathbb{R}$, so they give elements of the super Brauer group of $\mathbb{R}$, which is $\mathbb{Z}_8$.

• 2 of them are central simple over $\mathbb{C}$, so they give elements of the super Brauer group of $\mathbb{C}$, which is $\mathbb{Z}_2$.

Complexification gives a homomorphism

$\alpha : \mathbb{Z}_8 \to \mathbb{Z}_2$

namely the obvious nontrivial one. So, we can form the collage

$\mathbb{10} = \mathbb{Z}_8 \sqcup_\alpha \mathbb{Z}_2$

It’s a commutative monoid with 10 elements! Each of these is the equivalence class of one of the 10 real super division algebras.
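Concretely (a sketch of my own, writing both groups additively and taking $\alpha$ to be reduction mod 2), one can verify that this collage really is a commutative 10-element monoid:

```python
import itertools

def mul(x, y):
    """Product in Z_8 ⊔_α Z_2, with α : Z_8 -> Z_2 reduction mod 2."""
    (tx, a), (ty, b) = x, y
    if tx == ty == 'Z8':
        return ('Z8', (a + b) % 8)
    if tx == 'Z8':
        a %= 2                        # push the Z_8 factor down via alpha
    if ty == 'Z8':
        b %= 2
    return ('Z2', (a + b) % 2)

elems = [('Z8', k) for k in range(8)] + [('Z2', k) for k in range(2)]
assert len(elems) == 10               # ten elements, as promised

assert all(mul(x, y) == mul(y, x)
           for x, y in itertools.product(elems, repeat=2))
assert all(mul(mul(x, y), z) == mul(x, mul(y, z))
           for x, y, z in itertools.product(elems, repeat=3))
assert all(mul(('Z8', 0), x) == x for x in elems)   # 0 in Z_8 is the identity
```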

I’ll then need to check that there’s an oplax monoidal functor

$G : \mathbb{10} \to \mathrm{SuperAlg}_{\mathbb{R}}$

sending each element of $\mathbb{10}$ to the corresponding super division algebra.

If $G$ really exists, I can compose it with a thing

$\mathrm{SuperRep} : \mathrm{SuperAlg}_{\mathbb{R}} \to \mathrm{Rex}_{\mathbb{R}}$

sending each super algebra to its category of ‘super representations’ on super vector spaces. This should again be a contravariant monoidal pseudofunctor.

We can call the composite of $G$ with $\mathrm{SuperRep}$

$\mathrm{SuperVect} : \mathbb{10} \to \mathrm{Rex}_{\mathbb{R}}$

If it all works, this thing $\mathrm{SuperVect}$ will deserve to be called a $\mathbb{10}$-graded category. It contains super vector spaces over the 10 kinds of super division algebras in a single framework, and says how to tensor them. And when we look at super Hilbert spaces, this setup will be able to talk about all ten kinds of matter I mentioned last time… and how to combine them.

So that’s the plan. If you see problems, or ways to simplify things, please let me know!

The n-Category Cafe

The Ten-Fold Way (Part 1)

There are 10 of each of these things:

• Associative real super-division algebras.

• Classical families of compact symmetric spaces.

• Ways that Hamiltonians can get along with time reversal ($T$) and charge conjugation ($C$) symmetry.

• Dimensions of spacetime in string theory.

It’s too bad nobody took up writing This Week’s Finds in Mathematical Physics when I quit. Someone should have explained this stuff in a nice simple way, so I could read their summary instead of fighting my way through the original papers. I don’t have much time for this sort of stuff anymore!

Let me start by explaining the basic idea, and then move on to more fancy aspects.

Ten kinds of matter

The idea of the ten-fold way goes back at least to 1996, when Altland and Zirnbauer discovered that substances can be divided into 10 kinds.

The basic idea is pretty simple. Some substances have time-reversal symmetry: they would look the same, even on the atomic level, if you made a movie of them and ran it backwards. Some don’t — these are rarer, like certain superconductors made of yttrium barium copper oxide! Time reversal symmetry is described by an antiunitary operator $T$ that squares to 1 or to -1: please take my word for this, it’s a quantum thing. So, we get 3 choices, which are listed in the chart under $T$ as 1, -1, or 0 (no time reversal symmetry).

Similarly, some substances have charge conjugation symmetry, meaning a symmetry where we switch particles and holes: places where a particle is missing. The ‘particles’ here can be rather abstract things, like phonons (little vibrations of sound in a substance, which act like particles) or spinons (little vibrations in the lined-up spins of electrons). Basically any way that something can wave can, thanks to quantum mechanics, act like a particle. And sometimes we can switch particles and holes, and a substance will act the same way!

Like time reversal symmetry, charge conjugation symmetry is described by an antiunitary operator $C$ that can square to 1 or to -1. So again we get 3 choices, listed in the chart under $C$ as 1, -1, or 0 (no charge conjugation symmetry).

So far we have 3 × 3 = 9 kinds of matter. What is the tenth kind?

Some kinds of matter don’t have time reversal or charge conjugation symmetry, but they’re symmetrical under the combination of time reversal and charge conjugation! You switch particles and holes and run the movie backwards, and things look the same!

In the chart they write 1 under the $S$ when your matter has this combined symmetry, and 0 when it doesn’t. So, “0 0 1” is the tenth kind of matter (the second row in the chart).
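The counting above is easy to verify mechanically. Here is a short sketch (my own illustration) that enumerates the ten $(T, C, S)$ symmetry classes following exactly the rules just described:

```python
# My own sketch: enumerate the ten (T, C, S) symmetry classes.
# T and C are 0 (absent) or ±1 (the square of the antiunitary operator);
# S records the combined symmetry: forced when both T and C are present,
# impossible when exactly one is, and a free choice when both are absent.

classes = []
for T in (0, +1, -1):
    for C in (0, +1, -1):
        if T != 0 and C != 0:
            classes.append((T, C, 1))    # S = TC is automatically a symmetry
        elif T == 0 and C == 0:
            classes.append((0, 0, 0))    # no symmetry at all
            classes.append((0, 0, 1))    # the tenth kind: only the combined symmetry
        else:
            classes.append((T, C, 0))    # exactly one of T, C present

print(len(classes))  # 10
```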

This is just the beginning of an amazing story. Since then people have found substances called topological insulators that act like insulators in their interior but conduct electricity on their surface. We can make 3-dimensional topological insulators, but also 2-dimensional ones (that is, thin films) and even 1-dimensional ones (wires). And we can theorize about higher-dimensional ones, though this is mainly a mathematical game.

So we can ask which of the 10 kinds of substance can arise as topological insulators in various dimensions. And the answer is: in any particular dimension, only 5 kinds can show up. But it’s a different 5 in different dimensions! This chart shows how it works for dimensions 1 through 8. The kinds that can’t show up are labelled 0.

If you look at the chart, you’ll see it has some nice patterns. And it repeats after dimension 8. In other words, dimension 9 works just like dimension 1, and so on.

If you read some of the papers I listed, you’ll see that the $\mathbb{Z}$’s and $\mathbb{Z}_2$’s in the chart are the homotopy groups of the ten classical series of compact symmetric spaces. The fact that dimension $n+8$ works like dimension $n$ is called Bott periodicity.

Furthermore, the stuff about operators $T$, $C$ and $S$ that square to 1, -1 or don’t exist at all is closely connected to the classification of associative real super division algebras. It all fits together.

Super division algebras

In 2005, Todd Trimble wrote a short paper called The super Brauer group and super division algebras.

In it, he gave a quick way to classify the associative real super division algebras: that is, finite-dimensional associative real $\mathbb{Z}_2$-graded algebras having the property that every nonzero homogeneous element is invertible. The result was known, but I really enjoyed Todd’s effortless proof.

However, I didn’t notice that there are exactly 10 of these guys. Now this turns out to be a big deal. For each of these 10 algebras, the representations of that algebra describe ‘types of matter’ of a particular kind — where the 10 kinds are the ones I explained above!

So what are these 10 associative super division algebras?

3 of them are purely even, with no odd part: the usual associative division algebras $\mathbb{R}$, $\mathbb{C}$ and $\mathbb{H}$.

7 of them are not purely even. Of these, 6 are Morita equivalent to the real Clifford algebras $\mathrm{Cl}_1, \mathrm{Cl}_2, \mathrm{Cl}_3, \mathrm{Cl}_5, \mathrm{Cl}_6$ and $\mathrm{Cl}_7$. These are the superalgebras generated by 1, 2, 3, 5, 6, or 7 odd square roots of -1.

Now you should have at least two questions:

• What’s ‘Morita equivalence’? — and even if you know, why should it matter here? Two algebras are Morita equivalent if they have equivalent categories of representations. The same definition works for superalgebras, though now we look at their representations on super vector spaces ($\mathbb{Z}_2$-graded vector spaces). For physics what we really care about is the representations of an algebra or superalgebra: as I mentioned, those are ‘types of matter’. So, it makes sense to count two superalgebras as ‘the same’ if they’re Morita equivalent.

• 1, 2, 3, 5, 6, and 7? That’s weird — why not 4? Well, Todd showed that $\mathrm{Cl}_4$ is Morita equivalent to the purely even super division algebra $\mathbb{H}$. So we already had that one on our list. Similarly, why not 0? $\mathrm{Cl}_0$ is just $\mathbb{R}$. So we had that one too.

Representations of Clifford algebras are used to describe spin-1/2 particles, so it’s exciting that 8 of the 10 associative real super division algebras are Morita equivalent to real Clifford algebras.

But I’ve already mentioned one that’s not: the complex numbers, $\mathbb{C}$, regarded as a purely even algebra. And there’s one more! It’s the complex Clifford algebra $\mathbb{C}\mathrm{l}_1$. This is the superalgebra you get by taking the purely even algebra $\mathbb{C}$ and throwing in one odd square root of -1.

As soon as you hear that, you notice that the purely even algebra $\mathbb{C}$ is the complex Clifford algebra $\mathbb{C}\mathrm{l}_0$. In other words, it’s the superalgebra you get by taking the purely even algebra $\mathbb{C}$ and throwing in no odd square roots of -1.

More connections

At this point things start fitting together:

• You can multiply Morita equivalence classes of algebras using the tensor product of algebras: $[A] \otimes [B] = [A \otimes B]$. Some equivalence classes have multiplicative inverses, and these form the Brauer group. We can do the same thing for superalgebras, and get the super Brauer group. The super division algebras Morita equivalent to $\mathrm{Cl}_0, \dots, \mathrm{Cl}_7$ serve as representatives of the super Brauer group of the real numbers, which is $\mathbb{Z}_8$. I explained this in week211 and further in week212. It’s a nice purely algebraic way to think about real Bott periodicity!

• As we’ve seen, the super division algebras Morita equivalent to $\mathrm{Cl}_0$ and $\mathrm{Cl}_4$ are a bit funny. They’re purely even. So they serve as representatives of the plain old Brauer group of the real numbers, which is $\mathbb{Z}_2$.

• On the other hand, the complex Clifford algebras $\mathbb{C}\mathrm{l}_0 = \mathbb{C}$ and $\mathbb{C}\mathrm{l}_1$ serve as representatives of the super Brauer group of the complex numbers, which is also $\mathbb{Z}_2$. This is a purely algebraic way to think about complex Bott periodicity, which has period 2 instead of period 8.
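These periodicities can be sketched in a few lines (my own illustration, using the standard fact that the graded tensor product of Clifford algebras adds generators, $\mathrm{Cl}_m \otimes \mathrm{Cl}_n \simeq \mathrm{Cl}_{m+n}$):

```python
# My own sketch, using the standard fact that the graded tensor product of
# real Clifford algebras adds generators: Cl_m ⊗ Cl_n ≃ Cl_{m+n}.  Passing to
# Morita classes, Cl_{n+8} ~ Cl_n, so real classes live in Z8; the complex
# Clifford algebras satisfy Cl_{n+2} ~ Cl_n, so complex classes live in Z2.

def real_class(n):
    """Morita class of the real Clifford algebra Cl_n."""
    return n % 8

def complex_class(n):
    """Morita class of the complex Clifford algebra Cl_n."""
    return n % 2

# tensoring adds generators, so classes add:
assert real_class(3 + 7) == real_class(2)        # [Cl_3][Cl_7] = [Cl_10] = [Cl_2]
# every class is invertible, since Cl_n ⊗ Cl_{8-n} ~ Cl_8 ~ Cl_0 = R:
assert all(real_class(n + (8 - n)) == 0 for n in range(8))
# complex Bott periodicity has period 2 instead of 8:
assert complex_class(0 + 1) != complex_class(0)
```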

Meanwhile, the purely even $\mathbb{R}, \mathbb{C}$ and $\mathbb{H}$ underlie Dyson’s ‘three-fold way’, which I explained in detail here:

Briefly, if you have an irreducible unitary representation of a group on a complex Hilbert space $H$, there are three possibilities:

• The representation is isomorphic to its dual via an invariant symmetric bilinear pairing $g \colon H \times H \to \mathbb{C}$. In this case it has an invariant antiunitary operator $J \colon H \to H$ with $J^2 = 1$. This lets us write our representation as the complexification of a real one.

• The representation is isomorphic to its dual via an invariant antisymmetric bilinear pairing $\omega \colon H \times H \to \mathbb{C}$. In this case it has an invariant antiunitary operator $J \colon H \to H$ with $J^2 = -1$. This lets us promote our representation to a quaternionic one.

• The representation is not isomorphic to its dual. In this case we say it’s truly complex.
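Here is a tiny numerical illustration of the first two cases (my own sketch, with the quaternionic structure given by the standard $2 \times 2$ symplectic matrix): an antiunitary operator is “conjugate, then apply a matrix”, and squaring it distinguishes $J^2 = 1$ from $J^2 = -1$.

```python
# My own sketch: the two antiunitary cases, in plain Python complex
# arithmetic.  An antiunitary operator acts as J(v) = M · conj(v).

def conj(v):
    return [z.conjugate() for z in v]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]

def J(M, v):
    """Antiunitary operator: conjugate the vector, then apply M."""
    return matvec(M, conj(v))

v = [1 + 2j, 3 - 1j]

# Real case: M = identity, so J is plain conjugation and J^2 = +1.
I = [[1, 0], [0, 1]]
assert J(I, J(I, v)) == v

# Quaternionic case: M = [[0, -1], [1, 0]] (a symplectic form), so J^2 = -1.
Om = [[0, -1], [1, 0]]
assert J(Om, J(Om, v)) == [-z for z in v]
```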

In physics applications, we can take $J$ to be either time reversal symmetry, $T$, or charge conjugation symmetry, $C$. Studying either symmetry separately leads us to Dyson’s three-fold way. Studying them both together leads to the ten-fold way!

So the ten-fold way seems to combine in one nice package:

• real Bott periodicity,
• complex Bott periodicity,
• the real Brauer group,
• the real super Brauer group,
• the complex super Brauer group, and
• the three-fold way.

I could throw ‘the complex Brauer group’ into this list, because that’s lurking here too, but it’s the trivial group, with $\mathbb{C}$ as its representative.

There really should be a better way to understand this. Here’s my best attempt right now.

The set of Morita equivalence classes of finite-dimensional real superalgebras gets a commutative monoid structure thanks to direct sum. This commutative monoid then gets a commutative rig structure thanks to tensor product. This commutative rig — let’s call it $\mathfrak{R}$ — is apparently too complicated to understand in detail, though I’d love to be corrected about that. But we can peek at pieces:

• We can look at the group of invertible elements in $\mathfrak{R}$ — more precisely, elements with multiplicative inverses. This is the real super Brauer group $\mathbb{Z}_8$.

• We can look at the sub-rig of $\mathfrak{R}$ coming from semisimple purely even algebras. As a commutative monoid under addition, this is $\mathbb{N}^3$, since it’s generated by $\mathbb{R}, \mathbb{C}$ and $\mathbb{H}$. This commutative monoid becomes a rig with a funny multiplication table, e.g. $\mathbb{C} \otimes \mathbb{C} = \mathbb{C} \oplus \mathbb{C}$. This captures some aspects of the three-fold way.
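As an illustration of that second sub-rig (my own sketch; the multiplication facts $\mathbb{C} \otimes \mathbb{C} = \mathbb{C} \oplus \mathbb{C}$, $\mathbb{C} \otimes \mathbb{H} = \mathbb{C}[2]$ and $\mathbb{H} \otimes \mathbb{H} = \mathbb{R}[4]$ are standard), we can tabulate the rig structure on $\mathbb{N}^3$:

```python
# My own sketch: Morita classes of semisimple purely even real algebras as
# dicts counting copies of [R], [C], [H].  Addition is direct sum;
# multiplication is induced by the tensor products
#   R⊗X = X,  C⊗C = C⊕C,  C⊗H = C[2] ~ C,  H⊗H = R[4] ~ R.

TABLE = {
    ('R', 'R'): {'R': 1}, ('R', 'C'): {'C': 1}, ('R', 'H'): {'H': 1},
    ('C', 'C'): {'C': 2},                        # C ⊗ C = C ⊕ C
    ('C', 'H'): {'C': 1},                        # C ⊗ H = C[2], Morita ~ C
    ('H', 'H'): {'R': 1},                        # H ⊗ H = R[4], Morita ~ R
}

def mult(x, y):
    """Tensor product of two rig elements, given as dicts like {'C': 2}."""
    out = {'R': 0, 'C': 0, 'H': 0}
    for g, m in x.items():
        for h, n in y.items():
            prod = TABLE.get((g, h)) or TABLE[(h, g)]   # the table is symmetric
            for k, c in prod.items():
                out[k] += m * n * c
    return out

# (C ⊕ H) ⊗ C = (C ⊕ C) ⊕ C: three copies of C
assert mult({'C': 1, 'H': 1}, {'C': 1}) == {'R': 0, 'C': 3, 'H': 0}
```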

We should really look at a larger chunk of the rig $\mathfrak{R}$, one that includes both of these chunks. How about the sub-rig coming from all semisimple superalgebras? What’s that?

And here’s another question: what’s the relation to the 10 classical families of compact symmetric spaces? The short answer is that each family describes a family of possible Hamiltonians for one of our 10 kinds of matter. For a more detailed answer, I suggest reading Gregory Moore’s Quantum symmetries and compatible Hamiltonians. But if you look at this chart by Ryu et al, you’ll see these families involve a nice interplay between $\mathbb{R}, \mathbb{C}$ and $\mathbb{H}$, which is what this story is all about:

The families of symmetric spaces are listed in the column “Hamiltonian”.

All this stuff is fitting together more and more nicely! And if you look at the paper by Freed and Moore, you’ll see there’s a lot more involved when you take the symmetries of crystals into account. People are beginning to understand the algebraic and topological aspects of condensed matter much more deeply these days.

The list

Just for the record, here are all 10 associative real super division algebras. 8 are Morita equivalent to real Clifford algebras:

• $\mathrm{Cl}_0$ is the purely even division algebra $\mathbb{R}$.

• $\mathrm{Cl}_1$ is the super division algebra $\mathbb{R} \oplus \mathbb{R}e$, where $e$ is an odd element with $e^2 = -1$.

• $\mathrm{Cl}_2$ is the super division algebra $\mathbb{C} \oplus \mathbb{C}e$, where $e$ is an odd element with $e^2 = -1$ and $e i = -i e$.

• $\mathrm{Cl}_3$ is the super division algebra $\mathbb{H} \oplus \mathbb{H}e$, where $e$ is an odd element with $e^2 = 1$ and $e i = i e$, $e j = j e$, $e k = k e$.

• $\mathrm{Cl}_4$ is $\mathbb{H}[2]$, the algebra of $2 \times 2$ quaternionic matrices, given a certain $\mathbb{Z}_2$-grading. This is Morita equivalent to the purely even division algebra $\mathbb{H}$.

• $\mathrm{Cl}_5$ is $\mathbb{H}[2] \oplus \mathbb{H}[2]$ given a certain $\mathbb{Z}_2$-grading. This is Morita equivalent to the super division algebra $\mathbb{H} \oplus \mathbb{H}e$ where $e$ is an odd element with $e^2 = -1$ and $e i = i e$, $e j = j e$, $e k = k e$.

• $\mathrm{Cl}_6$ is $\mathbb{C}[4] \oplus \mathbb{C}[4]$ given a certain $\mathbb{Z}_2$-grading. This is Morita equivalent to the super division algebra $\mathbb{C} \oplus \mathbb{C}e$ where $e$ is an odd element with $e^2 = 1$ and $e i = -i e$.

• $\mathrm{Cl}_7$ is $\mathbb{R}[8] \oplus \mathbb{R}[8]$ given a certain $\mathbb{Z}_2$-grading. This is Morita equivalent to the super division algebra $\mathbb{R} \oplus \mathbb{R}e$ where $e$ is an odd element with $e^2 = 1$.

$\mathrm{Cl}_{n+8}$ is Morita equivalent to $\mathrm{Cl}_n$, so we can stop here if we’re just looking for Morita equivalence classes, and there also happen to be no more super division algebras down this road. It is nice to compare $\mathrm{Cl}_n$ and $\mathrm{Cl}_{8-n}$: there’s a nice pattern here.

The remaining 2 real super division algebras are complex Clifford algebras:

• $\mathbb{C}\mathrm{l}_0$ is the purely even division algebra $\mathbb{C}$.

• $\mathbb{C}\mathrm{l}_1$ is the super division algebra $\mathbb{C} \oplus \mathbb{C}e$, where $e$ is an odd element with $e^2 = -1$ and $e i = i e$.

In the last one we could also say “with $e^2 = 1$” — we’d get something isomorphic, not a new possibility.
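As a sanity check on the smallest nontrivial entry above, here is a sketch (my own illustration) of $\mathrm{Cl}_1 = \mathbb{R} \oplus \mathbb{R}e$, verifying the defining relation and the super division property on homogeneous elements:

```python
# My own sketch: Cl_1 = R ⊕ R e with e odd and e^2 = -1.  Represent
# a + b·e as the pair (a, b); then (a + b e)(c + d e) = (ac - bd) + (ad + bc) e.

def mult(x, y):
    (a, b), (c, d) = x, y
    return (a * c - b * d, a * d + b * c)

e = (0.0, 1.0)
assert mult(e, e) == (-1.0, 0.0)                 # e^2 = -1

# every nonzero homogeneous element is invertible (the super division property):
# even (a, 0) has inverse (1/a, 0); odd (0, b) has inverse (0, -1/b).
assert mult((4.0, 0.0), (0.25, 0.0)) == (1.0, 0.0)
assert mult((0.0, 2.0), (0.0, -0.5)) == (1.0, 0.0)
```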

Ten dimensions of string theory

Oh yeah — what about the 10 dimensions in string theory? Are they really related to the ten-fold way?

It seems weird, but I think the answer is “yes, at least slightly”.

Remember, 2 of the dimensions in 10d string theory are those of the string worldsheet, which is a complex manifold. The other 8 are connected to the octonions, which in turn are connected to the 8-fold periodicity of real Clifford algebras. So the 8+2 split in string theory is at least slightly connected to the 8+2 split in the list of associative real super division algebras.

This may be more of a joke than a deep observation. After all, the 8 dimensions of the octonions are not individual things with distinct identities, as the 8 super division algebras coming from real Clifford algebras are. So there’s no one-to-one correspondence going on here, just an equation between numbers.

Still, there are certain observations that would be silly to resist mentioning.

Lubos Motl - string vacua and pheno

Realistic heterotic non-supersymmetric models
Michael Blaszczyk (we will ultimately teach them to write it as "Blaščik" as any other decent non-Eastern Slavic nation) and three co-authors from German, Greek, and Mexican institutions wrote an interesting paper
Non-supersymmetric heterotic model building
where they show how naturally the $$SO(16)\times SO(16)$$ heterotic string theory without supersymmetry is able to produce unifying models with one Higgs doublet, three generations, some logic inherited from non-supersymmetric $$SO(10)$$ grand unified theories, and (almost?) nothing beyond the Standard Model at low energies.

Note that the $$SO(16)\times SO(16)$$ heterotic string treats the projections and sectors for the extra world-sheet fermions (or bosons) a bit differently. Effectively, one changes the statistics of the $${\bf 128}$$ chiral spinor of the $$SO(16)$$ groups to the opposite statistics, which splits the $${\bf 248}$$-dimensional multiplets of $$E_8$$ back to $${\bf 120}\oplus{\bf 128}$$, breaks both $$E_8$$ groups to $$SO(16)$$, and breaks supersymmetry.

Non-supersymmetric string theories generally have tachyons. However, they show that at smooth Calabi-Yaus – Calabi-Yau manifolds are not really that special once you break spacetime SUSY (they chose them for simplicity, to use the existing tools) – the non-supersymmetric heterotic string theory doesn't really have any tachyons. So if such tachyons appear in twisted sectors near orbifold points, and they do, they must become massive and harmless if the orbifold singularity is resolved.

With low-energy supersymmetry, one of course sacrifices certain things like the "solution of the hierarchy problem" – which doesn't seem to be a big sacrifice based on the null results that keep on flowing from the LHC.

Conceptually, I would say that these non-supersymmetric heterotic models still belong to a supersymmetric theory. Just the SUSY is broken at the string scale, not at low energies. The non-supersymmetric 10D string theories may be related to the supersymmetric 10D stringy vacua by various T-dualities etc. so they must also be understood as solutions to the same a priori supersymmetric theory – the SUSY breaking may always be understood as a spontaneous one.

Even with these non-SUSY models, it's remarkable to see how "directly" string theory produces a Standard Model. The standard embedding really seems to be "pretty much exactly what we need" to get the Standard Model spectrum.

Geraint Lewis - Cosmic Horizons

A cosmic two-step: the universal dance of the dwarf galaxies
We had a paper in Nature this week, and I think this paper is exciting and important. I've written an article for The Conversation, which you can read here.

Enjoy!

July 24, 2014

ATLAS Experiment

Identity problems

An obligatory eye scan is required for all ATLAS underground personnel entering the experimental cavern. The iris recognition is performed by the IrisID iCAM7000.

Gate to the underworld

Its only point in life is to keep track of who enters and leaves the Zone. It sounds like a simple task for such an advanced technology, but — like most things in the world of research — it’s never without some hiccups.

The iCAM7000 comes complete with an interactive voice feedback system, personified by a sassy, but simplistic, guard-woman who I liken to the Cerberus of the ATLAS cavern. There exists one possible outcome for each of her heads: 1) she allows you to proceed into the underworld, opening the forward door; 2) she sends you back to where you came from, opening the backward door; or 3) she allows you to proceed, but the forward door remains closed and the backward door opens instead. The particular failure mechanism behind the latter, seemingly contradictory, case has yet to be understood and is best discussed in the appropriate forum. In the middle case, a robotic voice greets you:

“Soarry, we cannot confirm your identity.”

I end up hearing this way more often than you might expect. So often, in fact, that the sound of her drawled ‘soarry’ now produces an instantaneous Pavlovian response of frustration and rejection in me. Keep in mind that every emotion is amplified by a factor of 10 when you’re 100 meters below ground.

Sometimes the IR scanner positioned at the entrance to the capsule decides it doesn’t like something about you (e.g. your height, your weight, your mood) or the way you entered (e.g. too quick, too slow, with too much hip). One handy trick is to take your helmet off and start from scratch. The general consensus here is that it confuses the straps with a second person entering the capsule. But of course this is only conjecture, as there is never any useful debugging output, only:

“Soarry, we cannot confirm your identity.”

Moreover, it’s not entirely clear what she means by that. I find it amusing to think that the problem could simply be with my identity itself.

Identity searching

It’s true I’ve been doing a lot of soul-searching lately. Don’t get me wrong, I love testing cables for ATLAS — it’s humbling to be a small part of something grandiose. But lately, my knees have been taking a real beating at PP2 due to multiple bangs against various steel support structures and long hours of kneeling on the anti-skid aluminum planks. Maybe I’m getting too old for this and she’s finally on to me.

Come to think of it, I wonder if ATLAS stores an identity database of all their underground staff. My thoughts begin to wander off into an Orwellian nightmare starring our favorite iCAM7000 as ‘Big Sister’ …

Looking into the iris scanner, there’s an orange dot that turns green when it is properly aligned between the eyes and your head is at an appropriate distance from the scanner. A little back, a little forwards. Fortunately, some verbal guidance is given here, albeit rather spasmodically:

“Please move a little back from the camera.”

Steadfastly watching the dot while centering it on my forehead always feels a little like being a sniper on a rooftop waiting for that perfect shot. Except in this case, I’m both the target and the assassin. The narrative twist sends my thoughts spiralling out of control.

Looking into the abyss

In 300 years, what will they think when they stumble upon this abandoned relic of humanity? Will they conclude it’s some sort of unfinished spaceship, waiting patiently for its first test flight? Will they be accelerating particles we don’t even know exist yet? Will it all seem like a futile exercise or will it be praised as pioneering work that paved the way for current technologies and their understanding of the universe?

Luckily it’s not up to me to decide. For now, I’m just a cable tester:

“Thank you, you have been identified.”

I rejoice internally as I hear those magical words and see the tunnel passageway open in front of me. And then I suddenly realize that I forgot to pee.

Michael Leyton is a Visiting Assistant Professor at the University of Texas at Dallas. He has been a member of the ATLAS collaboration since 2004 and has tested over 800 km of cables in the experimental cavern. His favorite cable harnesses are Type-2 VVDC and Type-4 HV. Photos by Cécile Lapoire

Quantum Diaries

Fermilab technology available for license: Bed-ridden boredom spurs new invention

This article appeared in Fermilab Today on July 24, 2014.

Fermilab engineer Jim Hoff has invented an electronic circuit that can guard against radiation damage. Photo: Hanae Armitage

Fermilab engineer Jim Hoff has received patent approval on a very tiny, very clever invention that could have an impact on aerospace, agriculture and medical imaging industries.

Hoff has engineered a widely adaptable latch — an electronic circuit capable of remembering a logical state — that suppresses a commonly destructive circuit error caused by radiation.

There are two radiation-based errors that can damage a circuit: total dose and single-event upset. In the former, the entire circuit is doused in radiation and damaged; in an SEU, a single particle of radiation delivers its energy to the chip and alters a state of memory, which takes the form of 1s and 0s. An altered state of memory is an unintentional flip between logical 1 and logical 0, and it ultimately leads to loss of data or imaging resolution. Hoff’s design is essentially a chip immunization, preemptively guarding against SEUs.

“There are a lot of applications,” Hoff said. “Anyone who needs to store data for a length of time and keep it in that same state, uncorrupted — anyone flying in a high-altitude plane, anyone using medical imaging technology — could use this.”

Past experimental data showed that, in any given total-ionizing radiation dose, the latch reduces single-event upsets by a factor of about 40. Hoff suspects that the invention’s newer configurations will yield at least two orders of magnitude in single-event upset reduction.

The invention is fondly referred to as SEUSS, which stands for single-event upset suppression system. It’s relatively inexpensive and designed to integrate easily with a multitude of circuits — all that’s needed is a compatible transistor.

Hoff’s line of work lies in chip development, and SEUSS is currently used in some Fermilab-developed chips such as FSSR, which is used in projects at Jefferson Lab, and Phoenix, which is used in the Relativistic Heavy Ion Collider at Brookhaven National Laboratory.

The idea of SEUSS was born out of post-knee-surgery, bed-ridden boredom. On strict bed rest, Hoff’s mind naturally wandered to engineering.

“As I was lying there, leg in pain, back cramping, I started playing with designs of my most recent project at work,” he said. “At one point I stopped and thought, ‘Wow, I just made a single-event upset-tolerant SR flip-flop!’”

While this isn’t the world’s first SEU-tolerant latch, Hoff is the first to create a single-event upset suppression system that is also a set-reset flip-flop, meaning it can take the form of almost any latch. As a flip-flop, the adaptability of the latch is enormous and far exceeds that of its pre-existing latch brethren.

“That’s what makes this a truly special latch — its incredible versatility,” says Hoff.

From a broader vantage point, the invention is exciting for more than just Fermilab employees; it’s one of Fermilab’s first big efforts in pursuing potential licensees from industry.

Cherri Schmidt, head of Fermilab’s Office of Partnerships and Technology Transfer, with the assistance of intern Miguel Marchan, has been developing the marketing plan to reach out to companies who may be interested in licensing the technology for commercial application.

“We’re excited about this one because it could really affect a large number of industries and companies,” Schmidt said. “That, to me, is what makes this invention so interesting and exciting.”

Hanae Armitage

Sean Carroll - Preposterous Universe

Why Probability in Quantum Mechanics is Given by the Wave Function Squared

One of the most profound and mysterious principles in all of physics is the Born Rule, named after Max Born. In quantum mechanics, particles don’t have classical properties like “position” or “momentum”; rather, there is a wave function that assigns a (complex) number, called the “amplitude,” to each possible measurement outcome. The Born Rule is then very simple: it says that the probability of obtaining any possible measurement outcome is equal to the square of the corresponding amplitude. (The wave function is just the set of all the amplitudes.)

Born Rule:     $\mathrm{Probability}(x) = |\mathrm{amplitude}(x)|^2.$
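In code, the Born Rule is one line. Here is a minimal sketch (my own illustration, assuming a normalized state), which also shows why the footnote correction mattered: the sign or phase of an amplitude drops out of the probability.

```python
# A minimal illustration of the Born Rule: probabilities from amplitudes.
# The state is a dict of (complex) amplitudes, assumed normalized so that
# the squared magnitudes sum to 1.
from math import sqrt

def born_probabilities(amplitudes):
    """Return P(x) = |amplitude(x)|^2 for each outcome x."""
    return {x: abs(a) ** 2 for x, a in amplitudes.items()}

# An equal superposition of two outcomes, one amplitude negative:
# the sign (or any complex phase) makes no difference to the probability.
state = {'up': 1 / sqrt(2), 'down': -1 / sqrt(2)}
probs = born_probabilities(state)

assert abs(probs['up'] - 0.5) < 1e-12
assert abs(probs['down'] - 0.5) < 1e-12
assert abs(sum(probs.values()) - 1) < 1e-12
```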

The Born Rule is certainly correct, as far as all of our experimental efforts have been able to discern. But why? Born himself kind of stumbled onto his Rule. Here is an excerpt from his 1926 paper:

That’s right. Born’s paper was rejected at first, and when it was later accepted by another journal, he didn’t even get the Born Rule right. At first he said the probability was equal to the amplitude, and only in an added footnote did he correct it to being the amplitude squared. And a good thing, too, since amplitudes can be negative or even imaginary!

The status of the Born Rule depends greatly on one’s preferred formulation of quantum mechanics. When we teach quantum mechanics to undergraduate physics majors, we generally give them a list of postulates that goes something like this:

1. Quantum states are represented by wave functions, which are vectors in a mathematical space called Hilbert space.
2. Wave functions evolve in time according to the Schrödinger equation.
3. The act of measuring a quantum system returns a number, known as the eigenvalue of the quantity being measured.
4. The probability of getting any particular eigenvalue is equal to the square of the amplitude for that eigenvalue.
5. After the measurement is performed, the wave function “collapses” to a new state in which the wave function is localized precisely on the observed eigenvalue (as opposed to being in a superposition of many different possibilities).

It’s an ungainly mess, we all agree. You see that the Born Rule is simply postulated right there, as #4. Perhaps we can do better.

Of course we can do better, since “textbook quantum mechanics” is an embarrassment. There are other formulations, and you know that my own favorite is Everettian (“Many-Worlds”) quantum mechanics. (I’m sorry I was too busy to contribute to the active comment thread on that post. On the other hand, a vanishingly small percentage of the 200+ comments actually addressed the point of the article, which was that the potential for many worlds is automatically there in the wave function no matter what formulation you favor. Everett simply takes them seriously, while alternatives need to go to extra efforts to erase them. As Ted Bunn argues, Everett is just “quantum mechanics,” while collapse formulations should be called “disappearing-worlds interpretations.”)

Like the textbook formulation, Everettian quantum mechanics also comes with a list of postulates. Here it is:

1. Quantum states are represented by wave functions, which are vectors in a mathematical space called Hilbert space.
2. Wave functions evolve in time according to the Schrödinger equation.

That’s it! Quite a bit simpler — and the two postulates are exactly the same as the first two of the textbook approach. Everett, in other words, is claiming that all the weird stuff about “measurement” and “wave function collapse” in the conventional way of thinking about quantum mechanics isn’t something we need to add on; it comes out automatically from the formalism.

The trickiest thing to extract from the formalism is the Born Rule. That’s what Charles (“Chip”) Sebens and I tackled in our recent paper:

Self-Locating Uncertainty and the Origin of Probability in Everettian Quantum Mechanics
Charles T. Sebens, Sean M. Carroll

A longstanding issue in attempts to understand the Everett (Many-Worlds) approach to quantum mechanics is the origin of the Born rule: why is the probability given by the square of the amplitude? Following Vaidman, we note that observers are in a position of self-locating uncertainty during the period between the branches of the wave function splitting via decoherence and the observer registering the outcome of the measurement. In this period it is tempting to regard each branch as equiprobable, but we give new reasons why that would be inadvisable. Applying lessons from this analysis, we demonstrate (using arguments similar to those in Zurek’s envariance-based derivation) that the Born rule is the uniquely rational way of apportioning credence in Everettian quantum mechanics. In particular, we rely on a single key principle: changes purely to the environment do not affect the probabilities one ought to assign to measurement outcomes in a local subsystem. We arrive at a method for assigning probabilities in cases that involve both classical and quantum self-locating uncertainty. This method provides unique answers to quantum Sleeping Beauty problems, as well as a well-defined procedure for calculating probabilities in quantum cosmological multiverses with multiple similar observers.

Chip is a graduate student in the philosophy department at Michigan, which is great because this work lies squarely at the boundary of physics and philosophy. (I guess it is possible.) The paper itself leans more toward the philosophical side of things; if you are a physicist who just wants the equations, we have a shorter conference proceeding.

Before explaining what we did, let me first say a bit about why there’s a puzzle at all. Let’s think about the wave function for a spin, a spin-measuring apparatus, and an environment (the rest of the world). It might initially take the form

(α[up] + β[down] ; apparatus says “ready” ; environment₀).             (1)

This might look a little cryptic if you’re not used to it, but it’s not too hard to grasp the gist. The first slot refers to the spin. It is in a superposition of “up” and “down.” The Greek letters α and β are the amplitudes that specify the wave function for those two possibilities. The second slot refers to the apparatus just sitting there in its ready state, and the third slot likewise refers to the environment. By the Born Rule, when we make a measurement the probability of seeing spin-up is |α|², while the probability for seeing spin-down is |β|².
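A few lines of code make the Born Rule concrete; the amplitudes below (α = 0.6, β = 0.8i) are purely illustrative, not anything from the discussion above:

```python
import numpy as np

# Illustrative amplitudes for the spin's wave function.
alpha = 0.6       # amplitude for [up]
beta = 0.8j       # amplitude for [down]; the complex phase doesn't matter

# Born Rule: probability = |amplitude| squared.
p_up = abs(alpha) ** 2
p_down = abs(beta) ** 2
assert np.isclose(p_up + p_down, 1.0)   # the state is normalized

# Simulated measurement outcomes reproduce those frequencies.
rng = np.random.default_rng(0)
outcomes = rng.choice(["up", "down"], size=100_000, p=[p_up, p_down])
freq_up = np.mean(outcomes == "up")     # ≈ 0.36
```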

In Everettian quantum mechanics (EQM), wave functions never collapse. The one we’ve written will smoothly evolve into something that looks like this:

α([up] ; apparatus says “up” ; environment₁)
+ β([down] ; apparatus says “down” ; environment₂).             (2)

This is an extremely simplified situation, of course, but it is meant to convey the basic appearance of two separate “worlds.” The wave function has split into branches that don’t ever talk to each other, because the two environment states are different and will stay that way. A state like this simply arises from normal Schrödinger evolution from the state we started with.

So here is the problem. After the splitting from (1) to (2), the wave function coefficients α and β just kind of go along for the ride. If you find yourself in the branch where the spin is up, your coefficient is α, but so what? How do you know what kind of coefficient is sitting outside the branch you are living on? All you know is that there was one branch and now there are two. If anything, shouldn’t we declare them to be equally likely (so-called “branch-counting”)? For that matter, in what sense are there probabilities at all? There was nothing stochastic or random about any of this process, the entire evolution was perfectly deterministic. It’s not right to say “Before the measurement, I didn’t know which branch I was going to end up on.” You know precisely that one copy of your future self will appear on each branch. Why in the world should we be talking about probabilities?

Note that the pressing question is not so much “Why is the probability given by the wave function squared, rather than the absolute value of the wave function, or the wave function to the fourth, or whatever?” as it is “Why is there a particular probability rule at all, since the theory is deterministic?” Indeed, once you accept that there should be some specific probability rule, it’s practically guaranteed to be the Born Rule. There is a result called Gleason’s Theorem, which says roughly that the Born Rule is the only consistent probability rule you can conceivably have that depends on the wave function alone. So the real question is not “Why squared?”, it’s “Whence probability?”

Of course, there are promising answers. Perhaps the most well-known is the approach developed by Deutsch and Wallace based on decision theory. There, the approach to probability is essentially operational: given the setup of Everettian quantum mechanics, how should a rational person behave, in terms of making bets and predicting experimental outcomes, etc.? They show that there is one unique answer, which is given by the Born Rule. In other words, the question “Whence probability?” is sidestepped by arguing that reasonable people in an Everettian universe will act as if there are probabilities that obey the Born Rule. Which may be good enough.

But it might not convince everyone, so there are alternatives. One of my favorites is Wojciech Zurek’s approach based on “envariance.” Rather than using words like “decision theory” and “rationality” that make physicists nervous, Zurek claims that the underlying symmetries of quantum mechanics pick out the Born Rule uniquely. It’s very pretty, and I encourage anyone who knows a little QM to have a look at Zurek’s paper. But it is subject to the criticism that it doesn’t really teach us anything that we didn’t already know from Gleason’s theorem. That is, Zurek gives us more reason to think that the Born Rule is uniquely preferred by quantum mechanics, but it doesn’t really help with the deeper question of why we should think of EQM as a theory of probabilities at all.

Here is where Chip and I try to contribute something. We use the idea of “self-locating uncertainty,” which has been much discussed in the philosophical literature, and has been applied to quantum mechanics by Lev Vaidman. Self-locating uncertainty occurs when you know that there are multiple observers in the universe who find themselves in exactly the same conditions that you are in right now — but you don’t know which one of these observers you are. That can happen in “big universe” cosmology, where it leads to the measure problem. But it automatically happens in EQM, whether you like it or not.

Think of observing the spin of a particle, as in our example above. The steps are:

1. Everything is in its starting state, before the measurement.
2. The apparatus interacts with the system to be observed and becomes entangled. (“Pre-measurement.”)
3. The apparatus becomes entangled with the environment, branching the wave function. (“Decoherence.”)
4. The observer reads off the result of the measurement from the apparatus.

The point is that in between steps 3. and 4., the wave function of the universe has branched into two, but the observer doesn’t yet know which branch they are on. There are two copies of the observer that are in identical states, even though they’re part of different “worlds.” That’s the moment of self-locating uncertainty.

You might say “What if I am the apparatus myself?” That is, what if I observe the outcome directly, without any intermediating macroscopic equipment? Nice try, but no dice. That’s because decoherence happens incredibly quickly. Even if you take the extreme case where you look at the spin directly with your eyeball, the time it takes the state of your eye to decohere is about 10⁻²¹ seconds, whereas the timescales associated with the signal reaching your brain are measured in tens of milliseconds. Self-locating uncertainty is inevitable in Everettian quantum mechanics. In that sense, probability is inevitable, even though the theory is deterministic — in the phase of uncertainty, we need to assign probabilities to finding ourselves on different branches.
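The arithmetic behind that comparison fits in a few lines (both timescales are just the rough orders of magnitude quoted above):

```python
t_decohere = 1e-21   # s: rough decoherence time for the state of your eye
t_neural = 50e-3     # s: "tens of milliseconds" for the signal to reach your brain

# By the time you become aware of the outcome, branching finished long ago:
# roughly 5 x 10^19 decoherence times fit inside one neural signal.
ratio = t_neural / t_decohere
```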

So what do we do about it? As I mentioned, there’s been a lot of work on how to deal with self-locating uncertainty, i.e. how to apportion credences (degrees of belief) to different possible locations for yourself in a big universe. One influential paper is by Adam Elga, and comes with the charming title of “Defeating Dr. Evil With Self-Locating Belief.” (Philosophers have more fun with their titles than physicists do.) Elga argues for a principle of Indifference: if there are truly multiple copies of you in the world, you should assume equal likelihood for being any one of them. Crucially, Elga doesn’t simply assert Indifference; he actually derives it, under a simple set of assumptions that would seem to be the kind of minimal principles of reasoning any rational person should be ready to use.

But there is a problem! Naïvely, applying Indifference to quantum mechanics just leads to branch-counting — if you assign equal probability to every possible appearance of equivalent observers, and there are two branches, each branch should get equal probability. But that’s a disaster; it says we should simply ignore the amplitudes entirely, rather than using the Born Rule. This bit of tension has led to some worry among philosophers who worry about such things.

Resolving this tension is perhaps the most useful thing Chip and I do in our paper. Rather than naïvely applying Indifference to quantum mechanics, we go back to the “simple assumptions” and try to derive it from scratch. We were able to pinpoint one hidden assumption that seems quite innocent, but actually does all the heavy lifting when it comes to quantum mechanics. We call it the “Epistemic Separability Principle,” or ESP for short. Here is the informal version (see paper for pedantic careful formulations):

ESP: The credence one should assign to being any one of several observers having identical experiences is independent of features of the environment that aren’t affecting the observers.

That is, the probabilities you assign to things happening in your lab, whatever they may be, should be exactly the same if we tweak the universe just a bit by moving around some rocks on a planet orbiting a star in the Andromeda galaxy. ESP simply asserts that our knowledge is separable: how we talk about what happens here is independent of what is happening far away. (Our system here can still be entangled with some system far away; under unitary evolution, changing that far-away system doesn’t change the entanglement.)

The ESP is quite a mild assumption, and to me it seems like a necessary part of being able to think of the universe as consisting of separate pieces. If you can’t assign credences locally without knowing about the state of the whole universe, there’s no real sense in which the rest of the world is really separate from you. It is certainly implicitly used by Elga (he assumes that credences are unchanged by some hidden person tossing a coin).

With this assumption in hand, we are able to demonstrate that Indifference does not apply to branching quantum worlds in a straightforward way. Indeed, we show that you should assign equal credences to two different branches if and only if the amplitudes for each branch are precisely equal! That’s because the proof of Indifference relies on shifting around different parts of the state of the universe and demanding that the answers to local questions not be altered; it turns out that this only works in quantum mechanics if the amplitudes are equal, which is certainly consistent with the Born Rule.

See the papers for the actual argument — it’s straightforward but a little tedious. The basic idea is that you set up a situation in which more than one quantum object is measured at the same time, and you ask what happens when you consider different objects to be “the system you will look at” versus “part of the environment.” If you want there to be a consistent way of assigning credences in all cases, you are led inevitably to equal probabilities when (and only when) the amplitudes are equal.

What if the amplitudes for the two branches are not equal? Here we can borrow some math from Zurek. (Indeed, our argument can be thought of as a love child of Vaidman and Zurek, with Elga as midwife.) In his envariance paper, Zurek shows how to start with a case of unequal amplitudes and reduce it to the case of many more branches with equal amplitudes. The number of these pseudo-branches you need is proportional to — wait for it — the square of the amplitude. Thus, you get out the full Born Rule, simply by demanding that we assign credences in situations of self-locating uncertainty in a way that is consistent with ESP.
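Here is Zurek's counting trick in miniature, with illustrative integers m and n standing in for the general case:

```python
from math import isclose, sqrt

# Zurek-style fine-graining (sketch): start with unequal amplitudes
# alpha = sqrt(m/N) and beta = sqrt(n/N), then use the environment to split
# the two branches into N = m + n pseudo-branches of *equal* amplitude.
m, n = 1, 2                     # illustrative integers
N = m + n
alpha, beta = sqrt(m / N), sqrt(n / N)

# Indifference is only justified across equal-amplitude (pseudo-)branches,
# so your credence for "up" is the fraction of pseudo-branches it contains:
credence_up = m / N
credence_down = n / N

# That fraction is exactly the squared amplitude: the Born Rule.
assert isclose(credence_up, alpha ** 2)
assert isclose(credence_down, beta ** 2)
```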

We like this derivation in part because it treats probabilities as epistemic (statements about our knowledge of the world), not merely operational. Quantum probabilities are really credences — statements about the best degree of belief we can assign in conditions of uncertainty — rather than statements about truly stochastic dynamics or frequencies in the limit of an infinite number of outcomes. But these degrees of belief aren’t completely subjective in the conventional sense, either; there is a uniquely rational choice for how to assign them.

Working on this project has increased my own personal credence in the correctness of the Everett approach to quantum mechanics from “pretty high” to “extremely high indeed.” There are still puzzles to be worked out, no doubt, especially around the issues of exactly how and when branching happens, and how branching structures are best defined. (I’m off to a workshop next month to think about precisely these questions.) But these seem like relatively tractable technical challenges to me, rather than looming deal-breakers. EQM is an incredibly simple theory that (I can now argue in good faith) makes sense and fits the data. Now it’s just a matter of convincing the rest of the world!

The Great Beyond - Nature blog

Three HIV insights from sombre global meeting

Posted on behalf of Erika Check Hayden.

The run-up to the 20th International AIDS Conference, scheduled to wrap up on 25 July in Melbourne, Australia, was overshadowed by news that a three-year-old child once thought to be cured of HIV still harbours the virus — and by the horrific crash of Malaysian Airlines flight 17, which claimed the lives of six conference delegates.

Here Nature rounds up three major cure stories that unfolded at the conference.

1. HIV establishes secure hiding places from the immune system extremely early after infection — a huge obstacle to developing a cure for HIV infection, scientists say. Researchers reported in Nature on 20 July that monkeys treated starting three days after infection with simian immunodeficiency virus (SIV) appeared to be free of virus after two years. But after treatment was stopped, the virus escaped from its hiding places, or “reservoirs”, and the infections resurged in the monkeys.

2. Still, finding a way to eliminate such reservoirs seems the only way to a cure, and scientists reported a degree of progress in this direction on 22 July. A team from Aarhus University in Denmark used a cancer drug called romidepsin to re-activate dormant HIV in six infected patients. HIV then surged to detectable levels in five of the patients. The next step in this ‘kick and kill’ strategy is to find a way to eliminate the resurgent virus and the cells that produce it — perhaps by using a therapeutic vaccine to boost the patients’ immune systems.

3. Other researchers are testing gene therapy to cure HIV — for instance, by modifying immune cells so that they become resistant to the virus. A team at the University of New South Wales in Australia is examining whether inducing lab-grown cells to make proteins that target HIV can block infection. Another Australian group infused HIV-infected cells with molecules that prevented cells from becoming activated by drugs such as romidepsin. This raises the possibility of a strategy that reverses the kick-and-kill approach: it would prevent resting cells from leaving their dormant state. Neither strategy is yet being tested in patients, but against this week’s suite of dispiriting news, they offer a reminder that the HIV cure field is far from exhausting all options.

Symmetrybreaking - Fermilab/SLAC

How to weigh a galaxy cluster

The study of galaxy clusters is bringing scientists closer to an understanding of the “dark” universe.

Step on a scale and you’ll get a quick measure of your weight. Weighing galaxy clusters, groups of hundreds or thousands of galaxies bound together by gravity, isn’t so easy.

But scientists have many ways to do it. This is fortunate for particle astrophysics; determining the mass of galaxy clusters led to the discovery of dark matter, and it’s key to the continuing study of the “dark” universe: dark matter and dark energy.

“Galaxy cluster measurements are one of the most powerful probes of cosmology we have,” says Steve Allen, an associate professor of physics at SLAC National Accelerator Laboratory and Stanford University.

When you weigh a galaxy cluster, what you see is not all that you get. Decades ago, when scientists first estimated the masses of galaxy clusters based on the motions of the galaxies within them, they realized that something strange was going on. The galaxies were moving faster than expected, which implied that the clusters were more massive than previously thought, based on the amount of light they emitted. The prevailing explanation today is that galaxy clusters contain vast amounts of dark matter.

Measurements of the masses of galaxy clusters can tell scientists about the sizes and shapes of the dark matter “halos” enveloping them and can help them determine the effects of dark energy, which scientists think is driving the universe’s accelerating expansion.

Today, researchers use a combination of simulations and space- and ground-based telescope observations to estimate the total masses of galaxy clusters.

Redshift, blueshift: Just as an ambulance’s siren seems higher in pitch as it approaches and lower as it speeds into the distance, the light of objects traveling away from us is shifted to longer, “redder” wavelengths, and the light of those traveling toward us is shifted to shorter, “bluer” wavelengths. Measurements of these shifts in light coming from galaxies orbiting a galaxy cluster can tell scientists how much gravitational pull the cluster has, which is related to its mass.
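As a rough numerical sketch of this method (all numbers hypothetical, using the non-relativistic Doppler formula and an order-of-magnitude virial estimate rather than any survey's actual pipeline):

```python
C_KM_S = 299_792.458   # speed of light, km/s

def radial_velocity(lambda_obs, lambda_rest):
    """Non-relativistic Doppler shift: positive v means the galaxy recedes."""
    return C_KM_S * (lambda_obs - lambda_rest) / lambda_rest

# Hypothetical member galaxy: the H-alpha line (656.3 nm at rest) observed
# at 658.5 nm, redshifted by the galaxy's orbit within the cluster.
v = radial_velocity(658.5, 656.3)   # ~1000 km/s

# A crude virial estimate from the velocity dispersion sigma of many such
# galaxies: M ~ sigma^2 R / G (order of magnitude only).
G = 6.674e-11          # m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
MPC = 3.086e22         # m
sigma = 1000e3         # m/s, a typical rich-cluster velocity dispersion
R = 1 * MPC            # cluster radius
mass = sigma**2 * R / G / M_SUN    # a few times 1e14 solar masses
```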

Gravitational lensing: Gravitational lensing, theorized by Albert Einstein, occurs when the light from a distant galaxy is bent by the gravitational pull of a massive object between it and the viewer. This bending distorts the image of the background galaxy (pictured above). Where the effects are strong, the process can cause dramatic distortions; multiple images of the galaxy can appear. Typically, however, the effects are subtle and require careful measurements to detect. The greater the lensing effect caused by a galaxy cluster, the larger the galaxy cluster’s mass.
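For a feel of the numbers, here is a point-mass Einstein-radius estimate; the cluster mass and distances are illustrative, and the Euclidean distance shortcut stands in for the cosmological angular-diameter distances a real analysis would use:

```python
from math import sqrt, pi

G = 6.674e-11       # m^3 kg^-1 s^-2
C = 2.998e8         # m/s
M_SUN = 1.989e30    # kg
MPC = 3.086e22      # m

def einstein_radius(mass, d_lens, d_source):
    """Point-mass Einstein radius in radians.

    Uses d_ls = d_source - d_lens, a Euclidean shortcut; real cluster
    lensing work uses cosmological angular-diameter distances.
    """
    d_ls = d_source - d_lens
    return sqrt(4 * G * mass / C**2 * d_ls / (d_lens * d_source))

# Illustrative massive cluster lensing a background galaxy twice as distant:
theta = einstein_radius(1e15 * M_SUN, 1000 * MPC, 2000 * MPC)
theta_arcsec = theta * (180 / pi) * 3600   # tens of arcsec: strong lensing
```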

X-rays: Galaxy clusters are filled with superhot, 10- to 100-million-degree gas that shines brightly at X-ray wavelengths. Scientists use X-ray data from space telescopes to find and study massive galaxy clusters. They can use the measured properties of the gas to infer the clusters’ masses.

The Sunyaev-Zel’dovich effect: The Sunyaev-Zel’dovich effect is a shift in the wavelength of the Cosmic Microwave Background—light left over from the big bang—that occurs when this light passes through the hot gas in a galaxy cluster. The size of the wavelength shift can tell scientists the mass of the galaxy cluster it passed through.

“These methods are much more powerful in combination than alone,” says Aaron Roodman, a faculty member at the Kavli Institute for Particle Astrophysics and Cosmology at SLAC National Accelerator Laboratory.

Forthcoming data from the Dark Energy Survey, the under-construction Large Synoptic Survey Telescope and Dark Energy Spectroscopic Instrument, improved Sunyaev-Zel’dovich effect measurements, and the soon-to-be-launched ASTRO-H and eRosita X-ray telescopes should further improve galaxy cluster mass estimates and advance cosmology. Computer simulations are also playing an important role in testing and improving mass estimates based on data from observations.

Even with an extensive toolkit, it remains a challenging business to weigh galaxy clusters, says Marc Kamionkowski, a theoretical physicist and professor of physics and astronomy at Johns Hopkins University. They are constantly changing; they continue to suck in matter; their dark matter halos can overlap; and no two are alike.

“It’s like asking how many birds are in my backyard,” he says.

Despite this, Allen says he sees no roadblocks toward pushing mass estimates to within a few percent accuracy.

“We will be able to take full advantage of these amazing new data sets that are coming along,” he says. “We are going to see rapid advances.”


Jester - Resonaances

Higgs Recap
On the occasion of summer conferences the LHC experiments dumped a large number of new Higgs results. Most of them have already been advertised on other blogs. In case you missed anything, here I summarize the most interesting updates of the last few weeks.

1. Mass measurements.
Both ATLAS and CMS recently presented improved measurements of the Higgs boson mass in the diphoton and 4-lepton final states. The errors shrink to 400 MeV in ATLAS and 300 MeV in CMS. The news is that Higgs has lost some weight (the boson, not Peter). A naive combination of the ATLAS and CMS results yields the central value 125.15 GeV. The profound consequence is that, for another year at least, we will call it the 125 GeV particle, rather than the 125.5 GeV particle as before ;)

While the central values of the Higgs mass combinations quoted by ATLAS and CMS are very close, 125.36 vs 125.03 GeV, the individual inputs are still a bit apart from each other. Although the consistency of the ATLAS measurements in the diphoton and 4-lepton channels has improved, these two independent mass determinations differ by 1.5 GeV, which corresponds to a 2 sigma tension. Furthermore, the central values of the Higgs mass quoted by ATLAS and CMS differ by 1.3 GeV in the diphoton channel and by 1.1 GeV in the 4-lepton channel, which also amounts to roughly 2 sigma discrepancies. This could be just bad luck, or maybe the systematic errors are slightly larger than the experimentalists think.
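A naive combination like the one above is just an inverse-variance weighted average of the two quoted numbers, ignoring correlated systematics (which is what makes it naive):

```python
# (central value, uncertainty) in GeV: ATLAS and CMS combinations as quoted.
measurements = [(125.36, 0.40), (125.03, 0.30)]

# Inverse-variance weighting: each input counts as 1/error^2.
weights = [1 / err**2 for _, err in measurements]
m_comb = sum(w * m for w, (m, _) in zip(weights, measurements)) / sum(weights)
err_comb = sum(weights) ** -0.5

print(f"{m_comb:.2f} +/- {err_comb:.2f} GeV")   # 125.15 +/- 0.24 GeV
```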

2. Diphoton rate update.
CMS finally released a new value of the Higgs signal strength in the diphoton channel. This CMS measurement was a bit of a roller-coaster: initially they measured an excess, then with the full dataset they reported a small deficit. After more work and more calibration they settled on the value 1.14 +0.26/−0.23 relative to the standard model prediction, in perfect agreement with the standard model. Meanwhile ATLAS is also revising the signal strength in this channel towards the standard model value. The number 1.29±0.30 quoted on the occasion of the mass measurement is not yet the final one; there will soon be a dedicated signal strength measurement with, most likely, a slightly smaller error. Nevertheless, we can safely announce that the celebrated Higgs diphoton excess is no more.

3. Off-shell Higgs.
Most of the LHC searches are concerned with an on-shell Higgs, that is, one whose 4-momentum squared is very close to its mass squared. This is where the Higgs is most easily recognizable, since it can show up as a bump in invariant mass distributions. However the Higgs, like any quantum particle, can also appear as a virtual particle off mass shell and influence, in a subtler way, the cross sections or differential distributions of various processes. One place where an off-shell Higgs may visibly contribute is the pair production of on-shell Z bosons. In this case, the interference between the gluon-gluon → Higgs → Z Z process and the non-Higgs one-loop Standard Model contribution to the gluon-gluon → Z Z process can influence the cross section in a non-negligible way. At the beginning, these off-shell measurements were advertised as a model-independent Higgs width measurement, although it is now recognized that the "model-independent" claim does not stand. Nevertheless, measuring the ratio of the off-shell and on-shell Higgs production provides qualitatively new information about the Higgs couplings and, under some specific assumptions, can be interpreted as an indirect constraint on the Higgs width. Both ATLAS and CMS now quote constraints on the Higgs width at the level of 5 times the Standard Model value. Currently, these results are not very useful in practice. Indeed, it would require a tremendous conspiracy to reconcile the current data with a Higgs width larger than 1.3 times the standard model one. But a new front has been opened, and one hopes for much more interesting results in the future.

4. Tensor structure of Higgs couplings.
Another front that is being opened as we speak is constraining higher-order Higgs couplings with a different tensor structure. So far, we have been given the so-called spin/parity measurements. That is to say, the LHC experiments imagine a 125 GeV particle with a different spin and/or parity than the Higgs, and couplings to matter consistent with that hypothesis. Then they test whether this new particle or the standard model Higgs better describes the observed differential distributions of Higgs decay products. This has some appeal to the general public and Nobel committees but little practical meaning. That's because the current data, especially the Higgs signal strength measured in multiple channels, clearly show that the Higgs is, to first approximation, the standard model one. New physics, if it exists, may only be a small perturbation on top of the standard model couplings. The relevant question is how well we can constrain these perturbations. For example, the possible couplings of the Higgs to the Z boson can be written schematically as (h/v)[m_Z² Z_μZ^μ + a2 Z_μν Z^μν + a3 Z_μν Z̃^μν + …].

In the standard model only the first type of coupling is present in the Lagrangian, and all the a coefficients are zero. New heavy particles coupled to the Higgs and Z bosons could be indirectly detected by measuring non-zero a's. In particular, a3 violates the parity symmetry and could arise from mixing of the standard model Higgs with a pseudoscalar particle. The presence of non-zero a's would show up, for example, as a modification of the lepton momentum distributions in the Higgs decay to 4 leptons. This was studied by CMS in this note. What they do is not perfect yet, and the results are presented in an unnecessarily complicated fashion. In any case it's a step in the right direction: as the analysis improves and more statistics are accumulated in the next runs, these measurements will become an important probe of new physics.

5. Flavor violating decays.
In the standard model, the Higgs couplings conserve flavor, in both the quark and the lepton sectors. This is a consequence of the assumptions that the theory is renormalizable and that only one Higgs field is present. If either of these assumptions is violated, the Higgs boson may mediate transitions between different generations of matter. Earlier, ATLAS and CMS searched for top quark decays to a charm quark and a Higgs. More recently, CMS turned to lepton flavor violation, searching for Higgs decays to τμ pairs. This decay cannot occur in the standard model, so the search is a clean null test. At the same time, the final state is relatively simple from the experimental point of view, so this decay may be a sensitive probe of new physics. Amusingly, CMS sees a 2.5 sigma excess corresponding to an h→τμ branching fraction of order 1%. So we can entertain the possibility that the Higgs holds the key to new physics and flavor hierarchies, at least until ATLAS comes out with its own measurement.

July 23, 2014

astrobites - astro-ph reader's digest

Searching for Signs of Plate Tectonics in Polluted White Dwarfs

Plate tectonics is a unique feature of Earth. Unlike every other rocky body in our Solar System, Earth’s crust is broken up into roughly a dozen pieces (plates) that move around with different velocities. When two plates converge, one plate sinks under the other in a process called subduction, eventually becoming recycled into the mantle and causing volcanism, earthquakes, and mountain building on the surface.

Plate tectonics regulates Earth’s atmospheric composition through the cycle of subduction and volcanic degassing. Subduction returns carbonate rocks like limestone from the seafloor into the mantle, for example, while carbon dioxide is emitted from volcanoes at plate boundaries. It’s probably not a coincidence that the only known planet with plate tectonics is the only one with life. So, we’d really like to know if plate tectonics is a thing that happens on terrestrial exoplanets.

Today’s paper discusses the first step in a search for evidence of plate tectonics—a hunt for the remnants of continental crust within two white dwarfs that have accreted little bits of rocky planets, aka planetesimals. Although these two stars yielded no evidence for extrasolar plate tectonics, a survey of more targets seems exceedingly worthwhile.

Telltale Signs of Plate Tectonics

To a rough approximation, rocky planets are chemically differentiated into three distinct layers. Early in the process of planetary formation, an iron-rich core separates from a silicate mantle. A basaltic crust then forms over time from the partial melting of the mantle. Oxygen and silicon are the most abundant elements in basaltic crust, followed by calcium and aluminum, as observed on Mars, the Moon, and Vesta.

Figure 1 (Wikipedia). Cartoon of a converging boundary between oceanic and continental plates. Oceanic, basaltic crust is subducted because it is more dense than continental crust. The subducting plate partially melts, causing surface volcanism and the production of more continental crust, which is enriched in incompatible elements like strontium and barium.

Earth’s oceanic crust is basaltic, but we also have continental crust, which forms from partial melting of subducted oceanic crust. Continental crust is chemically distinct because it is extremely enriched in “incompatible” elements that generally have large ionic radii and thus tend to come out of a partially melting rock. In particular, Earth’s continental crust is dramatically enriched in strontium and barium, by factors of ~10-100 relative to calcium. No other known process can produce similarly high elemental ratios.

Continents are less dense than oceanic crust, so they tend to stick around on the surface, as shown on Figure 1. If a planet has plate tectonics for a significant part of its history, then evidence should remain on its outermost skin. Unfortunately, we can’t send a probe to any exoplanets in the foreseeable future, nor can we spectrally analyze the surfaces of distant rocky planets from Earth. Using a new technique, however, we can search the corpses of Sun-like stars for remnants of Earth-like planets.

A Pilot Search of Two White Dwarfs

Our Sun will end its life as a white dwarf, after it expands into a red giant and sheds its outer layers as a planetary nebula. (N.b., planetary nebulae have nothing to do with planets and are a great example of astronomers being very silly with words. But they look cool!) Planets could survive this end stage of stellar evolution and remain orbiting the white dwarf.

Figure 2 (Jura et al. 2014). Spectrum of the polluted white dwarf GD 362 in the vicinity of the Ba II spectral feature. Black lines are data and the red lines are a model with the maximum amount of barium consistent with the data. GD 362 is accreting rocky planetesimals, but they are not sufficiently enriched in barium to argue that they contain a significant amount of continental crust from a planet with plate tectonics.

Elements heavier than hydrogen or helium quickly (in ~10,000 years) sink below the observable photosphere of white dwarfs. This is fascinating, because observing heavier elements in the spectrum of a white dwarf means that something like a rocky planetesimal has just accreted, or is accreting, onto the “polluted” white dwarf. (Without external accretion, we expect to see only hydrogen and helium.) Scientists used these tactics to argue for extremely water-rich asteroids orbiting the white dwarf GD 61.
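The settling argument can be made concrete with a toy model. Assuming, as a simplification, that photospheric pollution decays exponentially on the ~10,000-year timescale quoted above (real diffusion timescales depend on the element and on the white dwarf, so this is purely illustrative):

```python
from math import exp

def metal_fraction(t_years, settling_time=1e4):
    """Toy model: fraction of an accreted metal abundance still visible
    in a white dwarf's photosphere after t_years, assuming simple
    exponential settling with the ~10,000-year timescale quoted above."""
    return exp(-t_years / settling_time)
```

After five settling times (roughly 50,000 years) less than 1% of the pollution remains visible, which is why observed metals imply ongoing or geologically very recent accretion.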

Imagine the following scenario: an asteroid impact knocks off part of the crust from a planet orbiting a white dwarf that has plate tectonics. If these crustal remnants were then accreted onto the white dwarf, then we could observe signs of rocky material with a very high barium-to-calcium or strontium-to-calcium ratio in its spectrum.

Figure 2 shows the region of the spectrum of the polluted white dwarf GD 362 (black data) that would reveal an absorption line of barium. Alack, the authors place an upper limit on the barium abundance (red model) that excludes the accretion of a significant amount of continental crust. Similar results were obtained for another polluted white dwarf, PG 1225-079.

Current observational techniques are sufficient to detect signs of continental crust, if nature were to obligingly provide them. Accretion of continental crust onto a white dwarf is probably an exceedingly rare event, even if rocky planets with plate tectonics are plentiful. But finding plate tectonics is a big step towards finding life, so the search continues.

ZapperZ - Physics and Physicists

Lights On Pipes - Which One Heats The Most?
We also deal with elementary stuff here.

The people at the Frostbite Theater at JLab have another video out. This time, they show an experiment on which pipe heats up the most when light is shined on it.

The results are not surprising. But what is surprising is why the white pipe heats up faster initially. So, does anyone want to enter a science fair to study why this is so, especially since Steve is way too old to enter?

Zz.

ZapperZ - Physics and Physicists

Not sure how long this article will be available without a subscription, but in case you missed this article on LIGO in last week's issue of Nature, this is a good one to keep.

De Rosa, a physicist at Louisiana State University in Baton Rouge, knows he has a long night ahead of him. He and half a dozen other scientists and engineers are trying to achieve 'full lock' on a major upgrade to the detector — to gain complete control over the infrared laser beams that race up and down two 4-kilometre tunnels at the heart of the facility. By precisely controlling the path of the lasers and measuring their journey in exquisite detail, the LIGO team hopes to observe the distinctive oscillations produced by a passing gravitational wave: a subtle ripple in space-time predicted nearly a century ago by Albert Einstein, but never observed directly.

It's a daunting task: with an instrument of such precision, many things can contribute to the "noise" being detected. We will just have to wait and see whether we get to detect such gravitational waves anytime soon.

Zz.

arXiv blog

How to Convert a Satellite Dish Into a Radio Telescope

If you fancy trying your hand at radio astronomy, why not convert an old satellite dish?

John Baez - Azimuth

El Niño Project (Part 6)

guest post by Steven Wenner

Hi, I’m Steve Wenner.

I’m an industrial statistician with over 40 years of experience in a wide range of applications (quality, reliability, product development, consumer research, biostatistics); but, somehow, time series only rarely crossed my path. Currently I’m working for a large consumer products company.

My undergraduate degree is in physics, and I also have a master’s in pure math. I never could reconcile how physicists used math (explain that Dirac delta function to me again in math terms? Heaviside calculus? On the other hand, I thought category theory was abstract nonsense until John showed me otherwise!). Anyway, I had to admit that I lacked the talent to pursue pure math or theoretical physics, so I became a statistician. I never regretted it—statistics has provided a very interesting and intellectually challenging career.

I got interested in Ludescher et al’s paper on El Niño prediction by reading Part 3 of this series. I have no expertise in climate science, except for an intense interest in the subject as a concerned citizen. So, I won’t talk about things like how Ludescher et al use a nonstandard definition of ‘El Niño’—that’s a topic for another time. Instead, I’ll look at some statistical aspects of their paper:

• Josef Ludescher, Avi Gozolchiani, Mikhail I. Bogachev, Armin Bunde, Shlomo Havlin, and Hans Joachim Schellnhuber, Very early warning of next El Niño, Proceedings of the National Academy of Sciences, February 2014. (Click title for free version, journal name for official version.)

Analysis

I downloaded the NOAA adjusted monthly temperature anomaly data and compared the El Niño periods with the charts in this paper. I found what appear to be two errors (“phantom” El Niños) and noted some interesting situations. Some of these are annotated on the images below. Click to enlarge them:

I also listed for each year whether an El Niño initiation was predicted, or not, and whether one actually happened. I did the predictions five ways: first, I listed the authors’ “arrows” as they appeared on their charts, and then I tried to match their predictions by following in turn four sets of rules. Nevertheless, I could not come up with any detailed rules that exactly reproduced the authors’ results.

These were the rules I used:

An El Niño initiation is predicted for a calendar year if during the preceding year the average link strength crossed above the 2.82 threshold. However, we could also invoke additional requirements. Two possibilities are:

1. Preemption rule: the prediction of a new El Niño is canceled if the preceding year ends in an El Niño period.

2. End-of-year rule: the link strength must be above 2.82 at year’s end.

I counted the predictions using all four combinations of these two rules and compared the results to the arrows on the charts.
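The rules above can be layered on the base threshold rule in code. This is a sketch of my reading of those rules, not the authors' actual procedure; the function and its arguments are hypothetical:

```python
def predict_el_nino(link_strength, threshold=2.82,
                    preemption=False, end_of_year=False,
                    prior_year_ends_in_el_nino=False):
    """Sketch of the prediction rules described above (hypothetical
    helper, not the authors' actual procedure).

    link_strength: average-link-strength values for the preceding
    calendar year, in time order."""
    # Base rule: the link strength crossed above the threshold during the year.
    predicted = any(a <= threshold < b
                    for a, b in zip(link_strength, link_strength[1:]))
    # Optional end-of-year rule: must still be above the threshold at year's end.
    if end_of_year:
        predicted = predicted and link_strength[-1] > threshold
    # Optional preemption rule: cancel if the preceding year ends in an El Niño.
    if preemption and prior_year_ends_in_el_nino:
        predicted = False
    return predicted
```

Running the base rule, the end-of-year rule, and the preemption rule in all four combinations over the same series is then a matter of toggling the two keyword flags.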

I defined an “El Niño initiation month” to be a month where the monthly average adjusted temperature anomaly rises to at least 0.5 °C and remains at or above 0.5 °C for at least five months. Note that the NOAA El Niño monthly temperature estimates are rounded to hundredths; and, on occasion, the anomaly is reported as exactly 0.5 °C. I found slightly better agreement with the authors’ El Niño periods if I counted an anomaly of exactly 0.5 °C as satisfying the threshold criterion, instead of using the strictly “greater than” condition.
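The initiation-month definition translates directly into code. This is a sketch of my stated criterion (with equality counting towards the threshold), not NOAA's or the authors' software:

```python
def initiation_months(anomalies, threshold=0.5, run_length=5):
    """Indices of El Niño initiation months: the monthly anomaly reaches
    the threshold (equality counts, as discussed above) after being below
    it, and stays at or above it for at least run_length months.
    A sketch of my criterion, not NOAA's software."""
    months = []
    for i, a in enumerate(anomalies):
        starts = a >= threshold and (i == 0 or anomalies[i - 1] < threshold)
        long_enough = len(anomalies) - i >= run_length
        if starts and long_enough and all(
                x >= threshold for x in anomalies[i:i + run_length]):
            months.append(i)
    return months
```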

Anyway, I did some formal hypothesis testing and estimation under all five scenarios. The good news is that under most scenarios the prediction method gave better results than merely guessing. (But, I wonder how many things the authors tried before they settled on their final method? Also, did they do all their work on the learning series, and then only at the end check the validation series—or were they checking both as they went about their investigations?)

The bad news is that the predictions varied with the method, and the methods were rather weak. For instance, in the training series there were 9 El Niño periods in 30 years; the authors’ rules (whatever they were, exactly) found five of the nine. At the same time, they had three false alarms in the 21 years that did not have an El Niño initiated.

I used Fisher’s exact test to compute some p-values. Suppose (as our ‘null hypothesis’) that Ludescher et al’s method does not improve the odds of a successful prediction of an El Niño initiation. What’s the probability of that method getting at least as many predictions right just by chance? Answer: 0.032 – this is marginally more significant than the conventional 1 in 20 chance that is the usual threshold for rejecting a null hypothesis, but still not terribly convincing. This was, by the way, the most significant of the five p-values for the alternative rule sets applied to the learning series.
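For the record, that p-value can be reproduced from the training-series counts (5 hits among 8 predictions, 9 El Niños in 30 years); the 2×2 table here is reconstructed from those counts, and the one-sided Fisher test is just a hypergeometric tail sum:

```python
from math import comb

def fisher_exact_greater(a, b, c, d):
    """One-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]]:
    the probability of at least `a` correct predictions by chance, given
    the fixed margins (a+b predictions, a+c El Niño years, a+b+c+d years)."""
    n, row1, col1 = a + b + c + d, a + b, a + c
    denom = comb(n, row1)
    return sum(comb(col1, k) * comb(n - col1, row1 - k)
               for k in range(a, min(row1, col1) + 1)) / denom

# Training series: 5 hits among 8 predictions; 9 El Niños in 30 years.
p = fisher_exact_greater(5, 3, 4, 18)  # ~0.032, as quoted above
```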

I also computed the “relative risk” statistics for all scenarios; for instance, we are more than three times as likely to see an El Niño initiation if Ludescher et al predict one, than if they predict otherwise (the 90% confidence interval for that ratio is 1.2 to 9.7, with the point estimate 3.4). Here is a screen shot of some statistics for that case:
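The relative-risk point estimate is easy to check from the same 2×2 table. Note that the interval below uses the textbook Wald (log-RR) approximation; JMP likely uses a different interval method, so these bounds need not match the quoted 1.2 to 9.7 exactly:

```python
from math import exp, log, sqrt

def relative_risk(a, b, c, d, z=1.645):
    """Relative risk for the 2x2 table [[a, b], [c, d]] (rows: predicted /
    not predicted; columns: El Niño / none), with a Wald log-RR interval;
    z = 1.645 gives a 90% interval.  JMP's interval method may differ,
    so the bounds need not match the text exactly."""
    rr = (a / (a + b)) / (c / (c + d))
    se = sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
    return rr, exp(log(rr) - z * se), exp(log(rr) + z * se)

rr, lo, hi = relative_risk(5, 3, 4, 18)  # point estimate ~3.4
```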

Here is a screen shot of part of the spreadsheet list I made. In the margin on the right I made comments about special situations of interest.

Again, click to enlarge—but my whole working spreadsheet is available with more details for anyone who wishes to see it. I did the statistical analysis with a program called JMP, a product of the SAS corporation.

My overall impression from all this is that Ludescher et al are suggesting a somewhat arbitrary (and not particularly well-defined) method for revealing the relationship between link strength and El Niño initiation, if, indeed, a relationship exists. Slight variations in the interpretation of their criteria and slight variations in the data result in appreciably different predictions. I wonder if there are better ways to analyze these two correlated time series.

July 22, 2014

The Great Beyond - Nature blog

Scripps president resigns after faculty revolt

The president of the Scripps Research Institute intends to leave his post, according to a statement from Richard Gephardt, the chair of the institute’s board of trustees. The announcement came in the wake of a faculty rebellion against the president, Michael Marletta, who had attempted to broker a deal in which the research lab, in La Jolla, California, would be acquired by the Los Angeles-based University of Southern California (USC) for US$600 million. In the statement, posted on 21 July, Gephardt said that Marletta “has indicated his desire to leave” Scripps and that the board “is working with Dr Marletta on a possible transition plan”.

Scripps faculty members see Marletta’s departure as a victory. They had been angered by the terms of the USC deal, which was scrapped on 9 July, and by the fact that Marletta did not consult with faculty during his negotiations with USC. Faculty members told the Scripps board of trustees earlier this month that they had an almost unanimous consensus of no confidence in Marletta. “I think we are more optimistic than we have been in many years, because we feel like we have some control over our own fate,” says Scripps biologist Jeanne Loring. Loring said that at a meeting with a majority of Scripps faculty on 21 July, Gephardt indicated that the board had thought that Marletta was communicating with the faculty as he negotiated the USC deal. Gephardt also promised that faculty would be involved in choosing Marletta’s successor.

Whoever replaces Marletta must find a way to close a projected $21-million budget gap this year left by the contraction of funding from the US National Institutes of Health (NIH) and by the virtual disappearance of support from pharmaceutical companies, which had provided major support for Scripps until 2011.

How Scripps solves its funding issue will be watched by other independent institutes, which have been hard hit by the contraction in NIH dollars. Scripps’ neighbour institutes have brought in hundreds of millions of dollars in philanthropy, and many involved see that as part of the solution for Scripps as well. But, Loring says, “the funding that other institutes have got from philanthropy is going to be a short-term solution, because even though it seems like an awful lot of money, they have to spend it, so they will eventually be facing the same issues.”

ZapperZ - Physics and Physicists

Big Mystery in the Perseus Cluster
The news about the x-ray emission line seen in the Perseus cluster that can't be explained (yet) by current physics.

The preprint that this video is based on can be found here.

Zz.

The Great Beyond - Nature blog

São Paulo state joins mega-telescope

The Giant Magellan Telescope (GMT) received a boost today when Brazil’s São Paulo Research Foundation (FAPESP) confirmed its plans to join the project. The US$880-million facility, some components of which have already been built, is one of three competing mega-telescopes that will study the skies in the next decade. Confirming plans reported by Nature in February, the richest state in Brazil announced on 22 July that it would contribute $40 million towards membership of the GMT, which is managed by a consortium of institutions in the United States, Australia and South Korea.

São Paulo researchers might not be the only ones to benefit. FAPESP scientific director Carlos Henrique de Brito Cruz told Nature’s news team that negotiations between the foundation and the Ministry of Science and Technology of Brazil were “well advanced to share these costs and allow astronomers from all states of Brazil to have access to the telescope”. If that plan goes ahead, the ministry will refund part of the costs to FAPESP.

Although a boon for Brazilian astronomers, the move could raise concerns for advocates of the Extremely Large Telescope (E-ELT), which is being built by the European Southern Observatory (ESO) in Chile. ESO has begun blasting the top off the 3,000-metre peak of Cerro Armazones where the E-ELT will be based, but is reliant on funding from Brazil’s federal government to enter the main construction phase. In 2010 Brazil agreed to contribute €270 million (\$371 million) to ESO over a decade, but the deal has yet to be ratified and remains held up in legislative committees.

Some legislators may see the GMT agreement as a cheaper way for Brazil’s astronomers to access a future mega-telescope, even though the ESO deal also allows access to existing observatories in Chile. However, Beatrice Barbuy, head of the Astronomical Society of Brazil’s ESO committee, says that the plans are still moving ahead. She adds that they had stalled in recent months owing to the country’s hosting of the FIFA World Cup and staff going on winter vacations, but discussions were likely to get underway again in August.

The 25-metre GMT, to be built at the Carnegie Institution for Science’s Las Campanas Observatory in Chile, is scheduled to begin operations in 2020. It is designed to have six times the collecting power of the largest existing observatories and 10 times the resolution of NASA’s Hubble Space Telescope. The agreement is expected to secure São Paulo a 4% stake in the GMT project, guaranteeing 4% of observation time for Brazilian astronomers each year, as well as representation on the consortium’s decision-making board.

The GMT, E-ELT and a third planned next-generation ground-based observatory, the Thirty Meter Telescope, proposed to be built in Mauna Kea in Hawaii, are intended to address similar science questions. Astronomers hope to use the huge light-collecting capacity of the telescopes to explore planets outside our Solar System, study supermassive black holes and galaxy formation and unravel the nature of dark matter and dark energy.

Symmetrybreaking - Fermilab/SLAC

Exploratorium exhibit reveals the invisible

A determined volunteer gives an old detector new life as the centerpiece of a cosmic ray exhibit.

Watch one of the exhibits in San Francisco’s Exploratorium science museum and count to 10, and you’ll have a very good chance of seeing a three-foot-long, glowing red spark.

The exhibit is a spark chamber, a piece of experimental equipment 5 feet wide and more than 6 feet tall, and the spark marks the path of a muon, a particle released when a cosmic ray hits the Earth’s atmosphere. The spark chamber came to the museum by way of the garage of physicist and computer scientist Dave Grossman.

“I always thought this would make a great science exhibit,” says Grossman, who spent more than eight years gathering funding and equipment from places like SLAC National Accelerator Laboratory and Fermi National Accelerator Laboratory, building the chamber, and trying to find it a home.

Grossman wrote the book—the PhD dissertation, actually—on this type of spark chamber during the mid-1960s when he was a graduate student at Harvard University. His task was to help design and build a spark chamber that could reveal the precise paths of certain types of particles.

All spark chambers contain a mixture of inert gases—such as neon, helium and argon—that glow when an electric current passes through them (think neon signs). When an energetic charged particle passes through the gas, it leaves a trail of ionized molecules. When voltage is applied to the gas, the current flows along the trail, illuminating the particle’s path.

The longer the path, the higher the necessary voltage. Typical spark chambers from before Grossman’s time at Harvard could light up only a centimeter or two of trail. Grossman labored to design a compact, dependable generator that could produce 240,000 volts for 100 nanoseconds, enough voltage to illuminate charged particle paths measured in feet instead.

The spark chamber design worked well but was quickly rendered obsolete by more sensitive, more compact digital technology. After graduation, Grossman shifted from particle physics to computer science and went on to a long, successful career with IBM.

But during the years he spent as an occasional volunteer at his kids' and grandkids' schools, teaching students about robotics or sharing his telescope at star parties, Grossman never forgot his pet project or the thesis advisor and friend who guided him through it, Karl Strauch.

“Karl taught me the most by his own example,” Grossman says. “He was willing to do anything necessary for the sake of the science. He would even sweep the floor if he thought it was too dirty.”

Finally, retirement provided time; the garage of his Palo Alto home gave him the space; and donors provided the means for him to rebuild his spark chamber. Nobel Laureates Steven Weinberg and Norman Ramsey (Harvard colleagues of Strauch’s), Strauch’s son, venture capitalist Roger Strauch, and his business partner Dan Miller all pitched in.

The Exploratorium was happy to reap the benefits.

“I went to Dave Grossman’s house twice to look at it and I was impressed,” says Exploratorium Senior Scientist Thomas Humphrey. “I’ve made spark chambers, and they’re finicky beasts.”

Humphrey gave the go-ahead, and the detector was installed in the museum’s Central Gallery, where it attracts visitors young and old.

“Visitors are really excited to see it,” Humphrey says. “Cosmic rays are so mysterious. But here you can walk right up to a device and see a spark in real time. It makes the unseen seen.”

Editor's note: The exhibit will be available for viewing in mid-August.

Like what you see? Sign up for a free subscription to symmetry!

CERN Bulletin

CERN Bulletin Issue No. 30-31/2014
Link to e-Bulletin Issue No. 30-31/2014. Link to all articles in this issue.

Tommaso Dorigo - Scientificblogging

True And False Discoveries: How To Tell Them Apart
Many new particles and other new physics signals claimed in the last twenty years were later proven to be spurious effects, due to background fluctuations or unknown sources of systematic error. The list is long, unfortunately - and longer than the list of particles and effects that were confirmed to be true by subsequent more detailed or more statistically-rich analysis.

Clifford V. Johnson - Asymptotia

74 Questions
Hello from the Aspen Center for Physics. One of the things I wanted to point out to you last month was the 74 questions that Andy Strominger put on the slides of his talk in the last session of the Strings 2014 conference (which, you may recall from earlier posts, I attended). This was one of the "Vision Talks" that ended the sessions, where a number of speakers gave some overview thoughts about work in the field at large. Andy focused mostly on progress in quantum gravity matters in string theory, and was quite upbeat. He declines (wisely) to make predictions about where the field might be going, instead pointing out (not for the first time) that if you look at the things we've made progress on in the last N years, most (if not all) of those things would not have been on anyone's list of predictions N years ago. (He gave a specific value for N, I just can't recall what it is, but it does not matter.) He sent an email to everyone who was either speaking, organising, moderating a session or similarly involved in the conference, asking them to send, off the [...]

Lubos Motl - string vacua and pheno

CMS: a $$2.1\TeV$$ right-handed $$W_R^\pm$$-boson
Since the beginning of this month, the ATLAS and CMS collaborations have reported several intriguing excesses such as the apparent enhancement of the $$W^+W^-$$ cross section (which may be due to some large logarithms neglected by theorists, as a recent paper indicated), a flavor-violating Higgs decay, leptoquarks, and a higgsino excess, among others.

Bizarrely enough, all of us missed another, 2.8-sigma excess exactly one week ago:
CMS: Search for heavy neutrinos and $$W^\pm_R$$ bosons with right-handed couplings in proton-proton collisions at $$\sqrt{s} = 8 \TeV$$ (arXiv)
The ordinary $$W^\pm$$-bosons only interact with the left-handed component of the electron, muon, and tau, because only those transform nontrivially (as a doublet) under the relevant $$SU(2)_W$$ part of the electroweak gauge group.

However, there exist models of new physics where this left-right asymmetry is fundamentally "repaired" at higher energies – and its apparent breakdown at accessible energies is due to some spontaneous symmetry breaking.

The CMS search assumed a new spontaneously broken non-Abelian gauge group with a gauge boson $$W^\pm_R$$. Under this gauge group, the right-handed electron and muon may transform nontrivially and it doesn't create too much havoc at accessible energies as long as the gauge boson $$W^\pm_R$$ is very heavy.

In the search, one assumes that the $$W^\pm_R$$ boson is created in the proton-proton collisions and decays as $pp\to W^\pm_R \to \ell_1^\pm N_\ell \to \dots$ to a charged lepton and a (new) right-handed neutrino. The latter hypothetical particle is also in the multi-$${\rm TeV}$$ range and it decays to another charged lepton along with a new but virtual (therefore the asterisk) $$W^\pm_R$$ boson, so the chain above continues as $\dots \to \ell_1 \ell_2 W_R^* \to \ell_1\ell_2 q\bar q$ where the final step indicates the decay of the virtual $$W_R^*$$ boson to a quark-antiquark pair. Great. They have to look for events with two charged leptons and two jets.

So the CMS folks did their search and wrote that there is nothing interesting to be seen there. They may obliterate the proposals of new physics (of right-handed couplings of new gauge bosons) more lethally than anyone before them, they boast, and the exclusion zone for the $$W_R^\pm$$ goes as high as $$3\TeV$$.

However, under this boasting about exclusion, there is a "detail" that isn't advertised too much. Look at the exclusion graph above. You must have seen many graphs of this kind. On the $$x$$-axis, you see a parameter labeling the hypothesis about new physics – in this case, it's the mass of the $$W^\pm_R$$-boson. The right-handed neutrino is assumed to have mass $$m_N=m_{W(R)}/2$$.

On the $$y$$-axis, you see the number of $$\ell\ell q\bar q$$ events that look as if they originated from the new particle decaying as indicated above. If there is no new physics, the expected or predicted number of events (the "background", i.e. boring events without new physics that imitate new physics) is captured by the dotted line plus or minus the green and yellow (1-sigma and 2-sigma) bands. The actual number of measured events is depicted by the full black line.

If there is no new physics, the wiggly black line is expected to stay within the Brazil band 95% of the time. The red strip shows the prediction assuming that there is new physics – in this case, new $$W^\pm_R$$-bosons that are coupled as strongly as the known $$W_L^\pm$$-bosons.

The wiggly black curve (observation) never gets close to the red strip. However, you may see that the wiggly black curve violates the Brazil band. If the wiggly curve were black-red-yellow (German), it would tear the Brazil band apart by 7.1 sigma. (That was a stupid soccer joke.) But even the black wiggly curve deviates by 2.8 sigma, something like 99.5% confidence level.
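As a generic aside (not part of the CMS analysis): the conversion between a sigma level and the quoted confidence level, for a two-sided Gaussian fluctuation, is a one-liner with the standard library:

```python
from math import erf, sqrt

def two_sided_cl(n_sigma):
    """Confidence level (as a fraction) excluded by an n-sigma two-sided
    Gaussian fluctuation: erf(n / sqrt(2))."""
    return erf(n_sigma / sqrt(2))

cl = two_sided_cl(2.8)  # ~0.995, the confidence level quoted above
```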

This may be interpreted as a "near-certainty" that there are new $$W^\pm_R$$-bosons whose mass is about $$2.1\TeV$$ or perhaps between $$1.9\TeV$$ and $$2.4\TeV$$. Well, I am of course joking about the "near-certainty" but still, this "near-certainty" is 86 times stronger than the strongest available "proofs" that global warming exists.

The CMS collaboration dismisses the excess because it is nowhere near the red curve. So it must be a fluke. Well, it may also be a sign of new physics – but a different kind of physics than what the search was assuming. It's actually easy to adjust the theory so that it does predict a signal of this sort. Somewhat lower ($$g_R=0.6 g_L$$) couplings of the right-handed bosons are enough to weaken the predicted signal.

In a new hep-ph paper today,
A Signal of Right-Handed Charged Gauge Bosons at the LHC?,
Frank Deppisch and four co-authors argue that such new gauge bosons coupled to right-handed fermions may be predicted by $$SO(10)$$ grand unified theories. The minimal $$SU(5)$$ group is no good. Needless to say, I indeed love the $$SO(10)$$ grand unification more than I love the $$SU(5)$$ grand unification – especially because it's more (heterotic and) stringy and the fermions are produced in a single multiplet, not two.

The asymmetry in the left-handed and right-handed coupling (note that they need a suppression $$0.6$$ when going from the left to the right) may be achieved in "LRSM scenarios": the scalars charged under $$SU(2)_L$$ have a different mass than those under $$SU(2)_R$$, and the implied modifications of the RG running are enough to make the left-handed and right-handed couplings significantly different at low energies.

All these possibilities sound rather natural and Deppisch et al. are clearly excited about their proposal and think that it's the most promising potential signal of new physics at the LHC yet. I think that the probability is above 95% that this particular "signal" will go away but people who are interested in HEP experiments and phenomenology simply cannot and shouldn't ignore such news.

July 21, 2014

The n-Category Cafe

Pullbacks That Preserve Weak Equivalences

The following concept seems to have been reinvented a bunch of times by a bunch of people, and every time they give it a different name.

Definition: Let $C$ be a category with pullbacks and a class of weak equivalences. A morphism $f:A\to B$ is a [insert name here] if the pullback functor $f^\ast:C/B \to C/A$ preserves weak equivalences.

In a right proper model category, every fibration is one of these. But even in that case, there are usually more of these than just the fibrations. There is of course also a dual notion in which pullbacks are replaced by pushouts, and every cofibration in a left proper model category is one of those.

What should we call them?

The names that I’m aware of that have so far been given to these things are:

1. sharp map, by Charles Rezk. This is a dualization of the terminology flat map used for the dual notion by Mike Hopkins (I don’t know a reference, does anyone?). I presume that Hopkins’ motivation was that a ring homomorphism is flat if tensoring with it (which is the pushout in the category of commutative rings) is exact, hence preserves weak equivalences of chain complexes.

However, “flat” has the problem of being a rather overused word. For instance, we may want to talk about these objects in the canonical model structure on $Cat$ (where in fact it turns out that every such functor is a cofibration), but flat functor has a very different meaning. David White has pointed out that “flat” would also make sense to use for the monoid axiom in monoidal model categories.

2. right proper, by Andrei Radulescu-Banu. This is presumably motivated by the above-mentioned fact that fibrations in right proper model categories are such. Unfortunately, proper map also has another meaning.

3. $h$-fibration, by Berger and Batanin. This is presumably motivated by the fact that “$h$-cofibration” has been used by May and Sigurdsson for an intrinsic notion of cofibration in topologically enriched categories, which specializes in compactly generated spaces to closed Hurewicz cofibrations, and pushouts along the latter preserve weak homotopy equivalences. However, it makes more sense to me to keep “$h$-cofibration” with May and Sigurdsson’s original meaning.

4. Grothendieck $W$-fibration (where $W$ is the class of weak equivalences on $C$), by Ara and Maltsiniotis. Apparently this comes from unpublished work of Grothendieck. Here I guess the motivation is that these maps are “like fibrations” and are determined by the class $W$ of weak equivalences.

Does anyone know of other references for this notion, perhaps with other names? And any opinions on what the best name is? I’m currently inclined towards “$W$-fibration” mainly because it doesn’t clash with anything else, but I could be convinced otherwise.

The n-Category Cafe

The Place of Diversity in Pure Mathematics

Nope, this isn’t about gender or social balance in math departments, important as those are. On Friday, Glasgow’s interdisciplinary Boyd Orr Centre for Population and Ecosystem Health — named after the whirlwind of Nobel-Peace-Prize-winning scientific energy that was John Boyd Orr — held a one-day conference on diversity in multiple biological senses, from the large scale of rainforest ecosystems right down to the microscopic scale of pathogens in your blood.

I used my talk (slides here) to argue that the concept of diversity is fundamentally a mathematical one, and that, moreover, it is closely related to core mathematical quantities that have been studied continuously since the time of Euclid.

In a sense, there’s nothing new here: I’ve probably written about all the mathematical content at least once before on this blog. But in another sense, it was a really new talk. I had to think very hard about how to present this material for a mixed group of ecologists, botanists, epidemiologists, mathematical modellers, and so on, all of whom are active professional scientists but some of whom haven’t studied mathematics since high school. That’s why I began the talk with an explanation of how pure mathematics looks these days.

I presented two pieces of evidence that diversity is intimately connected to ancient, fundamental mathematical concepts.

The first piece of evidence is a connection at one remove, and schematically looks like this:

maximum diversity $\leftrightarrow$ magnitude $\leftrightarrow$ intrinsic volumes

The left leg is a theorem asserting that when you have a collection of species and some notion of inter-species distance (e.g. genetic distance), the maximum diversity over all possible abundance distributions is closely related to the magnitude of the metric space that the species form.

The right leg is a conjecture by Simon Willerton and me. It states that for convex subsets of $\mathbb{R}^n$, magnitude is closely related to perimeter, volume, surface area, and so on. When I mentioned “quantities that have been studied continuously since the time of Euclid”, that’s what I had in mind. The full-strength conjecture requires you to know about “intrinsic volumes”, which are the higher-dimensional versions of these quantities. But the 2-dimensional conjecture is very elementary, and described here.
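For finite metric spaces (such as a set of species with genetic distances), magnitude is entirely concrete: form the similarity matrix $Z$ with entries $Z_{ij} = e^{-d(i,j)}$, invert it, and sum all its entries. Here is a quick numerical sketch of that definition (my own toy numbers):

```python
import numpy as np

def magnitude(dist):
    """Magnitude of a finite metric space: the sum of the entries of the
    inverse of the similarity matrix Z, where Z_ij = exp(-d_ij)."""
    Z = np.exp(-np.asarray(dist, dtype=float))
    return float(np.linalg.inv(Z).sum())

# Two points at distance d: the known closed form is 1 + tanh(d/2).
d = 1.5
two_pt = [[0.0, d], [d, 0.0]]
assert abs(magnitude(two_pt) - (1 + np.tanh(d / 2))) < 1e-12

# Three equally spaced points on a line, scaled by t: as t grows,
# the magnitude approaches the number of points (3).
for t in (0.1, 1.0, 10.0):
    pts = np.array([0.0, 1.0, 2.0]) * t
    dist = np.abs(pts[:, None] - pts[None, :])
    print(t, magnitude(dist))
```

The scaling behaviour in the loop is exactly what the magnitude function $t \mapsto |tX|$ below probes.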

The second piece of evidence was a very brief account of a theorem of Mark Meckes, concerning fractional dimension of subsets $X$ of $\mathbb{R}^n$ (slide 15, and Corollary 7.4 here). One of the standard notions of fractional dimension is Minkowski dimension (also known by other names such as Kolmogorov or box-counting dimension). On the other hand, the rate of growth of the magnitude function $t \mapsto |tX|$ is also a decent notion of dimension. Mark showed that they are, in fact, the same. Thus, for any compact $X \subseteq \mathbb{R}^n$ with a well-defined Minkowski dimension $\dim X$, there are positive constants $c$ and $C$ such that

$c \, t^{\dim X} \le |tX| \le C \, t^{\dim X}$

for all $t \gg 0$.
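Minkowski dimension itself is easy to estimate numerically: count the boxes of side $\epsilon$ that the set meets, and read off the slope of $\log N(\epsilon)$ against $\log(1/\epsilon)$. A rough illustration (my own sketch, nothing to do with Mark's proof):

```python
import numpy as np

def box_counting_dimension(points, scales):
    """Estimate the Minkowski (box-counting) dimension of a point cloud:
    count occupied boxes N(eps) at several scales and fit the slope of
    log N(eps) against log(1/eps)."""
    points = np.asarray(points, dtype=float)
    counts = []
    for eps in scales:
        boxes = np.unique(np.floor(points / eps), axis=0)
        counts.append(len(boxes))
    slope, _ = np.polyfit(np.log(1.0 / np.array(scales)), np.log(counts), 1)
    return slope

# A densely sampled line segment in R^2 should have dimension close to 1.
ts = np.linspace(0.0, 1.0, 20000)
segment = np.column_stack([ts, 0.5 * ts])
scales = [0.1, 0.05, 0.02, 0.01]
print(box_counting_dimension(segment, scales))  # close to 1
```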

One remarkable feature of the proof is that it makes essential use of the concept of maximum diversity, where diversity is measured in precisely the way that Christina Cobbold and I came up with for use in ecology.
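That diversity measure has a compact formula: given relative abundances $p_i$ and a species-similarity matrix $Z$, the diversity of order $q$ is $\big(\sum_i p_i (Zp)_i^{q-1}\big)^{1/(1-q)}$ for $q \neq 1$. A small numerical sketch (my own toy numbers):

```python
import numpy as np

def diversity(q, p, Z=None):
    """Leinster-Cobbold diversity of order q for abundance distribution p
    and species-similarity matrix Z (Z = identity ignores similarity)."""
    p = np.asarray(p, dtype=float)
    Z = np.eye(len(p)) if Z is None else np.asarray(Z, dtype=float)
    Zp = Z @ p
    if q == 1:  # the q -> 1 limit: exponential of a similarity-weighted entropy
        return float(np.exp(-np.sum(p * np.log(Zp))))
    return float(np.sum(p * Zp ** (q - 1)) ** (1.0 / (1.0 - q)))

# Naive diversity (Z = identity): 4 equally abundant species count as 4.
p = [0.25, 0.25, 0.25, 0.25]
print(diversity(2, p))  # 4.0

# If the 4 species are deemed completely similar (Z all ones), the
# effective number of species drops to 1, whatever the abundances.
Z = np.ones((4, 4))
print(diversity(2, p, Z))  # 1.0
```

The output is an "effective number of species", which is what makes the measure comparable across orders $q$.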

So, work on diversity has already got to the stage where application-driven problems are enabling advances in pure mathematics. This is a familiar dynamic in older fields of application such as physics, but I think the fact that this is already happening in the relatively new field of diversity theory is a promising sign. It suggests that aside from all the applications, the mathematics of diversity has a lot to give pure mathematics itself.

Next April, John Baez and friends are running a three-day investigative workshop on Entropy and information in biological systems at the National Institute for Mathematical and Biological Synthesis in Knoxville, Tennessee. I hope this will provide a good opportunity for deepening our understanding of the interplay between mathematics and diversity (which is closely related to entropy and information). If you’re interested in coming, you can apply online.

Marco Frasca - The Gauge Connection

Do quarks grant confinement?

In 2010 I went to Ghent in Belgium for a very nice Conference on QCD. My contribution was accepted and I had the chance to describe my view about this matter. The result was this contribution to the proceedings. The content of this paper was really revolutionary at that time, as my view about Yang-Mills theory, the mass gap and the role of quarks was almost completely out of step with the rest of the community. So, I am deeply grateful to the Organizers for this opportunity. The main ideas I put forward were

• Yang-Mills theory has an infrared trivial fixed point. The theory is trivial exactly as the scalar field theory is.
• Due to this, the gluon propagator is well-represented by a sum of weighted Yukawa propagators.
• The theory acquires a mass gap that is just the ground state of a tower of states with the spectrum of a harmonic oscillator.
• The reason why Yang-Mills theory is trivial and QCD is not in the infrared limit is the presence of quarks. Their existence moves the theory from triviality to asymptotic safety.

These results, which I had published in respectable journals, became the reason for the rejection of most of my subsequent papers by several referees, even though no serious reasons were given. But this is routine in our activity. What really annoyed me was a referee’s report claiming that my work was incorrect because the last of these statements was wrong: quark existence, the referee claimed, is not a correct motivation to claim asymptotic safety, and hence confinement, for QCD. Another offending point was the strong support my approach gave to the idea of a decoupling solution, as was emerging from lattice computations on extended volumes. There was a widespread idea that the gluon propagator should go to zero in a pure Yang-Mills theory to grant confinement and that, if not, an infrared non-trivial fixed point must exist.

Recently, my last point has been vindicated by a group that has been instrumental in shaping the history of this corner of research in physics. I have seen a couple of papers on arXiv, this and this, strongly supporting my view. The authors are Markus Höpfer, Christian Fischer and Reinhard Alkofer. They work in the conformal window, meaning that, for them, the lightest quarks are massless and chiral symmetry is exact. Indeed, in their study the quarks do not even acquire mass dynamically. But the question they answer is somewhat different: given that the theory is infrared trivial (they do not state this explicitly, as it is not yet widely recognized, even if it is a “duck” indeed), how does the trivial infrared fixed point move as the number of quarks increases? The answer is in the following wonderful graph, with $N_f$ the number of quarks (flavours):

From this picture it is evident that there exists a critical number of quarks at which the theory becomes asymptotically safe and confining. So quarks are critical to grant confinement, and Yang-Mills theory can happily be trivial. The authors took great care with all the approximations involved, solving the Dyson-Schwinger equations as usual (this has always been their main tool) with a proper truncation. From the picture one sees that if the number of flavours is below a threshold the theory is trivial, including the case of zero quarks. Otherwise, a non-trivial infrared fixed point is reached, granting confinement. The gluon propagator is then seen to move from a Yukawa form to a scaling form.

This result is really exciting and moves us a significant step forward toward the understanding of confinement. By my side, I am happy that another one of my ideas gets such a substantial confirmation.

Marco Frasca (2010). Mapping theorem and Green functions in Yang-Mills theory. PoS FacesQCD:039,2010. arXiv: 1011.3643v3

Markus Hopfer, Christian S. Fischer, & Reinhard Alkofer (2014). Running coupling in the conformal window of large-Nf QCD. arXiv: 1405.7031v1

Markus Hopfer, Christian S. Fischer, & Reinhard Alkofer (2014). Infrared behaviour of propagators and running coupling in the conformal window of QCD. arXiv: 1405.7340v1

Filed under: Particle Physics, Physics, QCD Tagged: Confinement, Mass Gap, Quantum chromodynamics, Running coupling, Triviality, Yang-Mills Propagators, Yang-Mills theory

CERN Bulletin

Marco Grippeling (1966-2014)

It was with great sadness that we learnt that our former colleague and friend Marco Grippeling was amongst the victims of the Malaysia Airlines crash.

Marco, a Melbourne-based cyber security specialist, boarded flight MH17 on his way back to Australia after spending his last days with friends and family in his home country of the Netherlands.

Marco joined CERN as a Technical Student in the PS Division in 1992.  In 1994 he moved to the LHC Division as a Staff Member, leaving for more exotic horizons in 2000.

Marco will always be remembered for his enthusiasm and joie de vivre.

Our deepest condolences go to his family and friends at this time.

His former colleagues and friends at CERN

Symmetrybreaking - Fermilab/SLAC

Helping cancer treatment hit its mark

A prototype CT scanner could improve targeting accuracy in proton therapy treatment.

A prototype medical device developed by Fermilab and Northern Illinois University could someday reduce the amount of radiation delivered to healthy tissue in a patient undergoing cancer treatment.

The device, called a proton CT scanner, would better target radiation doses to cancerous tumors during proton therapy treatment. Physicists recently started testing with beam at the CDH Proton Center in Warrenville, Illinois.

To create a custom treatment plan for each proton therapy patient, radiation oncologists currently use X-ray CT scanners to develop 3-D images of patient anatomy, including the tumor, to determine the size, shape and density of all organs and tissues in the body. To make sure all the tumor cells are irradiated to the prescribed dose, doctors often set the targeting volume to include a minimal amount of healthy tissue just outside the tumor.

Collaborators believe that the prototype proton CT, which is essentially a particle detector, will provide a more precise 3-D map of the patient anatomy. This allows doctors to more precisely target beam delivery, reducing the amount of radiation to healthy tissue during the CT process and treatment.

“The dose to the patient with this method would be lower than using X-ray CTs while getting better precision on the imaging,” says Fermilab’s Peter Wilson, associate head for engineering and support in the particle physics division.

Fermilab became involved in the project in 2011 at the request of NIU’s high-energy physics team because of the laboratory’s detector building expertise.

The project’s goal was a tall order, Wilson explains. The group wanted to build a prototype device, imaging software and computing system that could collect data from 1 billion protons in less than 10 minutes and then produce a 3-D reconstructed image of a human head, also in less than 10 minutes. To do that, they needed to create a device that could read data very quickly, since every second data from 2 million protons would be sent from the device—which detects only one proton at a time—to a computer.
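The throughput requirement above is simple arithmetic to check: at 2 million protons per second, 1 billion protons fit comfortably inside the 10-minute budget (numbers taken directly from the article):

```python
# Back-of-envelope check: how long does it take to collect data from
# 1 billion protons at a readout rate of 2 million protons per second?
protons_needed = 1_000_000_000
rate_per_second = 2_000_000

seconds = protons_needed / rate_per_second
minutes = seconds / 60
print(f"{seconds:.0f} s = {minutes:.1f} min")  # prints 500 s = 8.3 min
assert minutes < 10  # within the stated 10-minute goal
```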

NIU physicist Victor Rykalin recommended building a scintillating fiber tracker detector with silicon photomultipliers. A similar detector was used in the DZero experiment at Fermilab.

“The new prototype CT is a good example of the technical expertise of our staff in detector technology. Their expertise goes back 35 to 45 years and is really what makes it possible for us to do this,” Wilson says.

In the prototype CT, protons pass through two tracking stations, which track the particles’ trajectories in three dimensions. The protons then pass through the patient and finally through two more tracking stations before stopping in the energy detector, which is used to calculate the total energy loss through the patient. Devices called silicon photomultipliers pick up signals from the light resulting from these interactions and subsequently transmit electronic signals to a data acquisition system.

Scientists use specialized software and a high-performance computer at NIU to accurately map the proton stopping powers in each cubic millimeter of the patient. From this map, visually displayed as conventional CT slices, the physician can outline the margins, dimensions and location of the tumor.

Elements of the prototype were developed at both NIU and Fermilab and then put together at Fermilab. NIU developed the software and computing systems. The teams at Fermilab worked on the design and construction of the tracker and the electronics to read the tracker and energy measurement. The scintillator plates, fibers and trackers were also prepared at Fermilab. A group of about eight NIU students, led by NIU’s Vishnu Zutshi, helped build the detector at Fermilab.

“A project like this requires collaboration across multiple areas of expertise,” says George Coutrakon, medical physicist and co-investigator for the project at NIU. “We’ve built on others’ previous work, and in that sense, the collaboration extends beyond NIU and Fermilab.”

A version of this article was published in Fermilab Today.

Like what you see? Sign up for a free subscription to symmetry!

July 19, 2014

Jester - Resonaances

Weekend Plot: all of dark matter
To put my recent posts into a bigger perspective, here's a graph summarizing all the dark matter particles discovered so far via direct or indirect detection:

The graph shows the number of years the signal has survived vs. the inferred mass of the dark matter particle. The particle names follow the usual Particle Data Group conventions. The label's size is related to the statistical significance of the signal. The colors correspond to the Bayesian likelihood that the signal originates from dark matter, from uncertain (red) to very unlikely (blue). The masses of the discovered particles span an impressive 11 orders of magnitude, although the largest concentration is near the weak scale (this is called the WIMP miracle). If I forgot any particle for which compelling evidence exists, let me know, and I will add it to the graph.

Here are the original references for the Bulbulon, Boehmot, Collaron, CDMeson, Daemon, Cresston, Hooperon, Wenigon, Pamelon, and the mother of Bert and Ernie.

Jester - Resonaances

Follow up on BICEP
The BICEP2 collaboration claims the discovery of the primordial B-mode in the CMB at a very high confidence level. Résonaances recently reported on the Chinese whispers that cast doubt on the statistical significance of that result. They were based in part on the work of Raphael Flauger and Colin Hill, rumors of which were spreading through email and coffee-time discussions. Today Raphael gave a public seminar describing this analysis; see the slides and the video.

The familiar number r=0.2 for the CMB tensor-to-scalar ratio is based on the assumption of zero foreground contribution in the region of the sky observed by BICEP. To argue that foregrounds should not be a big effect, the BICEP paper studied several models to estimate the galactic dust emission. Of those, only the data-driven models DDM1 and DDM2 were based on actual polarization data inadvertently shared by Planck. However, even these models suggest that foregrounds are not completely negligible. For example, subtracting the foregrounds estimated via DDM2 brings the central value of r down to 0.16 or 0.12, depending on how the model is used (cross-correlation vs. auto-correlation). If, instead, the cross-correlated BICEP2 and Keck Array data are used as an input, the tensor-to-scalar ratio can easily be below 0.1, in agreement with the existing bounds from Planck and WMAP.

Raphael's message is that, according to his analysis, the foreground emissions are larger than estimated by BICEP, and that the systematic uncertainties on that estimate (due to incomplete information, modeling uncertainties, and scraping numbers from pdf slides) are also large. If that is true, the statistical significance of the primordial B-mode detection is much weaker than what is being claimed by BICEP.

In his talk, Raphael described an independent attempt, the most complete to date, to extract the foregrounds from existing data. Apart from using the same Planck polarization fraction map as BICEP, he also included the Q and U all-sky maps (the letters refer to how polarization is parameterized), and models of polarized dust emission based on HI maps (the 21cm hydrogen line emission is supposed to track the galactic dust). One reason for the discrepancy with the BICEP estimates could be that the effect of the Cosmic Infrared Background - mostly unpolarized emission from faraway galaxies - is non-negligible. The green band in the plot shows the polarized dust emission obtained from the CIB-corrected DDM2 model, compared to the original BICEP estimate (blue dashed line).

The analysis then goes on to extract the foregrounds starting from several different premises. All available datasets (polarization reconstructed via HI maps, the information scraped from Planck's existing polarization maps) seem to tell a similar story: galactic foregrounds can be large in the region of interest and the uncertainties are large. The money plot is this one:

Recall that the primordial B-mode signal should show up at moderate angular scales with l∼100 (the high-l end is dominated by non-primordial B-modes from gravitational lensing). Given the current uncertainties, the foreground emission may easily account for the entire BICEP2 signal in that region. Again, this does not prove that the tensor mode cannot be there. The story may still reach a happy ending, much like that of the discovery of accelerated expansion (where serious doubts about systematic uncertainties were also raised after the initial announcement). But the ball is in BICEP's court to convincingly demonstrate that foregrounds are under control.

Until that happens, I think their result does not stand.

Jester - Resonaances

Another one bites the dust...
...though it's not BICEP2 this time :) This is a long overdue update on the forward-backward asymmetry of top quark production.
Recall that, in a collision of a quark and an anti-quark producing a top quark together with its antiparticle, the top quark is more often ejected in the direction of the incoming quark (as opposed to the anti-quark). This effect is most easily studied at the Tevatron, which collided protons with antiprotons, so that the directions of the quark and of the anti-quark could be easily inferred. Indeed, the Tevatron experiments observed the asymmetry at a high confidence level. In the leading-order approximation, the Standard Model predicts zero asymmetry, which boils down to the fact that the gluons mediating the production process couple with the same strength to left- and right-handed quark polarizations. Taking into account quantum corrections at 1 loop leads to a small but non-zero asymmetry.
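Concretely, the asymmetry is a counting observable: A_FB = (N_F - N_B)/(N_F + N_B), where "forward" means the top is emitted along the incoming quark direction. A toy sketch with illustrative numbers (not real data):

```python
import numpy as np

def forward_backward_asymmetry(delta_y):
    """A_FB = (N_forward - N_backward) / (N_forward + N_backward),
    where forward means a positive rapidity difference delta_y between
    the top and the incoming quark direction."""
    delta_y = np.asarray(delta_y)
    n_f = np.sum(delta_y > 0)
    n_b = np.sum(delta_y < 0)
    return (n_f - n_b) / (n_f + n_b)

# Toy events: a 55%/45% forward/backward split gives A_FB close to 0.10,
# i.e. roughly the size once claimed at the Tevatron.
rng = np.random.default_rng(0)
events = rng.choice([1.0, -1.0], size=100_000, p=[0.55, 0.45])
print(forward_backward_asymmetry(events))  # close to 0.10
```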
Intriguingly, the asymmetry measured at the Tevatron appeared to be large, of order 20%, significantly more than the value predicted by the Standard Model loop effects. On top of this, the distribution of the asymmetry as a function of the top-pair invariant mass, and the angular distribution of leptons from top quark decay, deviated strongly from the Standard Model expectation. All in all, the ttbar forward-backward anomaly has been considered, for many years, one of our best hints for physics beyond the Standard Model. The asymmetry could be interpreted, for example, as being due to new heavy resonances with the quantum numbers of the gluon, which are predicted by models where quarks are composite objects. However, the story has been getting less and less exciting lately. First of all, no other top quark observables (e.g. the total production cross section) showed any deviations, either at the Tevatron or at the LHC. Another worry was that the related top asymmetry was not observed at the LHC. At the same time, the Tevatron numbers have been evolving in a worrisome direction: as the Standard Model computation was being refined the prediction was going up; on the other hand, the experimental value was steadily going down as more data were being added. Today we are close to the point where the Standard Model and experiment finally meet...

The final straw is two recent updates from the Tevatron's D0 experiment. Earlier this year, D0 published a measurement of the forward-backward asymmetry of the direction of the leptons from top quark decays. The top quark sometimes decays leptonically, to a b-quark, a neutrino, and a charged lepton (e+, μ+). In this case, the momentum of the lepton is to some extent correlated with that of the parent top, thus the top quark asymmetry may come together with a lepton asymmetry (although some new physics models affect the top and lepton asymmetries in completely different ways). The previous D0 measurement showed a large, more than 3 sigma, excess in that observable. The new refined analysis using the full dataset reaches a different conclusion: the asymmetry is Al=(4.2 ± 2.4)%, in good agreement with the Standard Model. As can be seen in the picture, none of the CDF and D0 measurements of the lepton asymmetry in several final states shows any anomaly at this point. Then came the D0 update of the regular ttbar forward-backward asymmetry in the semi-leptonic channel. Same story here: the number went down from 20% to Att=(10.6 ± 3.0)%, compared to the Standard Model prediction of 9%. CDF got a slightly larger number, Att=(16.4 ± 4.5)%, but taken together the results are not significantly above the Standard Model prediction of Att=9%.
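As a sanity check on that last statement, a naive inverse-variance combination of the two quoted numbers (ignoring correlations, so not an official CDF+D0 combination) lands only about 1.4 sigma above the 9% prediction:

```python
import math

def combine(measurements):
    """Inverse-variance weighted average of (value, error) pairs,
    ignoring correlations - a naive combination, nothing official."""
    weights = [1.0 / err**2 for _, err in measurements]
    mean = sum(w * val for (val, _), w in zip(measurements, weights)) / sum(weights)
    err = math.sqrt(1.0 / sum(weights))
    return mean, err

# Numbers quoted in the text (in percent): D0 10.6 +- 3.0, CDF 16.4 +- 4.5.
att, att_err = combine([(10.6, 3.0), (16.4, 4.5)])
pull = (att - 9.0) / att_err  # deviation from the SM prediction of 9%
print(f"combined Att = {att:.1f} +- {att_err:.1f} %, {pull:.1f} sigma above SM")
```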

So, all the current data on the top quark, both from the LHC and from the Tevatron,  are perfectly consistent with the Standard Model predictions. There may be new physics somewhere at the weak scale, but we're not gonna pin it down by measuring the top asymmetry. This one is a dead parrot:

Graphics borrowed from this talk

Jester - Resonaances

Weekend Plot: dream on
To force myself into a more regular blogging lifestyle, I thought it would be good to have a semi-regular column.  So I'm kicking off with the Weekend Plot series (any resemblance to Tommaso's Plot of the Week is purely coincidental). You understand the idea: it's weekend, people relax, drink, enjoy... and for all the nerds there's at least a plot.

For a starter, a plot from the LHC Higgs Cross Section Working Group:

It shows the Higgs boson production cross section in proton-proton collisions as a function of the center-of-mass energy. Notably, the plot extends as far as our imagination can stretch, that is, up to a 100 TeV collider. At 100 TeV the cross section is 40 times larger than at the 8 TeV LHC. So far we have produced about 1 million Higgs bosons at the LHC, and we'll probably make 20 times more in this decade. With a 100 TeV collider, 3 inverse attobarns of luminosity, and 4 detectors (dream on) we could produce 10 billion Higgs bosons and really squeeze the shit out of it. For Higgs production in association with a top-antitop quark pair the increase is even more dramatic: between 8 and 100 TeV the rate increases by a factor of 300, and ttH is upgraded to the 3rd largest production mode. Double Higgs production increases by a similar factor and becomes fairly common. So these theoretically interesting production processes will be a piece of cake in the asymptotic future.
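The 10 billion figure is simple arithmetic: cross section times integrated luminosity times number of detectors. Assuming a ballpark total Higgs cross section of about 22 pb at 8 TeV (my assumed input, not a number from the plot), the factor of 40 gives:

```python
# Rough yield estimate behind the "10 billion Higgs bosons" claim.
sigma_8tev_pb = 22.0                  # assumed ~22 pb total at the 8 TeV LHC
sigma_100tev_pb = 40 * sigma_8tev_pb  # factor 40 from the plot, ~0.9 nb
luminosity_pb = 3e6                   # 3 inverse attobarns = 3e6 inverse pb
detectors = 4

n_higgs = sigma_100tev_pb * luminosity_pb * detectors
print(f"{n_higgs:.1e}")  # prints 1.1e+10, i.e. about 10 billion
```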

Wouldn't it be good?

July 18, 2014

Geraint Lewis - Cosmic Horizons

Resolving the mass--anisotropy degeneracy of the spherically symmetric Jeans equation
I am exhausted after a month of travel, but am now back in a sunny, but cool, Sydney. It feels especially chilly as part of my trip included Death Valley, where the temperatures were pushing 50 degrees C.

I face a couple of weeks of catch-up, especially with regards to some blog posts on my recent papers. Here, I am going to cheat and present two papers at once. Both papers are by soon-to-be-newly-minted Doctor, Foivos Diakogiannis. I hope you won't mind, as these papers are Part I and II of the same piece of work.

The fact that this work is spread over two papers tells you that it's a long and winding saga, but it's cool stuff as it does something that can really advance science - take an idea from one area and use it somewhere else.

The question the paper looks at sounds, on the face of it, rather simple. Imagine you have a ball of stars, something like this, a globular cluster:
You can see where the stars are. Imagine that you can also measure the speeds of the stars. So, the question is - what is the distribution of mass in this ball of stars? It might sound obvious, because isn't the mass just the stars? Well, you have to be careful: we only see the brightest stars, and the fainter stars are harder to see. Also, there may be dark matter in there.

So, we are faced with a dynamics problem, which means we want to find the forces; the force acting here is, of course, gravity, and so mapping the forces gives you the mass. And forces produce accelerations, so all we need is to measure these and... oh... hang on. The Doppler shift gives us the velocity, not the acceleration, and so we would have to wait (a long time) to measure accelerations (i.e. see the change of velocity over time). As they say in the old country, "Bottom".

And this has dogged astronomy for more than one hundred years. But there are some equations (which I think are lovely, but if you are not a maths fan, they may give you a minor nightmare) called the Jeans equations. I won't pop them here, as there are lots of bits to them and it would take a blog post to explain them in detail.

But there are problems (aren't there always) and that's the assumptions that are made, and the key problem is degeneracies.

Degeneracies are a serious pain in science. Imagine you have measured a value in an experiment; let's say it's the speed of a planet (there will be an error associated with that measurement). Now, your mathematical laws make a prediction for the speed of the planet, but you find that the maths does not give you a single answer, but multiple answers that equally well explain the measurements. What's the right answer? You need some new (or better) observations to "break the degeneracies".

And degeneracies dog dynamical work. There is a traditional approach to modelling the mass distribution through the Jeans equations, where certain assumptions are made, but you are often worried about how justified your assumptions are. While we cannot remove all the degeneracies, we can try and reduce their impact. How? By letting the data point the way.

By this point, you may look a little like this

OK. So, there are parts to the Jeans equations where people traditionally put in functions to describe what something is doing. As an example, we might choose a density that has a mathematical form like
that tells us how the density changes with radius (those in the know will recognise this as the well-known Navarro-Frenk-White profile). Now, what if your density doesn't look like this? Then you are going to get the wrong answers because you assumed it.

So, what you want to do is let the data choose the function for you. But how is this possible? How do you get "data" to pick the mathematical form for something like density? This is where Foivos had an incredible insight and called on a completely different topic altogether, namely Computer-Aided Design.

For designing things on a computer, you need curves, curves that you can bend and stretch into a range of arbitrary shapes, and it would be painful to work out the mathematical form of all of the potential curves you need. So, you don't bother. You use extremely flexible curves known as splines. I've always loved splines. They are so simple, but so versatile. You specify some points, and you get a nice smooth curve. I urge you to have a look at them.
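To see how little information specifies one of these curves, here is a minimal sketch (my own toy knots and coefficients, not the paper's profiles) of a cubic B-spline built with scipy: a short knot vector plus seven coefficients give a smooth, flexible curve over [0, 4].

```python
import numpy as np
from scipy.interpolate import BSpline

k = 3  # cubic
# Clamped knot vector (repeated end knots pin the curve to the end
# coefficients); len(t) must equal len(c) + k + 1.
t = np.array([0, 0, 0, 0, 1, 2, 3, 4, 4, 4, 4], dtype=float)
c = np.array([0.0, 1.5, -0.5, 2.0, 1.0, 0.0, 0.5])  # control coefficients

spline = BSpline(t, c, k)
x = np.linspace(0, 4, 9)
print(np.round(spline(x), 3))  # a smooth curve through just 7 numbers
```

Bending the curve is then just a matter of moving a handful of coefficients, which is exactly the kind of freedom you want when the data, not a formula, should decide the shape.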

For this work, we use b-splines and construct the profiles we want from some basic curves. Here's an example from the paper:
We then plug this flexible curve into the mathematics of dynamics. For this work, we test the approach by creating fake data from a model, and then try and recover the model from the data. And it works!
Although it is not that simple. A lot of care and thought has to go into just how you construct the spline (this is the focus of the second paper), but that's now been done. We now have the mathematics we need to really crack the dynamics of globular clusters, dwarf galaxies and even our Milky Way.

There's a lot more to write on this, but we'll wait for the results to start flowing. Watch this space!

Well done Foivos! - not only on the paper, but also for finishing his PhD, getting a postdoctoral position at ICRAR, and getting married :)

Resolving the mass--anisotropy degeneracy of the spherically symmetric Jeans equation I: theoretical foundation

A widely employed method for estimating the mass of stellar systems with apparent spherical symmetry is dynamical modelling using the spherically symmetric Jeans equation. Unfortunately this approach suffers from a degeneracy between the assumed mass density and the second order velocity moments. This degeneracy can lead to significantly different predictions for the mass content of the system under investigation, and thus poses a barrier for accurate estimates of the dark matter content of astrophysical systems. In a series of papers we describe an algorithm that removes this degeneracy and allows for unbiased mass estimates of systems of constant or variable mass-to-light ratio. The present contribution sets the theoretical foundation of the method that reconstructs a unique kinematic profile for some assumed free functional form of the mass density. The essence of our method lies in using flexible B-spline functions for the representation of the radial velocity dispersion in the spherically symmetric Jeans equation. We demonstrate our algorithm through an application to synthetic data for the case of an isotropic King model with fixed mass-to-light ratio, recovering excellent fits of theoretical functions to observables and a unique solution. The mass-anisotropy degeneracy is removed to the extent that, for an assumed functional form of the potential and mass density pair (Φ,ρ), and a given set of line-of-sight velocity dispersion observables σ2los, we recover a unique profile for σ2rr and σ2tt. Our algorithm is simple, easy to apply and provides an efficient means to reconstruct the kinematic profile.

and

Resolving the mass--anisotropy degeneracy of the spherically symmetric Jeans equation II: optimum smoothing and model validation

The spherical Jeans equation is widely used to estimate the mass content of stellar systems with apparent spherical symmetry. However, this method suffers from a degeneracy between the assumed mass density and the kinematic anisotropy profile, β(r). In a previous work, we laid the theoretical foundations for an algorithm that combines smoothing B-splines with equations from dynamics to remove this degeneracy. Specifically, our method reconstructs a unique kinematic profile of σ2rr and σ2tt for an assumed free functional form of the potential and mass density (Φ,ρ) and given a set of observed line-of-sight velocity dispersion measurements, σ2los. In Paper I (submitted to MNRAS: MN-14-0101-MJ) we demonstrated the efficiency of our algorithm with a very simple example and we commented on the need for optimum smoothing of the B-spline representation; this is in order to avoid unphysical variational behaviour when we have large uncertainty in our data. In the current contribution we present a process of finding the optimum smoothing for a given data set by using information on the behaviour of known ideal theoretical models. Markov Chain Monte Carlo methods are used to explore the degeneracy in the dynamical modelling process. We validate our model through applications to synthetic data for systems with constant or variable mass-to-light ratio Υ. In all cases we recover excellent fits of theoretical functions to observables and unique solutions. Our algorithm is a robust method for the removal of the mass-anisotropy degeneracy of the spherically symmetric Jeans equation for an assumed functional form of the mass density.

Sean Carroll - Preposterous Universe

Galaxies That Are Too Big To Fail, But Fail Anyway

Dark matter exists, but there is still a lot we don’t know about it. Presumably it’s some kind of particle, but we don’t know how massive it is, what forces it interacts with, or how it was produced. On the other hand, there’s actually a lot we do know about the dark matter. We know how much of it there is; we know roughly where it is; we know that it’s “cold,” meaning that the average particle’s velocity is much less than the speed of light; and we know that dark matter particles don’t interact very strongly with each other. Which is quite a bit of knowledge, when you think about it.

Fortunately, astronomers are pushing forward to study how dark matter behaves as it’s scattered through the universe, and the results are interesting. We start with a very basic idea: that dark matter is cold and completely non-interacting, or at least has interactions (the strength with which dark matter particles scatter off of each other) that are too small to make any noticeable difference. This is a well-defined and predictive model: ΛCDM, which includes the cosmological constant (Λ) as well as the cold dark matter (CDM). We can compare astronomical observations to ΛCDM predictions to see if we’re on the right track.

At first blush, we are very much on the right track. Over and over again, new observations come in that match the predictions of ΛCDM. But there are still a few anomalies that bug us, especially on relatively small (galaxy-sized) scales.

One such anomaly is the “too big to fail” problem. The idea here is that we can use ΛCDM to make quantitative predictions concerning how many galaxies there should be with different masses. For example, the Milky Way is quite a big galaxy, and it has smaller satellites like the Magellanic Clouds. In ΛCDM we can predict how many such satellites there should be, and how massive they should be. For a long time we’ve known that the actual number of satellites we observe is quite a bit smaller than the number predicted — that’s the “missing satellites” problem. But this has a possible solution: we only observe satellite galaxies by seeing stars and gas in them, and maybe the halos of dark matter that would ordinarily support such galaxies get stripped of their stars and gas by interacting with the host galaxy. The too big to fail problem tries to sharpen the issue, by pointing out that some of the predicted galaxies are just so massive that there’s no way they could not have visible stars. Or, put another way: the Milky Way does have some satellites, as do other galaxies; but when we examine these smaller galaxies, they seem to have a lot less dark matter than the simulations would predict.

Still, any time you are concentrating on galaxies that are satellites of other galaxies, you rightly worry that complicated interactions between messy atoms and photons are getting in the way of the pristine elegance of the non-interacting dark matter. So we’d like to check that this purported problem exists even out “in the field,” with lonely galaxies far away from big monsters like the Milky Way.

A new paper claims that yes, there is a too-big-to-fail problem even for galaxies in the field.

Is there a “too big to fail” problem in the field?
Emmanouil Papastergis, Riccardo Giovanelli, Martha P. Haynes, Francesco Shankar

We use the Arecibo Legacy Fast ALFA (ALFALFA) 21cm survey to measure the number density of galaxies as a function of their rotational velocity, V_rot,HI (as inferred from the width of their 21cm emission line). Based on the measured velocity function we statistically connect galaxies with their host halos, via abundance matching. In a LCDM cosmology, low-velocity galaxies are expected to be hosted by halos that are significantly more massive than indicated by the measured galactic velocity; allowing lower mass halos to host ALFALFA galaxies would result in a vast overestimate of their number counts. We then seek observational verification of this predicted trend, by analyzing the kinematics of a literature sample of field dwarf galaxies. We find that galaxies with V_rot,HI < 25 km/s are kinematically incompatible with their predicted LCDM host halos, in the sense that hosts are too massive to be accommodated within the measured galactic rotation curves. This issue is analogous to the "too big to fail" problem faced by the bright satellites of the Milky Way, but here it concerns extreme dwarf galaxies in the field. Consequently, solutions based on satellite-specific processes are not applicable in this context. Our result confirms the findings of previous studies based on optical survey data, and addresses a number of observational systematics present in these works. Furthermore, we point out the assumptions and uncertainties that could strongly affect our conclusions. We show that the two most important among them, namely baryonic effects on the abundances and rotation curves of halos, do not seem capable of resolving the reported discrepancy.

Here is the money plot from the paper:

The horizontal axis is the maximum circular velocity, basically telling us the mass of the halo; the vertical axis is the observed velocity of hydrogen in the galaxy. The blue line is the prediction from ΛCDM, while the dots are observed galaxies. Now, you might think that the blue line is just a very crappy fit to the data overall. But that’s okay; the points represent upper limits in the horizontal direction, so points that lie below/to the right of the curve are fine. It’s a statistical prediction: ΛCDM is predicting how many galaxies we have at each mass, even if we don’t think we can confidently measure the mass of each individual galaxy. What we see, however, is that there are a bunch of points in the bottom left corner that are above the line. ΛCDM predicts that even the smallest galaxies in this sample should still be relatively massive (have a lot of dark matter), but that’s not what we see.
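The abundance-matching step behind that blue line is easy to sketch: rank halos by circular velocity and galaxies by rotation velocity, then pair them at equal cumulative number density. The power-law number densities below are invented stand-ins, not the ALFALFA measurements or the simulated halo velocity function, but they reproduce the qualitative trend the paper tests: a slowly rotating dwarf gets matched to a halo with a much higher circular velocity.

```python
# Toy cumulative number densities (per Mpc^3) as monotonically
# decreasing functions of velocity. Normalizations and slopes are
# made up for illustration only.
def n_halo(v_max):
    """Number density of halos with circular velocity > v_max (km/s)."""
    return 10.0 * (v_max / 100.0) ** -3

def n_gal(v_rot):
    """Number density of galaxies with HI rotation velocity > v_rot (km/s)."""
    return 10.0 * (v_rot / 100.0) ** -1.5

def matched_vmax(v_rot):
    """Abundance matching: the halo velocity at which
    n_halo(v_max) = n_gal(v_rot), inverted analytically for the
    toy power law above."""
    return 100.0 * (n_gal(v_rot) / 10.0) ** (-1.0 / 3.0)

# Because faint galaxies are rarer than small halos are in LCDM,
# a 25 km/s dwarf is matched to a halo spinning twice as fast:
v_host = matched_vmax(25.0)   # 50.0 km/s in this toy model
```

The observed rotation curves sitting above the predicted relation then say exactly what the toy model makes explicit: the matched halos are "too big" for the measured velocities.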

If it holds up, this result is really intriguing. ΛCDM is a nice, simple starting point for a theory of dark matter, but it’s also kind of boring. From a physicist’s point of view, it would be much more fun if dark matter particles interacted noticeably with each other. We have plenty of ideas, including some of my favorites like dark photons and dark atoms. It is very tempting to think that observed deviations from the predictions of ΛCDM are due to some interesting new physics in the dark sector.

Which is why, of course, we should be especially skeptical. Always train your doubt most strongly on those ideas that you really want to be true. Fortunately there is plenty more to be done in terms of understanding the distribution of galaxies and dark matter, so this is a very solvable problem — and a great opportunity for learning something profound about most of the matter in the universe.

Symmetrybreaking - Fermilab/SLAC

Scientists set aside rivalry to preserve knowledge

Scientists from two experiments have banded together to create a single comprehensive record of their work for scientific posterity.

Imagine Argentina and Germany, the 2014 World Cup finalists, meeting after the final match to write down all of their strategies, secrets and training techniques to give to the world of soccer.

This will never happen in the world of sports, but it just happened in the world of particle physics, where the goal of solving the puzzles of the universe belongs to all.

Two independent research teams from opposite sides of the Pacific Ocean that have been in friendly competition to discover why there is more matter than antimatter in the universe have just released a joint scientific memoir, The Physics of the B Factories.

The 900-page, three-inch-thick tome documents the experiments—BaBar, at the Department of Energy’s SLAC National Accelerator Laboratory in California, and Belle, at KEK in Tsukuba, Japan—as though they were the subject of a paper for a journal.

The effort took six years and involved thousands of scientists from all over the world.

“Producing something like this is a massive undertaking but brings a lot of value to the community,” says Tim Nelson, a physicist at SLAC who was not involved in either experiment. “It’s a thorough summary of the B-factory projects, their history and their physics results. But more than that, it is an encyclopedia of elegant techniques in reconstruction and data analysis that are broadly applicable in high energy physics. It makes an excellent reference from which nearly any student can learn something valuable.”

BaBar and Belle were built to find the same thing: CP violation, a difference in the way matter and antimatter behave that contributes to the preponderance of matter in the universe. And they went about their task in essentially the same way: They collided electrons and their antimatter opposites, positrons, to create pairs of bottom and anti-bottom quarks. So many pairs, in fact, that the experiments became known as B factories—thus, the book title.

Both experiments were highly successful in their search, though what they found can’t account for the entire discrepancy. The experiments also discovered several new particles and studied rare decays.

In the process of finding CP violation they verified a theoretical model, called the CKM matrix, which describes certain types of particle decays. In 2008, Japanese theorists Makoto Kobayashi and Toshihide Maskawa—the “K” and the “M” of CKM—shared the Nobel Prize for their thus-verified model. The two physicists sent BaBar and Belle a thank-you note.

Meanwhile, Francois Le Diberder, the BaBar spokesperson at the time, had an idea.

“It’s Francois’ fault, really,” says Adrian Bevan, a physicist at Queen Mary University of London and long-time member of the BaBar collaboration. “In 2008 he said, ‘We should document the great work in the collaboration.’ The idea just resonated with a few of us. And then Francois said, ‘Let’s invite KEK, as it would be much better to document both experiments.’“

Bevan and a few like-minded BaBar members, such as Soeren Prell from Iowa State University, contacted their Belle counterparts and found them receptive to the idea. They recruited more than 170 physicists to help and spent six years planning, writing, editing and revising. Almost 2000 names appear in the list of contributors; five people, including Bevan, served as editors. Nobel laureates Kobayashi and Maskawa provided the foreword.

The book has many uses, according to Bevan: It’s a guide to analyzing Belle and BaBar data; a reference for other experiments; a teaching tool. Above all, it’s a way to keep the data relevant. Instead of becoming like obsolete alphabets for dead languages, as has happened with many old experiments, BaBar and Belle data can continue to be used for new discoveries. “This, along with long term data access projects, changes the game for archiving data,” Bevan says.

In what may or may not have been a coincidence, the completion of the manuscript coincided with the 50th anniversary of the discovery of CP violation. At a workshop organized to commemorate the anniversary, Bevan and his co-editors presented three specially bound copies of the book to three giants of the field: Nobel laureate James Cronin (pictured above, accepting his copy), one of the physicists who made that first discovery 50 years before, and old friends Kobayashi, who accepted in person, and Maskawa, who sent a representative.

Bevan jokes that Le Diberder cost them six years of hard labor, but the instigator of the project is unrepentant.

“Indeed, the idea is my fault,” Le Diberder, who is now at France’s Linear Accelerator Laboratory, says. “But the project itself got started thanks to Adrian and Soeren, who stepped forward to steward the ship. Once they gathered their impressive team they no longer needed my help except for behind-the-scenes tasks. They had the project well in hand.”

Bevan isn’t sure about the “well in hand” characterization. “It took a few years longer than we thought it would because we didn’t realize the scope of the thing,” Bevan says. “But the end result is spectacular.

“It’s War and Peace for physicists.”

Like what you see? Sign up for a free subscription to symmetry!

CERN Bulletin

New procedure for declaring changes in family and personal situation

On taking up their appointment, Members of the Personnel (employed and associated) are required to provide official documents as evidence of their family situation. Any subsequent change in their personal situation, or that of their family members, must be declared in writing to the Organization within 30 calendar days.

As part of their efforts to simplify procedures, the Administrative Processes Section (DG-RPC-PA) and the HR and GS Departments have produced a new EDH form entitled “Change of family and personal situation”, which must be used to declare the following changes:

• birth or adoption of a child;
• marriage;
• divorce;
• entry into a civil partnership officially registered in a Member State;
• dissolution of such a partnership;
• change of name;
• change of nationality or new nationality.

Members of the Personnel must create the form themselves and provide the information required for the type of declaration concerned, indicating, if applicable, any benefit from an external source that they or their family members are entitled to claim that is of the same nature as a benefit provided for in the Organization’s Staff Regulations. They must also attach a scan of the original certificate corresponding to their declaration.

The form is sent automatically to the relevant Departmental Secretariat, or to the Users Office in the case of Users, Cooperation Associates and Scientific Associates, and is then handled by the services within the HR Department. The Member of the Personnel receives an EDH notification when the change in personal status has been recorded.

The information recorded remains confidential and can be accessed only by the authorised administrative services.

N.B.: If allowances and indemnities paid regularly are affected, the next payslip constitutes a contract amendment. In accordance with Article R II 1.15 of the Staff Regulations, Members of the Personnel are deemed to have accepted a contract amendment if they have not informed the Organization to the contrary within 60 calendar days of receiving it.

Further information can be found on the “Change of family situation” page of the Admin e-guide: https://admin-eguide.web.cern.ch/admin-eguide/famille/proc_change_famille.asp

Any questions about the procedure should be addressed to your Departmental Secretariat or the Users Office.

If you encounter technical difficulties with this new EDH document, please e-mail service-desk@cern.ch, explaining the problem.

The Administrative Processes Section (DG-RPC-PA)

astrobites - astro-ph reader's digest

Star Formation on a String

Title: A Thirty Kiloparsec Chain of “Beads on a String” Star Formation Between Two Merging Early Type Galaxies in the Core of a Strong-Lensing Galaxy Cluster
Authors: Grant R. Tremblay, Michael D. Gladders, Stefi A. Baum, Christopher P. O’Dea, Matthew B. Bayliss, Kevin C. Cooke, Håkon Dahle, Timothy A. Davis, Michael Florian, Jane R. Rigby, Keren Sharon, Emmaris Soto, Eva Wuyts
First Author’s Institution: European Southern Observatory, Germany
Paper Status: Accepted for Publication in ApJ Letters

Figure 1. Left: WFC3 image of a galaxy cluster lensing background galaxies. Right: A close up of the cluster, revealed to be two interacting galaxies and a chain of NUV emission indicating star formation.

Take a look at all that gorgeous science in Figure 1! No really, look: that’s a lot of science in one image. Okay, what is it you’re looking at? First, those arcs labeled in the image on the left are galaxies at high redshift being gravitationally lensed by the cluster in the middle (which has the wonderful name SDSS J1531+3414). Very briefly, gravitational lensing is when a massive object (like a galaxy cluster) bends the light of a background object (like these high-redshift galaxies), fortuitously focusing the light towards the observer. It’s a chance geometric alignment that lets us learn about distant, high-redshift objects. The lensing was the impetus for these observations, taken by Hubble’s Wide Field Camera 3 (WFC3) in four different filters across the near ultraviolet (NUV, shown in blue), optical (two filters, shown in green and orange), and near infrared (yellow).

But what fascinated the authors of this paper is something entirely different happening around that central cluster. The image on the right is a close-up of the cluster, with no lensing involved at all. The cluster is actually two elliptical galaxies in the process of merging together, accompanied by a chain of bright NUV emission. NUV emission is associated with ongoing star formation, which is rarely seen in elliptical galaxies (ellipticals are old, well-evolved galaxies, which means they’re made mostly of older stellar populations and lack significant star formation; they’re often called “red and dead” for this reason). Star formation is, however, expected around merging galaxies (even ellipticals) as gas gets stirred up, and the striking “beads on a string” morphology is often seen in spiral galaxy arms and in arms stretching between interacting galaxies. But the “beads” shape is hard to explain here, mostly because of the orientation (look how it’s not actually between the galaxies, but off to one side) and the fact that this is possibly the first time it has been observed around giant elliptical galaxies.

Figure 2. Left: SDSS spectrum of the central galaxies, where all spectral features appear at uniform positions–no differential redshift is evident. Right: Follow-up observations of the central galaxies (one in red and one in green) with NOT. Here a small offset is seen, on the order of ~280 km/sec, which is small given the overall redshift of z=0.335.

So what’s going on in this cluster? First, the authors made sure the central two galaxies are actually interacting, and that the star formation is also related. It’s always important to remember that just because two objects appear close together in an image doesn’t necessarily mean they’re close enough to interact. Space is three dimensional, while images show us only 2D representations. Luckily, these targets all have spectroscopy from the Sloan Digital Sky Survey (SDSS), which measures a few different absorption lines and gives the same redshift for all of the components: the two interacting galaxies, and the star formation regions (see Figure 2). Furthermore the authors have follow-up spectroscopy from the Nordic Optical Telescope (NOT), which confirms the SDSS results. So they’re definitely all part of one big, interacting system.
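The kinematic check in Figure 2 rests on a standard conversion: for two objects at nearly the same redshift, the line-of-sight velocity difference is Δv ≈ c·Δz/(1+z). The sketch below just applies that formula; the ~280 km/s and z = 0.335 values come from the figure caption, and the function name is ours.

```python
C_KM_S = 299792.458  # speed of light in km/s

def velocity_offset(z1, z2):
    """Line-of-sight velocity difference (km/s) between two objects
    at close redshifts z1 and z2, measured in the frame of the first."""
    return C_KM_S * (z2 - z1) / (1.0 + z1)

# The ~280 km/s offset seen with NOT at z = 0.335 corresponds to a
# redshift difference of only about 1.2e-3:
dz = 280.0 * (1.0 + 0.335) / C_KM_S
```

An offset this small compared to the systemic redshift is why the authors conclude the two galaxies and the star-forming chain are one interacting system rather than a chance projection.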

Hα (the 3-2 transition of hydrogen) indicates ongoing star formation, so the authors measure the Hα luminosity of the NUV-bright regions to calculate a star formation rate (SFR). Extinction due to dust and various assumptions underlying the calculation mean the exact SFR is difficult to pin down, but should be between ~5-10 solar masses per year. From that number, it’s possible to estimate the molecular gas mass in the regions. This estimate basically says that if you know how fast stars are produced (the SFR), then you know roughly how much fuel is around (fuel being the cold gas). This number turns out to be about 0.5-2.0 × 10^10 solar masses. The authors tried to verify this observationally by observing the CO(1-0) transition (a tracer of cold molecular gas), but received a null detection. That’s okay, as this still puts an upper limit on the gas of 1.0 × 10^10 solar masses, which is both within their uncertainties and a reasonable amount of cold gas, given the mass of the central galaxy pair (but for more information on gas in elliptical galaxies, see this astrobite!).
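The Hα-to-SFR step is commonly done with the Kennicutt (1998) calibration, SFR [M☉/yr] ≈ 7.9 × 10⁻⁴² L(Hα) [erg/s]; the gas-mass step below inverts an assumed gas depletion time, which is a simplified stand-in for whatever star-formation-law scaling the authors actually used. The input luminosity and the 2 Gyr depletion time are illustrative choices, not numbers from the paper.

```python
def sfr_from_halpha(l_halpha_erg_s):
    """Star formation rate (solar masses/yr) from Halpha luminosity
    via the Kennicutt (1998) calibration."""
    return 7.9e-42 * l_halpha_erg_s

def gas_mass_from_sfr(sfr_msun_yr, t_dep_yr=2.0e9):
    """Rough cold-gas reservoir (solar masses) implied by an SFR,
    assuming the gas would be consumed over a depletion time t_dep.
    The 2 Gyr default is an illustrative assumption."""
    return sfr_msun_yr * t_dep_yr

sfr = sfr_from_halpha(1.0e42)     # ~7.9 Msun/yr, inside the quoted 5-10 range
m_gas = gas_mass_from_sfr(sfr)    # ~1.6e10 Msun, inside 0.5-2.0e10
```

Numbers in this ballpark show why the CO(1-0) upper limit of 1.0 × 10^10 solar masses is still consistent with the SFR-based estimate.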

The point is that there’s definitely a lot of star formation happening around these galaxies, and while star formation is expected around mergers, it’s not clear that this particular pattern of star formation has ever been seen around giant ellipticals before. The authors suggest that’s because this is a short-lived phenomenon, and encourage more observations. Specifically, they point out that Gemini GMOS observations already taken will answer questions about gas kinematics, that ALMA has the resolution to ascertain SFRs and molecular gas masses for the individual “beads” of star formation, and that Chandra could answer questions about why the star formation is happening off-center from the interacting galaxies. If the gas is condensing because it’s been shocked, that will show up in X-ray observations, but it would be expected between the galaxies, not off to the side as in this case. Maybe some viscous drag is causing a separation between the gas and the stars? There’s clearly a lot to learn from this system, so keep an eye out for follow-up work.

CERN Bulletin

Meeting staff representatives of the European Agencies
The AASC (Assembly of Agency Staff Committee) held its 27th Meeting of the specialized European Agencies on 26 and 27 May on the premises of the OHIM (Office for Harmonization in the Internal Market) in Alicante, Spain. Two representatives of the CERN Staff Association, in charge of External Relations, attended as observers.

This participation is a useful complement to the regular contacts we have with FICSA (Federation of International Civil Servants' Associations), which groups the staff associations of the UN Agencies, and to the annual CSAIO conferences (Conference of Staff Associations of International Organizations), where each autumn representatives of international organizations based in Europe meet to discuss themes of common interest, the better to promote and defend the rights of international civil servants. All these meetings allow us to remain informed on items that are directly or indirectly related to the employment and social conditions of our colleagues in other international and European organizations.

The AASC includes representatives of 35 specialized Agencies of the European Union. Meetings such as the one in Alicante provide an opportunity to discuss the difficulties that the staff of these Agencies encounter in different areas, such as health insurance, recognition of the activities of staff representatives in each Agency, attacks by Member States on social and employment conditions, or the lack of coherence between the different Agencies. These meetings are also an ideal forum for the exchange of information, and an opportunity to define common positions and coordinate joint actions. The need to encourage the activities of staff representation, in order to create an effective counterweight to the European administration, was stressed.

In Alicante, on the morning of the first day, the discussions concerned the recent decisions of the European Commission in Brussels on the reform of the Statute of the European public service and its impact on the Agencies. Indeed, its implementation is complicated, if not impossible, and is often made by analogy with that of the Commission, for example for social conditions or pensions. During the afternoon session a communication consultant spoke on the theme "facilitating communication in large groups". Based on this presentation, the discussions the next day took place in several small parallel workshops, each on a particular theme, such as recruitment, contract renewals, flexitime, and the importance of staff representation and its means of action within the Agencies. Many interesting ideas came out of these workshops, whose main purpose was to stimulate the active participation of those present at gatherings with a large number of participants, such as the one in Alicante, where there were fifty staff representatives.

An essential point to take away from this meeting is that the implementation of the recent reform of the Statute of the European civil service is rather unfavourable for the staff, especially for new recruits. This fact has given rise to reactions from staff representatives in each Agency. To improve the social dialogue, the AASC Secretariat was given the task of preparing a resolution calling on the Commission and the Administrations of the Agencies to involve staff representatives from an early stage in the drafting of proposals for changes to employment conditions. This resolution was sent to E.U. leaders in early July. A follow-up will take place at the next AASC Meeting in the autumn.